Category Archives: PDAs

Early iPhone Rumours

[The Lar.me/2ke link originally pointed to Mike Davidson’s 2005 piece. More explanations here.]

[Update, a bit later… Added some thoughts, links, and tags…]

While listening to the Critical Path podcast on 5by5 with Asymco’s Horace Dediu, I got stuck on Dediu’s comment that there weren’t iPhone rumours when Google acquired Android. After a quick search, I ended up on this 2005 piece by Mike Davidson (written eight months before the Google purchase), so I tweeted to @Asymco with a link to Davidson’s post. Several people, including Dediu himself, told me that this wouldn’t qualify as a rumour (though my own definition of rumour probably differs from theirs). Still, I’ve received some comments about how insightful this piece was. It was partly based on a November 2004 piece by Russell Beattie, which was itself a partial reaction to a short Ross Mayfield post about a “WiFi iPod”. In comments on Davidson’s piece, Ste Grainer mentioned a Robert X. Cringely piece about a Mac Media Centre.

I later found a NYT piece from 2002 which contained an actual rumour about the “iPhone”, including the name:

industry analysts see evidence that Apple is contemplating what inside the company is being called an “iPhone.”

This, I think, would qualify as a rumour in most people’s definitions, though it didn’t include “leaked prototypes”.

But back to this Davidson piece, which might have been more insightful than the NYT’s one or even Beattie’s…

In hindsight, Davidson’s piece was both prescient of what would actually happen and telling in what didn’t happen. He talked about satellite radio, Plays for Sure, and WiMAX, none of which panned out as planned. Also, Davidson surmised some things about Apple’s “content play” which were both less ambitious and more impactful (on Apple’s bottom line) than what actually happened. Apple’s 2007 move against DRM might have been surprising to the 2005 Davidson. And it’s funny to think back to an era when high prices for flash storage made it prohibitive to build a mobile device… 😉

Basically, though, Davidson was speculating about an integrated device which would replace several devices at once:

It won’t be long before the cell phone is your camera, your music player, your organizer, your portable web client, your remote control, and your digital wallet

[We could argue about Android’s NFC play being closer to the digital wallet ideal than Apple’s passbook. The other parts are closer to a Treo anyway…]

In the abstract at least (and in Steve Jobs’s way of describing it), the iPhone has been this integrated communicating device about which people had been talking for years. So, kudos to Mike Davidson for predicting this a while in advance. He was neither the first nor the last, but he painted an interesting portrait.

Now, there are other parts to this story, I think. Given the fact that work on what would become iOS devices (iPad first, we’re told) hadn’t begun when Charles Wolf told the New York Times about a device called “iPhone” internally at Apple, I get the impression that the rumours predated much of the actual development work leading to the device. Speculation happened later still. It seems to relate to a number of things demonstrated by STS generally and SCOT specifically, namely that technological development is embedded in a broader social process.

I also find interesting some side notions in all of these pieces. For instance, ideas about the impact the device might have on people’s usage. Or the fact that the move from the Treo to the iPhone ends up being quite significant, in retrospect. Even Davidson’s points about headphones and retail stores seem to relate to current things. So does the existence of the iPod touch and Apple TV in Apple’s lineup, addressing Mayfield and Cringely, respectively.

I also end up reflecting upon the shift from the “digital hub” strategy (peaking around 2007 or so) to the one revealed with iCloud, “Back to the Mac” and, yes, even Apple Maps. Dediu devotes much time to his mentor Clay Christensen’s notion of “disruptive innovation” and spent part of this latest Critical Path episode talking about the risks behind Apple not being disruptive enough.

All of this makes me think…

Not that I have a very clear idea of what might happen but, recently, I’ve been thinking about the broader picture. Including the Maps kerfuffle. The importance of social disruption. Apple’s financial state and market presence. The so-called “Post-PC” era in relation to other “post-” notions (post-industrialism, post-colonialism, post-nationalism, post-modernism…). The boring nature of the Google/Apple conflict. The recent financial crisis. The tech world’s emphasis on Apple. The future of academia and education. The iconicity of Steve Jobs…

As Mike Wesch has been saying:

We’ll need to rethink a few things…

Why I Need an iPad

I’m one of those who feel the iPad is the right tool for the job.

This is mostly meant as a reply to this blogthread. But it’s also more generally about my personal reaction to Apple’s iPad announcement.

Some background.

I’m an ethnographer and a teacher. I read a fair deal, write a lot of notes, and work in a variety of contexts. These days, I tend to spend a good amount of time in cafés and other public places where I like to work without being too isolated. I also commute using public transit, listen to lots of podcasts, and create my own. I’m also very aural.

I’ve used a number of PDAs, over the years, from a Newton MessagePad 130 (1997) to a variety of PalmOS devices (until 2008). In fact, some people readily associated me with PDA use.

As soon as I learnt about the iPod touch, I needed one. As soon as I heard about the SafariPad, I wanted one. I’ve been an intense ‘touch user since the iPhone OS 2.0 release and I’m a happy camper.

(A major reason I never bought an iPhone, apart from price, is that it requires a contract.)

In my experience, the ‘touch is the most appropriate device for all sorts of activities which are either part of another activity (reading during a commute) or are simply too short in duration to constitute an actual “computer session.” You don’t “sit down to work at your ‘touch” the way you might sit in front of a laptop or desktop screen. This works great for “looking up stuff” or “checking email.” It also makes a lot of sense during commutes in crowded buses or metros.

In those cases, the iPod touch is almost ideal. Ubiquitous access to Internet would be nice, but that’s not a deal-breaker. Alternative text-input methods would help in some cases, but I do end up being about as fast on my ‘touch as I was with Graffiti on PalmOS.

For other tasks, I have a Mac mini. Sure, it’s limited. But it does the job. In fact, I have no intention of switching to another desktop and I even have an eMachines collecting dust (it’s too noisy to make a good server).

What I miss, though, is a laptop. I used an iBook G3 for several years and loved it. For a little while later, I was able to share a MacBook with somebody else and it was a wonderful experience. I even got to play with the OLPC XO for a few weeks. That one was not so pleasant an experience but it did give me a taste for netbooks. And it made me think about other types of iPhone-like devices. Especially in educational contexts. (As I mentioned, I’m a teacher.)

I’ve been laptop-less for a while, now. And though my ‘touch replaces it in many contexts, there are still times when I’d really need a laptop. And these have to do with what I might call “mobile sessions.”

For instance: liveblogging a conference or meeting. I’ve used my ‘touch for this very purpose on a good number of occasions. But it gets rather uncomfortable, after a while, and it’s not very fast. A laptop is better for this, with a keyboard and a larger form factor. But the iPad will be even better because of lower risks of RSI. A related example: just imagine TweetDeck on iPad.

Possibly my favourite example of a context in which the iPad will be ideal: presentations. Even before learning about the prospect of getting iWork on a tablet, presentations were a context in which I really missed a laptop.

Sure, in most cases, these days, there’s a computer (usually a desktop running XP) hooked to a projector. You just need to download your presentation file from Slideshare, show it from Prezi, or transfer it through USB. No biggie.

But it’s not the extra steps which change everything. It’s the uncertainty. Even if it’s often unfounded, I usually get worried that something might just not work, along the way. The slides might not show the same way as you see them because something is missing on that computer or that computer is simply using a different version of the presentation software. In fact, that software is typically Microsoft PowerPoint which, while convenient, fits much less in my workflow than does Apple Keynote.

The other big thing about presentations is the “presenter mode,” allowing you to get more content than (or different content from) what the audience sees. In most contexts where I’ve used someone else’s computer to do a presentation, the projector was mirroring the computer’s screen, not using it as a different space. PowerPoint has this convenient “presenter view” but very rarely did I see it as an available option on “the computer in the room.” I wish I could use my ‘touch to drive presentations, which I could do if I installed software on that “computer in the room.” But it’s not something that is likely to happen, in most cases.

A MacBook solves all of these problems. And it’s an obvious use for laptops. But how, then, is the iPad better? Basically because of interface. Switching slides on a laptop isn’t hard, but it’s more awkward than we realize. Even before watching the demo of Keynote on the iPad, I could simply imagine the actual pleasure of flipping through slides using a touch interface. The fit is “natural.”

I sincerely think that Keynote on the iPad will change a number of things, for me. Including the way I teach.

Then, there’s reading.

Now, I’m not one of those people who just can’t read on a computer screen. In fact, I even grade assignments directly from the screen. But I must admit that online reading hasn’t been ideal, for me. I’ve read full books as PDF files or dedicated formats on PalmOS, but it wasn’t so much fun, in terms of the reading process. And I’ve used my ‘touch to read things through Stanza or ReadItLater. But it doesn’t work so well for longer reading sessions. Even in terms of holding the ‘touch, it’s not so obvious. And, what’s funny, even a laptop isn’t that ideal, for me, as a reading device. In a sense, this is when the keyboard “gets in the way.”

Sure, I could get a Kindle. I’m not a big fan of dedicated devices and, at least on paper, I find the Kindle a bit limited for my needs. Especially in terms of sources. I’d like to be able to use documents in a variety of formats and put them in a reading list, for extended reading sessions. No, not “curled up in bed.” But maybe lying down in a sofa without external lighting. Given my experience with the ‘touch, the iPad is very likely the ideal device for this.

Then, there’s the overall “multi-touch device” thing. People have already been quite creative with the small touchscreen on iPhones and ‘touches; I can just imagine what may be done with a larger screen. Lots has been said about differences in “screen real estate” in laptop or desktop screens. We all know it can make a big difference in terms of what you can display at the same time. In some cases, two screens aren’t even a luxury, for instance when you code and display a page at the same time (LaTeX, CSS…). Certainly, the same qualitative difference applies to multitouch devices. Probably even more so, since the display is also used for input. What Han found missing in the iPhone’s multitouch was the ability to use both hands. With the iPad, Han’s vision is finding its space.

Oh, sure, the iPad is very restricted. For instance, it’s easy to imagine how much more useful it’d be if it did support multitasking with third-party apps. And a front-facing camera is something I was expecting in the first iPhone. It would just make so much sense that a friend seems very disappointed by this lack of videoconferencing potential. But we’re probably talking about predetermined expectations, here. We’re comparing the iPad with something we had in mind.

Then, there’s the issue of the competition. Tablets have been released and some multitouch tablets have recently been announced. What makes the iPad better than these? Well, we could all get in the same OS wars as have been happening with laptops and desktops. In my case, the investment in applications, files, and expertise that I have made in a Mac ecosystem made my XP years relatively uncomfortable and made me appreciate returning to the Mac. My iPod touch fits right in that context. Oh, sure, I could use it with a Windows machine, which is in fact what I did for the first several months. But the relationship between the iPhone OS and Mac OS X is such that using devices in those two systems is much more efficient, in terms of my own workflow, than I could get while using XP and iPhone OS. There are some technical dimensions to this, such as the integration between iCal and the iPhone OS Calendar, or even the filesystem. But I’m actually thinking more about the cognitive dimensions of recognizing some of the same interface elements. “Look and feel” isn’t just about shiny and “purty.” It’s about interactions between a human brain, a complex sensorimotor apparatus, and a machine. Things go more quickly when you don’t have to think too much about where some tools are, as you’re working.

So my reasons for wanting an iPad aren’t about being dazzled by a revolutionary device. They are about the right tool for the job.

Personal Devices

Still thinking about touch devices, such as the iPod touch and the rumoured “Apple Tablet.”

Thinking out loud. Rambling even more crazily than usual.

Something important about those devices is the need for a real “Personal Digital Assistant.” I put PDAs as a keyword for my previous post because I do use the iPod touch like I was using my PalmOS and even NewtonOS devices. But there’s more to it than that, especially if you think about cloud computing and speech technologies.

I mentioned speech recognition in that previous post. SR tends to be a pipedream of the computing world. Despite all the hopes put into realtime dictation, it still hasn’t taken off in a big way. One reason might be that it’s still somewhat cumbersome to use, in current incarnations. Another reason is that it’s relatively expensive as a standalone product which requires some getting used to. But I get the impression that another set of reasons has to do with the fact that it mostly belongs on a personal device. Partly because it needs to be trained. But also because voice itself is a personal thing.

Cloud computing also takes a new meaning with a truly personal device. It’s no surprise that there are so many offerings with some sort of cloud computing feature in the App Store. Not only do Apple’s touch devices have limited file storage space but the notion of accessing your files in the cloud goes well with a personal device.

So, what’s the optimal personal device? I’d say that Apple’s touch devices are getting close to it but that there’s room for improvement.

Some perspective…

Originally, the PC was supposed to be a “personal” computer. But the distinction was mostly with mainframes. PCs may be owned by a given person, but they’re not so tied to that person, especially given the fact that they’re often used in a single context (office or home, say). A given desktop PC can be important in someone’s life, but it’s not always present like a personal device should be. What’s funny is that “personal computers” became somewhat more “personal” with the ‘Net and networking in general. Each computer had a name, etc. But those machines remained somewhat impersonal. In many cases, even when there are multiple profiles on the same machine, it’s not so safe to assume who the current user of the machine is at any given point.

On paper, the laptop could have been that “personal device” I’m thinking about. People may share a desktop computer but they usually don’t share their laptop, unless it’s mostly used like a desktop computer. The laptop being relatively easy to carry, it’s common for people to bring one back and forth between different sites: work, home, café, school… Sounds tautological, as this is what laptops are supposed to be. But the point I’m thinking about is that these are still distinct sites where some sort of desk or table is usually available. People may use laptops on their actual laps, but the form factor is still closer to a portable desktop computer than to the kind of personal device I have in mind.

Then, we can go all the way to “wearable computing.” There’s been some hype about wearable computers but it has yet to really be part of our daily lives. Partly for technical reasons but partly because it may not really be what people need.

The original PDAs (especially those on NewtonOS and PalmOS) were getting closer to what people might need, as personal devices. The term “personal digital assistant” seemed to encapsulate what was needed. But, for several reasons, PDAs have been having a hard time. Maybe there wasn’t a killer app for PDAs, outside of “vertical markets.” Maybe the stylus was the problem. Maybe the screen size and bulk of the device weren’t getting to the exact points where people needed them. I was still using a PalmOS device in mid-2008 and it felt like I was among the last PDA users.

One point was that PDAs had been replaced by “smartphones.” After a certain point, most devices running PalmOS were actually phones. RIM’s Blackberry succeeded in a certain niche (let’s use the vague term “professionals”) and is even beginning to expand out of it. And devices using other OSes have had their importance. It may not have been the revolution some readers of Pen Computing might have expected, but the smartphone has been a more successful “personal device” than the original PDAs.

It’s easy to broaden our focus from smartphones and think about cellphones in general. If the 3.3B figure can be trusted, cellphones may already be outnumbering desktop and laptop computers by 3:1. And cellphones really are personal. You bring them everywhere; you don’t need any kind of surface to use them; phone communication actually does seem to be a killer app, even after all this time; there are cellphones in just about any price range; cellphone carriers outside of Canada and the US are offering plans which are relatively reasonable; despite some variation, cellphones are rather similar from one manufacturer to the next… In short, cellphones already were personal devices, even before the smartphone category really emerged.

What did smartphones add? Basically, a few PDA/PIM features and some form of Internet access or, at least, some form of email. “Whoa! Impressive!”

Actually, some PIM features were already available on most cellphones and Internet access from a smartphone is in continuity with SMS and data on regular cellphones.

What did Apple’s touch devices add which was so compelling? Maybe not so much, apart from the multitouch interface, a few games, and integration with desktop/laptop computers. Even then, most of these changes were an evolution over the basic smartphone concept. Still, it seems to have worked as a way to open up personal devices to some new dimensions. People now use the iPhone (or some other multitouch smartphone which came out after the iPhone) as a single device to do all sorts of things. Around the world, multitouch smartphones are still much further from being ubiquitous than are cellphones in general. But we could say that these devices have brought the personal device idea to a new phase. At least, one can say that they’re much more exciting than the other personal computing devices.

But what’s next for personal devices?

Any set of buzzphrases. Cloud computing, speech recognition, social media…

These things can all come together, now. The “cloud” is mostly ready and personal devices make cloud computing more interesting because they’re “always-on,” are almost-wearable, have batteries lasting just about long enough, already serve to keep some important personal data, and are usually single-user.

Speech recognition could go well with those voice-enabled personal devices. For one thing, they already have sound input. And, by this time, people are used to seeing others “talk to themselves” as cellphones are so common. Plus, voice recognition is already understood as a kind of security feature. And, despite their popularity, these devices could use a further killer app, especially in terms of text entry and processing. Some of these devices already have voice control and it’s not so much of a stretch to imagine them having what’s needed for continuous speech recognition.

In terms of getting things onto the device, I’m also thinking about such editing features as a universal rich-text editor (à la TinyMCE), predictive text, macros, better access to calendar/contact data, ubiquitous Web history, multiple pasteboards, data detectors, Automator-like processing, etc. All sorts of things which should come from OS-level features.

“Social media” may seem like too broad a category. In many ways, those devices already take part in social networking, user-generated content, and microblogging, to name a few areas of social media. But what about a unified personal profile based on the device instead of the usual authentication method? Yes, all sorts of security issues. But aren’t people unconcerned about security in the case of social media? Twitter accounts are being hacked left and right yet Twitter doesn’t seem to suffer much. And there could be added security features on a personal device which is meant to really integrate social media. Some current personal devices already work well as a way to keep login credentials to multiple sites. The next step, there, would be to integrate all those social media services into the device itself. We may be waiting for OpenSocial, OpenID, OAuth, Facebook Connect, Google Connect, and all sorts of APIs to bring us to an easier “social media workflow.” But a personal device could simplify the “social media workflow” even further, with just a few OS-based tweaks.

Unlike my previous post, I’m not holding my breath for some specific event which will bring us the ultimate personal device. After all, this is just a new version of my ultimate handheld device blogpost. But, this time, I was focusing on what it means for a device to be “personal.” It’s even more of a drafty draft than my blogposts usually have been ever since I decided to really RERO.

So be it.

Speculating on Apple's Touch Strategy

This is mere speculation on my part, based on some rumours.

I’m quite sure that Apple will come up with a video-enabled iPod touch on September 9, along with iTunes 9 (which should have a few new “social networking” features). This part is pretty clear from most rumour sites.

AppleInsider | Sources: Apple to unveil new iPod lineup on September 9.

Progressively, Apple will be adopting a new approach to marketing its touch devices. Away from the “poor person’s iPhone” and into the “tiny but capable computer” domain. Because the 9/9 event is supposed to be about music, one might guess that there will be a cool new feature or two relating to music. Maybe lyrics display, karaoke mode, or whatever else. Something which will simultaneously be added to the iPhone but would remind people that the iPod touch is part of the iPod family. Apple has already been marketing the iPod touch as a gaming platform, so it’s not a radical shift. But I’d say the strategy is to make Apple’s touch devices increasingly more attractive, without cannibalizing sales in the MacBook family.

Now, I really don’t expect Apple to even announce the so-called “Tablet Mac” in September. I’m not even that convinced that the other devices Apple is preparing for expansion of its touch devices lineup will be that close to the “tablet” idea. But it seems rather clear, to me, that Apple should eventually come up with other devices in this category. Many rumours point to the same basic notion, that Apple is getting something together which will have a bigger touchscreen than the iPhone or iPod touch. But it’s hard to tell how this device will fit, in the grand scheme of things.

It’s rather obvious that it won’t be a rebirth of the eMate the same way that the iPod touch wasn’t a rebirth of the MessagePad. But it would make some sense for Apple to target some educational/learning markets, again, with an easy-to-use device. And I’m not just saying this because the rumoured “Tablet Mac” makes me think about the XOXO. Besides, the iPod touch is already being marketed to educational markets through the yearly “Back to school” program which (surprise!) ends on the day before the September press conference.

I’ve been using an iPod touch (1st Generation) for more than a year, now, and I’ve been loving almost every minute of it. Most of the time, I don’t feel the need for a laptop, though I occasionally wish I could buy a cheap one, just for some longer writing sessions in cafés. In fact, a friend recently posted information about some Dell Latitude D600 laptops going for a very low price. That’d be enough for me at this point. Really, my iPod touch suffices for a lot of things.

Sadly, my iPod touch seems to have died, recently, after catching some moisture. If I can’t revive it and if the 2nd Generation iPod touch I bought through Kijiji never materializes, I might end up buying a 3rd Generation iPod touch on September 9, right before I start teaching again. If I can get my hands on a working iPod touch at a good price before that, I may save the money in preparation for an early 2010 release of a new touch device from Apple.

Not that I’m not looking at alternatives. But I’d rather use a device which shares enough with the iPod touch that I could migrate easily, synchronize with iTunes, and keep what I got from the App Store.

There’s a number of things I’d like to get from a new touch device. First among them is a better text entry/input method. Some of the others could be third-party apps and services. For instance, a full-featured sharing app. Or true podcast synchronization with media annotation support (à la Revver or Soundcloud). Or an elaborate, fully-integrated logbook with timestamps, Twitter support, and outlining. Or even a high-quality reference/bibliography manager (think RefWorks/Zotero/Endnote). But getting text into such a device without a hardware keyboard is the main challenge. I keep thinking about all sorts of methods, including MessagEase and Dasher as well as continuous speech recognition (dictation). Apple’s surely thinking about those issues. After all, they have some handwriting recognition systems that they aren’t really putting to any significant use.

Something else which would be quite useful is support for videoconferencing. Before the iPhone came out, I thought Apple might be coming out with iChat Mobile. Though a friend announced the iPhone to me by making reference to this, the position of the camera at the back of the device and the fact that the original iPhone’s camera only supported still pictures (with the official firmware) made this dream die out, for me. But a “Tablet Mac” with an iSight-like camera and some form of iChat would make a lot of sense, as a communication device. Especially since iChat already supports such things as screen-sharing and slides. Besides, if Apple does indeed move in the direction of some social networking features, a touch device with an expanded Address Book could take a whole new dimension through just a few small tweaks.

This last part I’m not so optimistic about. Apple may know that social networking is important, at this point in the game, but it seems to approach it with about the same heart as it approached online services with eWorld, .Mac, and MobileMe. Of course, they have the tools needed to make online services work in a “social networking” context. But it’s possible that their vision is clouded by their corporate culture and some remnants of the NIH problem.

Ah, well…

Crazy App Idea: Happy Meter

I keep getting ideas for apps I’d like to see on Apple’s App Store for iPod touch and iPhone. This one may sound a bit weird but I think it could be fun. An app where you can record your mood and optionally broadcast it to friends. It could become rather sophisticated, actually. And I think it can have interesting consequences.

The idea mostly comes from Philippe Lemay, a psychologist friend of mine and fellow PDA fan. Haven’t talked to him in a while but I was just thinking about something he did, a number of years ago (in the mid-1990s). As part of an academic project, Philippe helped develop a PDA-based research program whereby subjects would record different things about their state of mind at intervals during the day. Apart from the neatness of the data gathering technique, this whole concept stayed with me. As a non-psychologist, I personally get the strong impression that recording your moods frequently during the day can actually be a very useful thing to do in terms of mental health.

And I really like the PDA angle. Since I think of the App Store as transforming Apple’s touch devices into full-fledged PDAs, the connection is rather strong between Philippe’s work at that time and the current state of App Store development.

Since that project of Philippe’s, a number of things have been going on which might help refine the “happy meter” concept.

One is that “lifecasting” became rather big, especially among certain groups of Netizens (typically younger people, but also many members of geek culture). Though the lifecasting concept applies mostly to video streams, there are connections with many other trends in online culture. The connection with vidcasting specifically (and podcasting generally) is rather obvious. But there are other connections. For instance, with mo-, photo-, or microblogging. Or even with all the “mood” apps on Facebook.

Speaking of Facebook as a platform, I think it meshes especially well with touch devices.

So, “happy meter” could be part of a broader app which does other things: updating Facebook status, posting tweets, broadcasting location, sending personal blogposts, listing scores in a Brain Age type game, etc.

Yet I think the “happy meter” could be useful on its own, as a way to track your own mood. “Turns out, my mood was improving pretty quickly on that day.” “Sounds like I didn’t let things affect me too much despite all sorts of things I was going through.”

As a mood-tracker, the “happy meter” should be extremely efficient. Because it’s easy, I’m thinking of sliders. One main slider for general mood and different sliders for different moods and emotions. It would also be possible to extend the “entry form” on occasion, when the user wants to record more data about their mental state.

Of course, everything would be saved automatically and “sent to the cloud” on occasion. There could be a way to selectively broadcast some slider values. The app could conceivably send reminders to the user to update their mood at regular intervals. It could even serve as a “break reminder” feature. Though there are limitations on OSX iPhone in terms of interapplication communication, it’d be even neater if the app were able to record other things happening on the touch device at the same time, such as music which is playing or some apps which have been used.
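
Just to make the idea concrete, here’s a minimal sketch of what one mood entry might look like as a data model, with sliders, a timestamp, and strictly opt-in broadcasting. All the names are hypothetical; this is one possible structure, not anything Philippe’s project or any actual app specified:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MoodEntry:
    # One main slider plus any number of named sliders, each 0.0 to 1.0.
    overall: float
    sliders: dict = field(default_factory=dict)
    note: str = ""  # optional extended "entry form" text
    timestamp: datetime = field(default_factory=datetime.now)
    # Names of the values the user explicitly chose to broadcast.
    # Empty by default: nothing is shared unless the user opts in.
    shared: set = field(default_factory=set)

    def broadcast_payload(self):
        """Return only the slider values the user opted to share."""
        payload = {}
        if "overall" in self.shared:
            payload["overall"] = self.overall
        for name in self.shared & self.sliders.keys():
            payload[name] = self.sliders[name]
        return payload

entry = MoodEntry(overall=0.8,
                  sliders={"energy": 0.6, "stress": 0.2},
                  shared={"overall", "energy"})
print(entry.broadcast_payload())  # {'overall': 0.8, 'energy': 0.6}
```

Keeping the `shared` set empty by default is what makes broadcasting opt-in per value rather than opt-out, which matches the privacy stance below.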

Now, very obviously, there are lots of privacy issues involved. But what social networking services have taught us is that users can have pretty sophisticated notions of privacy management, if they’re given the chance. For instance, adept Facebook users may seem to indiscriminately post just about everything about themselves but are often very clear about what they want to “let out,” in context. So, clearly, every type of broadcasting should be controlled by the user. No opt-out here.

I know this all sounds crazy. And it all might be a very bad idea. But the thing about letting my mind wander is that it helps me remain happy.

Apple's App Store for OSX iPhone Devices

Though it hasn’t been announced on its website, Apple’s App Store for OSX iPhone applications is now online. In fact, enterprising iPhone users can allegedly upgrade their phone’s firmware to 2.0 in order to take advantage of this online software shop. As an iPod touch user, I have no such luxury. As of this moment, the firmware upgrade for iPod touch hasn’t been released. Since that upgrade will be free for iPhone and paid for iPod touch, the discrepancy isn’t surprising.

With those third-party applications, my ‘touch will become more of a PDA and the iPhone will become more of a smartphone.

Still, I was able to access the App Store using iTunes 7.7 (which I downloaded directly from Apple’s iTunes website since it wasn’t showing up in Apple Software Update). Adding the “Applications” item in the left-hand sidebar (available through the “General” tab in iTunes Preferences), I can see a list of applications already downloaded in iTunes (i.e., nothing at first launch). At the bottom of that page, there’s a link to get more applications which leads to the App Store.  There, I can browse applications, get free apps, or buy some of the paid ones (using the payment information stored in my iTunes account). Prices are the same in USD and CAD (since they are pretty much on par, it all makes sense). Searching and browsing for apps follows all the same conventions as with music, movies, podcasts, or iPod games. Application pages appear in searches for application names (say, “Trism“).

I went through a number of apps and eventually downloaded 28 free ones. I also noted a number of apps I would like to try, including Trism, Units, Things, Outliner, OmniFocus, Steps (one of the rare apps available in French), iCalorie, and one of the multiple Sudoku apps. However, I can’t put apps on a wishlist and demos aren’t available directly through iTunes (I’m assuming they’re available from the iPhone or iPod touch).

I’m actually looking forward to trying out all of these apps. I don’t tend to be an early adopter but this is one case for quick adoption, especially with free apps. I guess a small part of this is that, since Apple has sorted through these apps, I’m assuming none of them contains any malware. Not that I ever fully trust an organization or individual, but my level of trust is higher with the App Store than with, say, the usual software download site (VersionTracker.com, Tucows.com, Download.com, Handango.com). And I trust these download sites much more than the developer sites I find through Web searches.

One thing I notice very quickly is how small many of those apps are. After downloading 28 apps, my “Mobile Applications” folder is 31MB. Of course, many PDA apps were typically under 100KB, but given the storage capacities of OSX iPhone devices, I’m glad to notice that I can probably fit a lot of applications in less than 1GB, leaving more room for podcasts, music, and pictures. On the other hand, filesizes are apparently not listed in the “Applications” section on iTunes (they’re specified on the individual apps’ pages).

Overall, there’s a number of obvious apps, many of which are PDA classics: to do lists, phrasebooks, clocks and timers, calculators (including tip calculators), converters, trackers, weather forecasts, and solitaire or other casual games (like sudoku). No surprise with any of these and I’ll probably use many of them. These can be difficult to select because developers have had very similar ideas but the apps have slightly different features. With those apps, free typically wins over extensive feature lists, even for very cheap software.

Speaking of price, I notice that AppEngines is selling 43 different Public Domain books as separate apps for 1$ each. Now, there’s nothing wrong with making money off Public Domain material (after all, there wouldn’t be a Disney Company without Public Domain works). But it seems strange to me that someone would nickel-and-dime readers by charging for access to individual Public Domain titles. Sure, a standalone app is convenient. But a good electronic book reader should probably be more general than book-specific. Not really because of size constraints and such. But because books are easily conceived as part of a library (or bookshelf), instead of being scattered on a handheld device. The BookZ Text Reader seems more relevant, in this case, and it’s compatible with the Project Gutenberg files. Charging 2$ for that text reader seems perfectly legitimate. For its part, Fictionwise has released a free eReader app for use with its proprietary format. Though it won’t transform OSX iPhone devices into a Kindle killer, this eReader app does seem to at least transfer books through WiFi. Since these books are copyrighted ones, the app can be a nice example of a convenient content marketplace.

I’m a bit surprised that the educational software section of the App Store isn’t more elaborate. It does contain 45 separate apps but several of those are language-specific versions of language tools or apps listed in other categories which happen to have some connection to learning. If it were me, I’d classify language tools in a separate category or subcategory and I might make it more obvious how different educational apps are classified. On the other hand, I’m quite surprised that Molecules isn’t listed in this educational section.

The reason I care so much is that I see touch devices generally as an important part of the future of education. With iPod touches being bundled with Mac sales in the current “Back-to-School” special and with the obvious interest of different people in putting touch devices in the hands of learners and teachers, I would have expected a slew of educational apps. Not to mention that educational apps have long populated lists of software offerings, from the fondly remembered HyperCard days to PDAs and smartphones.

Among the interesting educational apps is Faber Acoustical‘s SignalScope. At 25$, it’s somewhat expensive for an OSX iPhone app, but it’s much less expensive than some equivalent apps on other platforms used to be. It’s also one of the more creative apps I’ve seen in the Store. Unfortunately, for apparently obvious reasons (the iPod touch has no built-in audio input), it’s only available on the iPhone.

Speaking of iPhone-only software… There’s already a way to get audio on the iPod touch using a third-party adapter. I understand that Apple isn’t supporting it officially but I wonder if the iPhone-only tag will prevent people from using it. Small point for most people, I guess. But it’d be really nice if I could use my ‘touch as a voice recorder. Would make for a great fieldwork tool.

One thing I wish were available on the App Store is an alternative mode for text entry. Though I’m already getting decent performance from the default virtual keyboard on my ‘touch, I still wish I had Dasher, MessagEase or even Graffiti.

Among the apps I’ve browsed, I see a number of things which could be described as “standalone versions of Web apps.” There’s already a good number of Web apps compatible with or even customized for OSX iPhone devices. The standalone versions can be useful, in part because they can be used offline (great for WiFi-less situations, on the iPod touch). But these “standaloned Web apps” also don’t seem to really take full advantage of Apple’s Cocoa Touch. In the perception of value, I’d say that “standaloned Web apps” rate fairly low, especially since most Web apps are free to use (unless tied with an account on a Web service).

I was also surprised to see that a number of apps which are basically simple jokes are put for sale on the App Store. I was amused to see an OSX iPhone version of Freeverse’s classic “Jared, The Butcher of Songs.” But I’m puzzled by the fact that Hottrix is selling its iBeer app for 3$. Sure, it’s just 3$. But I don’t see the app providing as much pleasure as a single taster of a craft beer. Not to mention that the beer itself looks (by colour, foam, and carbonation) like a bland pilsner and not like a flavourful beer.

Overall, I’d say the Store is well-made. Again, the same principles are used as for the iTunes Store generally. All application pages have screenshots and some of these screenshots give an excellent idea of what the application does, while other screenshots are surprisingly difficult to understand. Browsing the Store, I noticed how important icons seemed to be in terms of catching my attention. Some application developers did a great job at the textual description of their applications, also catching my attention. But others use “marketingspeak” to brag about their product, which has the effect of making the app more difficult to grasp. Given the number of apps already listed and the simplicity of the classification, such details become quite important in making an app appealing. Almost as important as price, though not nearly as much.

It seems pretty clear to me (and to others, including some free market advocates), that price is an important issue. This was obvious to many of us for a while. But the opening of the App Store makes this issue very obvious.

For instance, regardless of his previous work, CNET journalist Don Reisinger is probably on to something when he argues, in essence, that the benefits of free apps may outweigh those of paid apps on Apple’s App Store. Even though Apple allegedly coaxed developers into charging for their apps, the fact of the matter is that the App Store clearly shows that no-cost software can be a competitive advantage in the marketplace. The same advantage is obvious in many contexts, including in music. But, as a closed environment, the App Store could serve as an efficient case study in “competing with free.” One thing to keep in mind, as I keep saying, is that there are multiple types of no-cost offerings. In the software world (including on the App Store), there’s a large number of examples of successful applications which incurred no purchase on the users’ part. Yes, sometimes you need a bit of imagination to build a business model on top of no-cost software. But I think the commercial ventures enabled by these “alternative” business models are more diverse than people seem to assume.

One thing I noticed in terms of application pricing on the App Store is that either there are a number of sweet spots or pricing schemes come from force of habit. Sure, Apple only has a finite list of “tiers” for amounts which can be charged for a given app (with preset currency conversions). But I think that some tiers have been used more than others. For instance, 10$ seems fairly common as a threshold between truly inexpensive apps and a category similar to “shareware.” Some apps are actually as expensive as the desktop versions, though it seems that the most expensive app so far is under 100$.

One thing to note is that several developers of those early App Store products have been involved in Mac development for a while (the Omni Group being an obvious example) but there are also several organizations which seem to be entering Cocoa development for the first time. This could be a bigger halo effect in terms of Mac sales than the original iPod or the iPhone. Profit made through OSX iPhone apps (either through software cost, through services, or even through other monetization schemes) could lead them to develop software for OSX Leopard. At least, they already made an investment in the development platform.

It’ll be interesting to observe what happens with software pricing in relation to the “apparent hand” of a constrained market.

But I’m less interested in this market than in the actual apps. When can I install the “iPhone 2.0” firmware on my iPod touch? Is it now?

Selling Myself Long

Been attending sessions by Meri Aaron Walker about online methods to get paid for our expertise. Meri coaches teachers about those issues.

MAWSTOOLBOX.COM

There’s also a LearnHub “course”: Jumpstart Your Online Teaching Career.

Some notes, on my own thinking about monetization of expertise. Still draft-like, but RERO is my battle cry.

Some obstacles to my selling expertise:

  • My “oral personality.”
  • The position on open/free knowledge in academia and elsewhere.
  • My emphasis on friendship and personal rapport.
  • My abilities as an employee instead of a “boss.”
  • Difficulty in assessing the value of my expertise.
  • The fact that other people have the same expertise that I think I have.
  • High stakes (though this can be decreased, in some contexts).
  • My distaste for competition/competitiveness.
  • Difficulty at selling and advertising myself (despite my social capital).
  • Being a creative generalist instead of a specialist.

Despite all these obstacles, I have been thinking about selling my services online.

One reason is that I really do enjoy teaching. As I keep saying, teaching is my hobby (when I get paid, it’s to learn how to interact with other learners and to set up learning contexts).

In fact, I enjoy almost everything in teaching (the major exception being grading/evaluating). From holding office hours and lecturing to facilitating discussions and answering questions through email. Teaching, for me, is deeply satisfying and I think that learning situations which imply the role of a teacher still make a lot of sense. I also like more informal learning situations and I even try to make my courses more similar to informal teaching. But I still find specific value in a “teaching and learning” system.

Some people seem to assume that teaching a course is the same thing as “selling expertise.” My perspective on learning revolves to a large extent around the difference between teaching and “selling expertise.” One part is that I find a difference between selling a product or process and getting paid in a broader transaction which does involve exchange about knowledge but which isn’t restricted to that exchange. Another part is that I don’t see teachers as specialists imparting their wisdom to eager masses. I see knowledge as being constructed in diverse situations, including formal and informal learning. Expertise is often an obstacle in the kind of teaching I’m interested in!

Funnily enough, I don’t tend to think of expertise as something that is easily measurable or transmissible. Those who study expertise have ways to assess something which is related to “being an expert,” especially in the case of observable skills (many of those are about “playing,” actually: chess, baseball, piano…). My personal perspective on expertise tends to be broader, more fluid. Similar to experience, but with more of a conscious approach to learning.

There also seems to be a major difference between “breadth of expertise” and “topics you can teach.” You don’t necessarily need to be very efficient at some task to help someone learn to do it. In fact, in some cases, being proficient in a domain is an obstacle to teaching in that domain, since expertise is so ingrained as to be very difficult to retrieve consciously.

This is close to “do what I say, not what I do.” I even think that it can be quite effective to actually instruct people without direct experience of these instructions. Similar to consulting, actually. Some people easily disagree with this point and some people tease teachers about “doing vs. teaching.” But we teachers do have a number of ways to respond, some of them snarkier than others. And though I disagree with several parts of his attitude, I quite like this short monologue by Taylor Mali about What Teachers Make.

Another reason I might “sell my expertise” is that I genuinely enjoy sharing my expertise. I usually provide it for free, but I can possibly relate to the value argument. I don’t feel so tied to social systems based on market economy (socialist, capitalist, communist…) but I have to make do.

Another link to “selling expertise” is more disciplinary. As an ethnographer, I enjoy being a “cultural translator” of sorts. And, in some cases, my expertise in some domains is more of a translation from specialized speech into laypeople’s terms. I’m actually not very efficient at translating utterances from one language to another. But my habit of navigating between different “worlds” makes it possible for me to bridge gaps, cross bridges, serve as mediator, explain something fairly “esoteric” to an outsider. Close to popularization.

So, I’ve been thinking about what can be paid in such contexts which give prominence to expertise. Tutoring, homework help, consulting, coaching, advice, recommendation, writing, communicating, producing content…

And, finally, I’ve been thinking about my domains of expertise. As a “Jack of All Trades,” I can list a lot of those. My level of expertise varies greatly between them and I’m clearly a “Master of None.” In fact, some of them are merely from personal experience or even anecdotal evidence. Some are skills I’ve been told I have. But I’d still feel comfortable helping others with all of them.

I’m funny that way.

Domains of Expertise

French

  • Conversation
  • Reading
  • Writing
  • Culture
  • Literature
  • Regional diversity
  • Chanson appreciation

Bamanan (Bambara)

  • Greetings
  • Conversation

Social sciences

  • Ethnographic disciplines
  • Ethnographic field research
  • Cultural anthropology
  • Linguistic anthropology
  • Symbolic anthropology
  • Ethnomusicology
  • Folkloristics

Semiotics

Language studies

  • Language description
  • Social dimensions of language
  • Language change
  • Field methods

Education

  • Critical thinking
  • Lifelong learning
  • Higher education
  • Graduate school
  • Graduate advising
  • Academia
  • Humanities
  • Social sciences
  • Engaging students
  • Getting students to talk
  • Online teaching
  • Online tools for teaching

Course Management Systems (Learning Management Systems)

  • Oncourse
  • Sakai
  • WebCT
  • Blackboard
  • Moodle

Social networks

  • Network ethnography
  • Network analysis
  • Influence management

Web platforms

  • Facebook
  • MySpace
  • Ning
  • LinkedIn
  • Twitter
  • Jaiku
  • YouTube
  • Flickr

Music

  • Cultural dimensions of music
  • Social dimensions of music
  • Musicking
  • Musical diversity
  • Musical exploration
  • Classical saxophone
  • Basic music theory
  • Musical acoustics
  • Globalisation
  • Business models for music
  • Sound analysis
  • Sound recording

Beer

  • Homebrewing
  • Brewing techniques
  • Recipe formulation
  • Finding ingredients
  • Appreciation
  • Craft beer culture
  • Brewing trends
  • Beer styles
  • Brewing software

Coffee

  • Homeroasting
  • Moka pot brewing
  • Espresso appreciation
  • Coffee fundamentals
  • Global coffee trade

Social media

Blogging

  • Diverse uses of blogging
  • Writing tricks
  • Workflow
  • Blogging platforms

Podcasts

  • Advantages of podcasts
  • Podcasts in teaching
  • Filming
  • Finding podcasts
  • Embedding content

Technology

  • Trends
  • Geek culture
  • Equipment
  • Beta testing
  • Troubleshooting Mac OS X

Online Life

Communities

  • Mailing-lists
  • Generating discussions
  • Entering communities
  • Building a sense of community
  • Diverse types of communities
  • Community dynamics
  • Online communities

Food

  • Enjoying food
  • Cooking
  • Baking
  • Vinaigrette
  • Pizza dough
  • Bread

Places

  • Montreal, Qc
  • Lausanne, VD
  • Bamako, ML
  • Bloomington, IN
  • Moncton, NB
  • Austin, TX
  • South Bend, IN
  • Fredericton, NB
  • Northampton, MA

Pedestrianism

  • Carfree living
  • Public transportation
  • Pedestrian-friendly places

Tools I Use

  • PDAs
  • iPod
  • iTunes
  • WordPress.com
  • Skype
  • Del.icio.us
  • Diigo
  • Blogger (Blogspot)
  • Mac OS X
  • Firefox
  • Flock
  • Internet Explorer
  • Safari
  • Gmail
  • Google Calendar
  • Google Maps
  • Zotero
  • Endnote
  • RefWorks
  • Zoho Show
  • Wikipedia
  • iPod touch
  • SMS
  • Outlining
  • PowerPoint
  • Slideshare
  • Praat
  • Audacity
  • Nero Express
  • Productivity software

Effective Web searches

Socialization

  • Social capital
  • Entering the field
  • Creating rapport
  • Event participation
  • Event hosting

Computer Use

  • Note-taking
  • Working with RSS feeds
  • Basic programing concepts
  • Data manipulations

Research Methods

  • Open-ended interviewing
  • Qualitative data analysis

Personal

  • Hedonism
  • Public speaking
  • GERD
  • Strabismus
  • Moving
  • Cultural awareness

Note-Taking on OSX iPhone

Attended Dan Dennett’s “From Animal to Person: How Culture Makes Up our Minds” talk, yesterday. An event hosted by UQAM’s Cognitive Science Institute. Should blog about this pretty soon. It was entertaining and some parts were fairly stimulating. But what surprised me the most had nothing to do with the talk: I was able to take notes efficiently using the onscreen keyboard on my iPod touch (my ‘touch).

As I blogged yesterday, in French, it took me a while to realize that switching keyboard language on the ‘touch also changed the dictionary used for text prediction. Very sensical, but I hadn’t realized it. Writing in English with French dictionary predictions was rather painful. I basically had to manually bypass the dictionary predictions on most words. Even “to” was transformed into “go” by the predictive keyboard, and I didn’t necessarily notice all the substitutions. Really, it was a frustrating experience.

It may seem weird that it would take me a while to realize that I could get an English predictive dictionary in a French interface. One reason for the delay is that I expect some degree of awkwardness in some software features, even with some Apple products. Another reason is that I wasn’t using my ‘touch for much text entry, as I’m pretty much waiting for OSX iPhone 2.0 which should bring me alternative text entry methods such as Graffiti, MessagEase and, one can dream, Dasher. If these sound like excuses for my inattention and absent-mindedness, so be it. 😀

At any rate, I did eventually find out that I could switch back and forth between French and English dictionaries for predictive text entry on my ‘touch’s onscreen keyboard. And I’ve been entering a bit of text through this method, especially answers to a few emails.

But, last night, I thought I’d give my ‘touch a try as a note-taking device. I’ve been using PDAs for a number of years and note-taking has been a major component of my PDA usage pattern. In fact, my taking notes on a PDA has been so conspicuous that some people seem to associate me quite directly with this. It may even have helped garner a gadget-freak reputation, even though my attitude toward gadgets tends to be quite distinct from the gadget-freak pattern.

For perhaps obvious reasons, I’ve typically been able to train myself to efficiently use handheld text entry methods. On my NewtonOS MessagePad 130, I initially “got pretty good” at using the default handwriting recognition. This surprised a lot of people because human beings usually have a very hard time deciphering my handwriting. Still on the Newton, switching to Graffiti, I became rather proficient at entering text using this shorthand method. On PalmOS devices (HandSpring Visor and a series of Sony Clié devices), I was usually doubling up on Graffiti and MessagEase. In all of these cases, I was typically able to take rather extensive notes during different types of oral presentations or simply when I thought about something. Though I mostly used paper to take notes during classes throughout most of my academic coursework, PDA text entry was usually efficient enough that I could write down some key things in realtime. In fact, I’ve used PDAs rather extensively to take notes during ethnographic field research.

So, note taking was one of the intended uses for my iPod touch. But, again, I thought I would have to wait for text entry alternatives to the default keyboard before I could do it efficiently. So that’s why I was so surprised, yesterday, when I found out that I was able to efficiently take notes during Dennett’s talk using only the default OSX iPhone onscreen keyboard.

The key, here, is pretty much what someone at Apple was describing during some keynote session (might have been the “iPhone Roadmap” event): you need to trust the predictions. Yes, it sounds pretty “touchy-feely” (we’re talking about “touch devices,” after all 😉 ). But, well, it does work better than you would expect.

The difference is even more striking for me because I really was “fighting” the predictions. I couldn’t trust them because most of them were in the wrong language. But, last night, I noticed how surprisingly accurate the predictions could be, even with a large number of characters being mistyped. Part of it has to do with the proximity part of the algorithm. If I type “xartion,” the algorithm guesses that I’m trying to type “cartoon” because ‘x’ is close to ‘c’ and ‘i’ is close to ‘o’ (not an example from last night but one I just tried). The more confident you are that the onscreen keyboard will accurately predict what you’re trying to type, the more comfortably you can enter text.  The more comfortable you are at entering text, the more efficient you become at typing, which begins a feedback loop.
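The proximity idea can be sketched in a few lines of code. This is only my guess at the general principle, not Apple’s actual algorithm: score each candidate word by how close each typed character sits to the candidate’s character on a QWERTY layout, then pick the best-scoring word. The neighbor map below is partial, just enough for the “xartion”/“cartoon” example.

```python
# Toy illustration of proximity-based prediction. Not Apple's real algorithm;
# the neighbor map is a partial, hand-made sketch of QWERTY adjacency.
NEIGHBORS = {
    "x": "zcsd",  # keys adjacent to 'x' on a QWERTY keyboard
    "c": "xvdf",
    "i": "uojk",
    "o": "ipkl",
}


def proximity_score(typed, candidate):
    """Exact character match scores 1; an adjacent key scores 0.5."""
    if len(typed) != len(candidate):
        return 0.0
    score = 0.0
    for t, c in zip(typed, candidate):
        if t == c:
            score += 1.0
        elif c in NEIGHBORS.get(t, ""):
            score += 0.5
    return score


# "xartion" matches "cartoon" on five exact characters plus two near-misses
# ('x' is next to 'c', 'i' is next to 'o'), so it wins over other candidates.
candidates = ["cartoon", "caption", "carton"]
best = max(candidates, key=lambda w: proximity_score("xartion", w))
print(best)  # → cartoon
```

A real implementation would also weigh word frequency and the user’s own vocabulary, which is presumably part of why trusting the predictions works so well.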

Because I didn’t care that specifically about the content of Dennett’s talk, it was an excellent occasion to practise entering text on my ‘touch. The stakes of “capturing” text were fairly low. It almost became a game. When you add characters to a string which is bringing up the appropriate suggestion and delete those extra characters, the suggestion is lost. In other words, using the example above, if I type “xartion,” I get “cartoon” as a suggestion and simply need to type a space or any non-alphabetic character to accept that suggestion. But if I go on typing “xartionu” and go back to delete the ‘u,’ the “cartoon” suggestion disappears. So I was playing a kind of game with the ‘touch as I was typing relatively long strings and trying to avoid extra characters. I lost a few accurate suggestions and had to retype these, but the more I trusted the predictive algorithm, the less frequently I had to retype.

During a 90-minute talk, I entered about 500 words. While it may not sound like much, I would say that it captured the gist of what I was trying to write down. I don’t think I would have written down much more if I had been writing on paper. Some of these words were the same as the ones Dennett uttered but the bulk of those notes were my own thoughts on what Dennett was saying. So there were different cognitive processes going on at the same time, which greatly slowed down each specific process. I would still say that I was able to follow the talk rather closely and that my notes are pretty much appropriate for the task.

Now, I still have some issues with entering text using the ‘touch’s onscreen keyboard.

  • While it makes sense to make it the default that all suggestions are accepted, there could be an easier way to refuse suggestions than tapping the box where that suggestion appears.
  • It might also be quite neat (though probably inefficient) if the original characters typed by the user were somehow kept in memory. That way, one could correct inaccurate predictions using the original string.
  • The keyboard is both very small for fingers and quite big for the screen.
  • Switching between alphabetic characters and numbers is somewhat inefficient.
  • While predictions have some of the same effect, the lack of a “spell as you type” feature decreases my confidence in avoiding typos.
  • Dictionary-based predictions are still inefficient in bilingual writing.
  • The lack of copy-paste changes a lot of things about text entry.
  • There’s basically no “command” or “macro” available during text entry.
  • As a fan of outliners, I’m missing the possibility to structure my notes directly as I enter them.
  • A voice recorder could do wonders in conjunction with text entry.
  • I really just wish Dasher were available on OSX iPhone.
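The second wishlist item above (keeping the original characters in memory) can be sketched quite simply: store the raw keystrokes alongside each accepted prediction, so a bad prediction can later be reverted to what was originally typed. This is a hypothetical design, with made-up names, not any real text-entry API.

```python
# Sketch of keeping raw keystrokes alongside accepted predictions.
# Hypothetical design; PredictionLog is an invented name.
class PredictionLog:
    def __init__(self):
        self.words = []  # list of (raw_input, accepted_word) pairs

    def accept(self, raw, predicted):
        """Record a word: what was actually typed, and what was accepted."""
        self.words.append((raw, predicted))

    def revert(self, index):
        """Undo a bad prediction by restoring the original keystrokes."""
        raw, _ = self.words[index]
        self.words[index] = (raw, raw)

    def text(self):
        return " ".join(word for _, word in self.words)


log = PredictionLog()
log.accept("xartion", "cartoon")   # a helpful correction
log.accept("Dennett", "Bennett")   # a wrong "correction" of a proper name
log.revert(1)                      # restore what was really typed
print(log.text())  # → cartoon Dennett
```

The cost is a bit of extra memory per word, which is probably why it was described above as “neat though probably inefficient.”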

All told, taking notes on the iPod touch is more efficient than I thought it’d be but less pleasant than I wish it could become.

Touch Thoughts: Apple's Handheld Strategy

I’m still on the RDF.
Apple‘s March 6, 2008 event was about enterprise and development support for its iPhone and iPod touch lines of handheld devices. Lots to think about.

(For convenience’s sake, I’ll lump together the iPod touch and the iPhone under the name “Touch,” which seems consistent with Apple’s “Cocoa Touch.”)

Been reading a fair bit about this event. Interesting reactions across the board.

My own thoughts on the whole thing.
I appreciate the fact that Phil Schiller began the “enterprise” section of the event with comments about a university. Though universities need not be run like profit-hungry corporations, linking Apple’s long-standing educational focus with its newly invigorated enterprise focus makes sense. And I had a brief drift-off moment as I was thinking about Touch products in educational contexts.

I’m surprised at how enthusiastic I get about the enterprise features. Suddenly, I can see Microsoft’s Exchange make sense.

I get the clear impression that even more things will come into place at the end of June than Apple has announced. Possibly new Touch models or lines. Probably the famous 3G iPhone. Apple-released apps. Renewed emphasis on server technology (XServe, Mac OS X Server, XSan…). New home WiFi products (AirPort, Time Capsule, Apple TV…). New partnerships. Cool VC-funded startups. New features on the less aptly named “iTunes” store.

Though it was obvious already, the accelerometer is an important feature. It seems especially well-adapted to games and casual gamers like myself are likely to enjoy games this feature makes possible. It can also lead to very interesting applications. In fact, the “Etch and Sketch” demo was rather convincing as a display of some core Touch features. These are exactly the features which help sell products.
Actually, I enjoyed the “wow factor” of the event’s demos. I’m convinced that it will energize developers and administrators, whether or not they plan on using Touch products. Some components of Apple’s Touch strategy are exciting enough that the more problematic aspects of this strategy may matter a bit less. Those of us dreaming about Android, OpenMoko, or even a revived NewtonOS can still find things to get inspired by in Apple’s roadmap.

What’s to come, apart from what was announced? No idea. But I do daydream about all of this.
I’m especially interested in the idea of Apple Touch as “mainstream, WiFi, mobile platform.” There’s a lot of potential for Apple-designed, WiFi-enabled handhelds. Whether or not they include a cellphone.
At this point, Apple only makes five models of Touch products: three iPod touches and two iPhones. Flash memory is the main differentiating factor within a line. It makes it relatively easy to decide which device to get but some product diversity could be interesting. While some people expect/hope that Apple will release radically new form factors for Touch devices (e.g., a tablet subnotebook), it’s quite likely that other features will help distinguish Apple’s Touch hardware.
Among features I’d like to get through software, add-ons, or included in a Touch product? Number of things, some alluded to in the “categories” for this post. Some of these I had already posted.

  • Quality audio recording (to make it the ideal fieldwork audio tool).
  • eBook support (to compete with Amazon’s Kindle).
  • Voice support (including continuous dictation, voice interface…).
  • Enhanced support for podcasting (interacting with podcasts, sending audio/video responses…)
  • Video conferencing (been thinking about this for a while).
  • GPS (location will be big).
  • Mesh networking (a neat feature of OLPC’s XO).
  • Mobile WiMAX (unlikely, but it could be neat).
  • Battery pack (especially for long trips in remote regions).
  • Add-on flash memory (unlikely, but it could be useful, especially for backup).
  • Offline storage of online content (likely, but worth noting).
  • Inexpensive model (especially for “emerging markets”).
  • Access to 3G data networks without cellular “voice plan” (unlikely, but worth a shot).
  • Alternative input methods (MessagEase, Graffiti, adaptive keyboard, speech recognition…).
  • Use as Mac OS X “host” (kind of like a user partition).
  • Bluetooth/WiFi data transfer (no need for cables and docks).
  • MacBook Touch (unlikely, especially with MacBook Air, but it could be fun).
  • Automatic cell to VoIP-over-WiFi switching (saving cell minutes).

Of course, there are many obvious ones which will likely be implemented in software. I’m already impressed by the Omni Group’s pledge to develop a Touch version of their flagship GTD app.

Confessions of a Naïve Tech Enthusiast (Old Draft)

I’m doing a bit of housecleaning. This is an old post I had in my drafts. Moved to Austin in the meantime, blogged about other things…

Dunno that I’ll finish this one. Should have REROed. In a way, it’s the prelude to my ultimate handheld post.

I keep dreaming of different devices which would enhance my personal and professional experience. Not that I’m really a gadget geek. But technology has, to a large extent, been part of improvements in my life.

Though I would hesitate to call my relation to computer technology “addictive”, I certainly tend to depend on it quite a bit.

Some context.

Ok, ok! A lot of context.

Let’s go back. Waaaaay back. To the summer of 1993. I was 21, then, and had already been a Mac-head for more than six years. Without being a complete fanboy of Apple Computer, I guess I was easily impressed by many of its products. During a trip to Cape Cod that summer, I got to read an issue of USA Today. In that issue, I read a review of a new class of computers, the Personal Digital Assistant (PDA). I still remember how I felt. It might not have been my first “tech-induced epiphany” but it was one of the most intense. I not only started drifting off (which was easy enough to do, as I was in the back seat of my mother’s car), I actually started perceiving what my life could be with one of those devices.

Of course, I couldn’t afford any of them. Even when it became possible for me to purchase such a device, it remained financially irrational for me to spend that money on a single device, no matter how life-changing it might have been.

Shortly after discovering the existence of PDAs, and still during the summer of 1993, I discovered the existence of the Internet. Actually, it’s all a bit blurry at this point and it’s possible that I may have heard of the Internet before reading that epiphany-inducing USA Today article. Point is, though, that the Internet, not the PDA, changed my life at that point.

Whatever my computing experience had been until that point is hard to remember because the ‘Net changed everything. I know which specific computers I had been using until then (from a VIC-20 to an SE/30). I do remember long evenings spent typing from my handwritten notes taken during lectures. I still get a weird feeling thinking about a few sleepless nights spent playing simple strategy and card games on my father’s old Mac Plus. But I just can’t remember how I could live without the ‘Net. I wasn’t thinking the same way.

Not too long after getting my first email account (on Université de Montréal’s Mistral server, running IRIX), the ‘Net helped me land my first real job: research assistant at a speech synthesis lab in Lausanne, Switzerland.

In late 1993 or early 1994, I had sent an email to a prominent ethnomusicologist about applying to the graduate program where she was and mentioned something about computer-based acoustic analysis, having taken a few courses in acoustics. She told me about Signalyze, a demo version of which was available through a Gopher server for that Swiss lab. While looking at that Gopher server, I became interested in the lab’s research projects and contacted Eric Keller, head of that lab and the main developer for Signalyze. I was already planning on spending that summer in Switzerland, working at my father’s cousin’s crêperie, so I thought spending some time in Lausanne interacting with members of Keller’s lab was a good idea. I was just finishing my bachelor’s degree in anthropology at Université de Montréal (with a focus on linguistic anthropology and ethnomusicology). So I was interested in doing something related to sound analysis in musical or speech contexts. Keller asked for my résumé and offered me paid work at his lab for the summer. I ended up spending both that summer and the whole 1994-1995 academic year working at this lab, being paid more than some of my mentors in Montreal.

Technologically-speaking, my life in Switzerland was rather intense. I was spending 15 hours a day in front of a computer, doing acoustic analysis of speech sounds. This computer was a Mac IIvx which had once belonged to UQÀM. A very funny coincidence is that the Mac IIvx I was using had become the source of part of the funding for a fellowship at UQÀM. After I met the incredible woman who became my wife, she received that fellowship.

As this computer had a fast connection to the Internet, I became used to constantly having online access. I was mostly using it to send and receive emails, including messages to and from mailing-lists, but I also got to dabble in HTML a bit and did spend some time on the still burgeoning World Wide Web. I also used a few instant messaging systems but I was still focused on email. In fact, I started using email messages to schedule quick coffee breaks with a friend of mine who was working one floor below me.

This 15-month stay in Switzerland is also when I first got a chance to use a laptop. A friend of my father had lent me his laptop so I could work on a translation contract during weekends. Though this laptop (a PowerBook 170, IIRC) wasn’t very powerful, it did give me a vague idea of what mobile computing might be like.

Coming back to Quebec after my Swiss experience, I began my master’s degree in linguistic anthropology. After looking at different options, I bought a PowerMac 7200 through a friend of mine. That 7200 (and the PowerMac 7300 which followed it) greatly enhanced my stationary computing experience. I probably wasn’t thinking about mobile and handheld devices that much, at that time, but I was still interested in mobile computing.

Things started to change in 1997. At that time, I received a Newton MessagePad 130 through the AECP (Apple Educational Consultant Program). This was a great device. Too big for most pockets. But very nice in almost every other respect. While my handwriting is hard for most humans to read, the Newton’s handwriting recognition did quite a decent job with it. I also became quite adept at Graffiti, Palm Inc.’s handwriting recognition software based on a constructed script derived from the uppercase Latin alphabet. I was able to take notes during lectures and conferences. For a while, I carried my Newton everywhere. But it was so bulky that I eventually gave up. I just stopped carrying my Newton around. At one point, I even lent it to a friend who tried it out for a while. But I wasn’t a PDA user anymore. I still needed the perfect PDA. And the Newton wasn’t it.

In early 1998, I went to Mali for the first time. Before I went, I bought a portable cassette recorder to record interviews and some musical performances.

When I moved to Bloomington, IN, in September 1998 to do my Ph.D. coursework, I literally had no computer at home. As I had done for a long time during my bachelor’s degree, I spent long hours in computer labs on campus. The computers themselves were quite good (and updated fairly regularly) and IU had one of the best Internet connections available.

In mid-to-late 2001, when rumours of an Apple-branded portable device started surfacing, I was getting ready for my main ethnographic and ethnomusicological fieldwork trip to Mali.

I kept thinking about different tools to use in the field. For some reason, portable equipment for computing and recording was strangely important for me. I still had my Newton MP130. And I was planning on using it in the field. Except if something radically better came along. So I was hoping for the mysterious handheld device Apple was launching to be something of a Newton replacement. Sure, I knew that Steve Jobs had always hated the Newton, apparently for personal reasons. But I secretly hoped that he would come to his senses and allow Apple to revolutionise the handheld market it had spearheaded back in 1993. When I learnt that the device might be related to audio, I thought that it might be both a PDA and an audio device. More importantly for me, I thought that it would have some recording capabilities, making it the ideal field research tool for ethnographers and ethnomusicologists. I was waiting impatiently for the announcement and, like some others, was disappointed by the initial release, especially when I learnt that the iPod didn’t have any recording capabilities. Soon after this, I bought the main devices which would accompany me on my main field trip to Mali: an Apple iBook (Dual USB) laptop with Combo Drive, a HandSpring Visor Deluxe PDA, a Sony MZ-R37 MiniDisc recorder, and a Sony ECM-MS907 microphone. I used all of these extensively throughout my field trip and, though Internet access was spotty, being able to regularly send and receive messages from my iBook was very beneficial for my research practices. I left the MiniDisc recorder and microphone with Yoro Sidibe, the main person with whom I was working in the field, and had to buy other equipment on my way back.

In mid-2004, I bought a used iPod through eBay. I was still living in Montreal but was moving to South Bend, IN, where I was going to spend a year on a teaching fellowship. To make things easier and cheaper, I had the eBay seller send the iPod to my future office in South Bend. When I arrived in South Bend a month or so later, I finally took possession of my first ever iPod. It was an iPod 2G 20GB with FireWire. It came in a rather big box which also included: the original AC adapter, two extra adapters (including a car one), two pouches, the original headphones, and the original remote control.

My iBook (Dual USB) only had a 10GB hard drive so most of my MP3s were on CD-Rs that I had burnt for use with a CD-MP3 player (at the time, a Rio Volt that I had received as a gift a few years prior). I had also brought in my CD collection, in CD Projects (and similar) carrying cases. Hundreds of CDs, a rather heavy and voluminous burden.

I eventually got a good part of my CD collection on the iPod. And I rediscovered music.

Funny to say, for an ethnomusicologist. But pretty realistic. I had lost touch with this type of private music listening. As convenient as it was to use, my Rio Volt didn’t really enable me to connect with music. It merely allowed me to carry some music with me.

Fairly early on, during my first iPod’s career as my main music device, the remote control started acting funny. Sometimes, it would reboot the iPod for no reason. Using the headphones directly (without the remote control), I didn’t have that problem. Though I know very little about electronics, it seemed to me that something was wrong in the connection between the remote control and the jack. I asked the prior owner, who said he had never had a problem with the remote control. I resorted to not using the remote control and went on my merry way to iPod happiness for almost two years. Apple was releasing new iPod models and I would have liked to own them, but my finances wouldn’t allow me to purchase one and my iPod 2G was still giving me a lot of pleasure.

When Apple introduced podcast support in mid-2005, I became something of a podcast addict. I subscribed to tons of podcasts and was enjoying the iTunes/iPod integration to its fullest potential. A portion of my music MP3 collection was still taking the largest amount of disk space on my iPod but I was spending more time listening to podcasts than listening to MP3s from my personal collection.

In early 2006, I finally converted my whole CD collection to MP3s thanks to the large hard drive (160GB) in the refurbished eMachines H3070 that I had to buy to replace my then-defunct iBook. The complete collection took over 90GB and it took me quite a while to sort it all out. In fact, large chunks of this MP3 collection remain unexplored to this day. My main “active” collection represents about 15GB, which did fit on my iPod’s 20GB hard drive with enough room for lots of podcasts. So, despite being quite outdated by that time, my iPod 2G was giving me a lot of pleasure.

Then, in mid-2006, I started having problems with the headphone jack on this iPod. Come to think of it, I probably had problems with that headphone jack before that time, but it was never enough of a problem to keep me from enjoying my iPod. By mid-2006, however, I was frequently losing sound in one headphone because the jack was moving around. My music- and podcast-listening life wasn’t as happy as it had been. And I started looking elsewhere for audio devices.

Which is how I came to buy an iRiver H120 in July 2006.

Should follow this post up, at some point