Category Archives: speech recognition

Why I Need an iPad

I’m one of those who feel the iPad is the right tool for the job.

This is mostly meant as a reply to this blogthread. But it’s also more generally about my personal reaction to Apple’s iPad announcement.

Some background.

I’m an ethnographer and a teacher. I read a fair deal, write a lot of notes, and work in a variety of contexts. These days, I tend to spend a good amount of time in cafés and other public places where I like to work without being too isolated. I also commute using public transit, listen to lots of podcasts, and create my own. I’m also very aural.

I’ve used a number of PDAs, over the years, from a Newton MessagePad 130 (1997) to a variety of PalmOS devices (until 2008). In fact, some people readily associated me with PDA use.

As soon as I learnt about the iPod touch, I needed one. As soon as I heard about the SafariPad, I wanted one. I’ve been an intense ‘touch user since the iPhone OS 2.0 release and I’m a happy camper.

(A major reason I never bought an iPhone, apart from price, is that it requires a contract.)

In my experience, the ‘touch is the most appropriate device for all sorts of activities which are either part of another activity (reading during a commute) or are simply too short in duration to constitute an actual “computer session.” You don’t “sit down to work at your ‘touch” the way you might sit in front of a laptop or desktop screen. This works great for “looking up stuff” or “checking email.” It also makes a lot of sense during commutes in crowded buses or metros.

In those cases, the iPod touch is almost ideal. Ubiquitous access to Internet would be nice, but that’s not a deal-breaker. Alternative text-input methods would help in some cases, but I do end up being about as fast on my ‘touch as I was with Graffiti on PalmOS.

For other tasks, I have a Mac mini. Sure, it’s limited. But it does the job. In fact, I have no intention of switching to another desktop and I even have an eMachines desktop collecting dust (it’s too noisy to make a good server).

What I miss, though, is a laptop. I used an iBook G3 for several years and loved it. A little while later, I was able to share a MacBook with somebody else and it was a wonderful experience. I even got to play with the OLPC XO for a few weeks. That one was not so pleasant an experience but it did give me a taste for netbooks. And it made me think about other types of iPhone-like devices. Especially in educational contexts. (As I mentioned, I’m a teacher.)

I’ve been laptop-less for a while, now. And though my ‘touch replaces it in many contexts, there are still times when I’d really need a laptop. And these have to do with what I might call “mobile sessions.”

For instance: liveblogging a conference or meeting. I’ve used my ‘touch for this very purpose on a good number of occasions. But it gets rather uncomfortable, after a while, and it’s not very fast. A laptop is better for this, with a keyboard and a larger form factor. But the iPad will be even better because of lower risks of RSI. A related example: just imagine TweetDeck on iPad.

Possibly my favourite example of a context in which the iPad will be ideal: presentations. Even before learning about the prospect of getting iWork on a tablet, presentations were a context in which I really missed a laptop.

Sure, in most cases, these days, there’s a computer (usually a desktop running XP) hooked to a projector. You just need to download your presentation file from Slideshare, show it from Prezi, or transfer it through USB. No biggie.

But it’s not the extra steps which change everything. It’s the uncertainty. Even if it’s often unfounded, I usually get worried that something might just not work, along the way. The slides might not show the same way as you see them because something is missing on that computer or that computer is simply using a different version of the presentation software. In fact, that software is typically Microsoft PowerPoint which, while convenient, fits much less well into my workflow than does Apple Keynote.

The other big thing about presentations is the “presenter mode,” allowing you to get more content than (or different content from) what the audience sees. In most contexts where I’ve used someone else’s computer to do a presentation, the projector was mirroring the computer’s screen, not using it as a different space. PowerPoint has this convenient “presenter view” but very rarely did I see it as an available option on “the computer in the room.” I wish I could use my ‘touch to drive presentations, which I could do if I installed software on that “computer in the room.” But it’s not something that is likely to happen, in most cases.

A MacBook solves all of these problems, and it’s an obvious use for laptops. But how, then, is the iPad better? Basically because of the interface. Switching slides on a laptop isn’t hard, but it’s more awkward than we realize. Even before watching the demo of Keynote on the iPad, I could simply imagine the actual pleasure of flipping through slides using a touch interface. The fit is “natural.”

I sincerely think that Keynote on the iPad will change a number of things, for me. Including the way I teach.

Then, there’s reading.

Now, I’m not one of those people who just can’t read on a computer screen. In fact, I even grade assignments directly from the screen. But I must admit that online reading hasn’t been ideal, for me. I’ve read full books as PDF files or dedicated formats on PalmOS, but it wasn’t so much fun, in terms of the reading process. And I’ve used my ‘touch to read things through Stanza or ReadItLater. But it doesn’t work so well for longer reading sessions. Even in terms of holding the ‘touch, it’s not so obvious. And, what’s funny, even a laptop isn’t that ideal, for me, as a reading device. In a sense, this is when the keyboard “gets in the way.”

Sure, I could get a Kindle. I’m not a big fan of dedicated devices and, at least on paper, I find the Kindle a bit limited for my needs. Especially in terms of sources. I’d like to be able to use documents in a variety of formats and put them in a reading list, for extended reading sessions. No, not “curled up in bed.” But maybe lying down on a sofa without external lighting. Given my experience with the ‘touch, the iPad is very likely the ideal device for this.

Then, there’s the overall “multi-touch device” thing. People have already been quite creative with the small touchscreen on iPhones and ‘touches, and I can just imagine what may be done with a larger screen. Lots has been said about differences in “screen real estate” on laptop and desktop screens. We all know it can make a big difference in terms of what you can display at the same time. In some cases, two screens aren’t even a luxury, for instance when you code and display a page at the same time (LaTeX, CSS…). Certainly, the same qualitative difference applies to multitouch devices. Probably even more so, since the display is also used for input. What Han found missing in the iPhone’s multitouch was the ability to use both hands. With the iPad, Han’s vision is finding its space.

Oh, sure, the iPad is very restricted. For instance, it’s easy to imagine how much more useful it’d be if it did support multitasking with third-party apps. And a front-facing camera is something I was expecting in the first iPhone. It would just make so much sense that a friend seems very disappointed by this lack of videoconferencing potential. But we’re probably talking about predetermined expectations, here. We’re comparing the iPad with something we had in mind.

Then, there’s the issue of the competition. Tablets have been released and some multitouch tablets have recently been announced. What makes the iPad better than these? Well, we could all get in the same OS wars as have been happening with laptops and desktops. In my case, the investment in applications, files, and expertise that I have made in a Mac ecosystem made my XP years relatively uncomfortable and made me appreciate returning to the Mac. My iPod touch fits right in that context. Oh, sure, I could use it with a Windows machine, which is in fact what I did for the first several months. But the relationship between the iPhone OS and Mac OS X is such that using devices in those two systems is much more efficient, in terms of my own workflow, than I could get while using XP and iPhone OS. There are some technical dimensions to this, such as the integration between iCal and the iPhone OS Calendar, or even the filesystem. But I’m actually thinking more about the cognitive dimensions of recognizing some of the same interface elements. “Look and feel” isn’t just about shiny and “purty.” It’s about interactions between a human brain, a complex sensorimotor apparatus, and a machine. Things go more quickly when you don’t have to think too much about where some tools are, as you’re working.

So my reasons for wanting an iPad aren’t about being dazzled by a revolutionary device. They are about the right tool for the job.

Personal Devices

Still thinking about touch devices, such as the iPod touch and the rumoured “Apple Tablet.”

Thinking out loud. Rambling even more crazily than usual.

Something important about those devices is the need for a real “Personal Digital Assistant.” I put PDAs as a keyword for my previous post because I do use the iPod touch like I was using my PalmOS and even NewtonOS devices. But there’s more to it than that, especially if you think about cloud computing and speech technologies.

I mentioned speech recognition in that previous post. SR tends to be a pipedream of the computing world. Despite all the hopes put into realtime dictation, it still hasn’t taken off in a big way. One reason might be that it’s still somewhat cumbersome to use, in current incarnations. Another reason is that it’s relatively expensive as a standalone product which requires some getting used to. But I get the impression that another set of reasons has to do with the fact that it fits best on a personal device. Partly because it needs to be trained. But also because voice itself is a personal thing.

Cloud computing also takes a new meaning with a truly personal device. It’s no surprise that there are so many offerings with some sort of cloud computing feature in the App Store. Not only do Apple’s touch devices have limited file storage space, but the notion of accessing your files in the cloud goes well with a personal device.

So, what’s the optimal personal device? I’d say that Apple’s touch devices are getting close to it but that there’s room for improvement.

Some perspective…

Originally, the PC was supposed to be a “personal” computer. But the distinction was mostly with mainframes. PCs may be owned by a given person, but they’re not so tied to that person, especially given the fact that they’re often used in a single context (office or home, say). A given desktop PC can be important in someone’s life, but it’s not always present like a personal device should be. What’s funny is that “personal computers” became somewhat more “personal” with the ‘Net and networking in general. Each computer had a name, etc. But those machines remained somewhat impersonal. In many cases, even when there are multiple profiles on the same machine, it’s not so safe to assume who the current user of the machine is at any given point.

On paper, the laptop could have been that “personal device” I’m thinking about. People may share a desktop computer but they usually don’t share their laptop, unless it’s mostly used like a desktop computer. The laptop being relatively easy to carry, it’s common for people to bring one back and forth between different sites: work, home, café, school… Sounds tautological, as this is what laptops are supposed to be. But the point I’m thinking about is that these are still distinct sites where some sort of desk or table is usually available. People may use laptops on their actual laps, but the form factor is still closer to a portable desktop computer than to the kind of personal device I have in mind.

Then, we can go all the way to “wearable computing.” There’s been some hype about wearable computers but it has yet to really be part of our daily lives. Partly for technical reasons but partly because it may not really be what people need.

The original PDAs (especially those on NewtonOS and PalmOS) were getting closer to what people might need, as personal devices. The term “personal digital assistant” seemed to encapsulate what was needed. But, for several reasons, PDAs have been having a hard time. Maybe there wasn’t a killer app for PDAs, outside of “vertical markets.” Maybe the stylus was the problem. Maybe the screen size and bulk of the device weren’t getting to the exact points where people needed them. I was still using a PalmOS device in mid-2008 and it felt like I was among the last PDA users.

One point was that PDAs had been replaced by “smartphones.” After a certain point, most devices running PalmOS were actually phones. RIM’s Blackberry succeeded in a certain niche (let’s use the vague term “professionals”) and is even beginning to expand out of it. And devices using other OSes have had their importance. It may not have been the revolution some readers of Pen Computing might have expected, but the smartphone has been a more successful “personal device” than the original PDAs.

It’s easy to broaden our focus from smartphones and think about cellphones in general. If the 3.3B figure can be trusted, cellphones may already be outnumbering desktop and laptop computers by 3:1. And cellphones really are personal. You bring them everywhere; you don’t need any kind of surface to use them; phone communication actually does seem to be a killer app, even after all this time; there are cellphones in just about any price range; cellphone carriers outside of Canada and the US are offering plans which are relatively reasonable; despite some variation, cellphones are rather similar from one manufacturer to the next… In short, cellphones already were personal devices, even before the smartphone category really emerged.

What did smartphones add? Basically, a few PDA/PIM features and some form of Internet access or, at least, some form of email. “Whoa! Impressive!”

Actually, some PIM features were already available on most cellphones and Internet access from a smartphone is in continuity with SMS and data on regular cellphones.

What did Apple’s touch devices add which was so compelling? Maybe not so much, apart from the multitouch interface, a few games, and integration with desktop/laptop computers. Even then, most of these changes were an evolution over the basic smartphone concept. Still, it seems to have worked as a way to open up personal devices to some new dimensions. People now use the iPhone (or some other multitouch smartphone which came out after the iPhone) as a single device to do all sorts of things. Around the World, multitouch smartphones are still much further from being ubiquitous than are cellphones in general. But we could say that these devices have brought the personal device idea to a new phase. At least, one can say that they’re much more exciting than the other personal computing devices.

But what’s next for personal devices?

Any set of buzzphrases. Cloud computing, speech recognition, social media…

These things can all come together, now. The “cloud” is mostly ready and personal devices make cloud computing more interesting because they’re “always-on,” are almost-wearable, have batteries lasting just about long enough, already serve to keep some important personal data, and are usually single-user.

Speech recognition could go well with those voice-enabled personal devices. For one thing, they already have sound input. And, by this time, people are used to seeing others “talk to themselves” as cellphones are so common. Plus, voice recognition is already understood as a kind of security feature. And, despite their popularity, these devices could use a further killer app, especially in terms of text entry and processing. Some of these devices already have voice control and it’s not so much of a stretch to imagine them having what’s needed for continuous speech recognition.

In terms of getting things onto the device, I’m also thinking about such editing features as a universal rich-text editor (à la TinyMCE), predictive text, macros, better access to calendar/contact data, ubiquitous Web history, multiple pasteboards, data detectors, Automator-like processing, etc. All sorts of things which should come from OS-level features.

“Social media” may seem like too broad a category. In many ways, those devices already take part in social networking, user-generated content, and microblogging, to name a few areas of social media. But what about a unified personal profile based on the device instead of the usual authentication method? Yes, all sorts of security issues. But aren’t people unconcerned about security in the case of social media? Twitter accounts are being hacked left and right yet Twitter doesn’t seem to suffer much. And there could be added security features on a personal device which is meant to really integrate social media. Some current personal devices already work well as a way to keep login credentials to multiple sites. The next step, there, would be to integrate all those social media services into the device itself. We may be waiting for OpenSocial, OpenID, OAuth, Facebook Connect, Google Connect, and all sorts of APIs to bring us to an easier “social media workflow.” But a personal device could simplify the “social media workflow” even further, with just a few OS-based tweaks.

Unlike my previous post, I’m not holding my breath for some specific event which will bring us the ultimate personal device. After all, this is just a new version of my ultimate handheld device blogpost. But, this time, I was focusing on what it means for a device to be “personal.” It’s even more of a drafty draft than my blogposts usually have been ever since I decided to really RERO.

So be it.

Speculating on Apple’s Touch Strategy

This is mere speculation on my part, based on some rumours.

I’m quite sure that Apple will come up with a video-enabled iPod touch on September 9, along with iTunes 9 (which should have a few new “social networking” features). This part is pretty clear from most rumour sites.

AppleInsider | Sources: Apple to unveil new iPod lineup on September 9.

Progressively, Apple will be adopting a new approach to marketing its touch devices. Away from the “poor person’s iPhone” and into the “tiny but capable computer” domain. Because the 9/9 event is supposed to be about music, one might guess that there will be a cool new feature or two relating to music. Maybe lyrics display, karaoke mode, or whatever else. Something which will simultaneously be added to the iPhone but would remind people that the iPod touch is part of the iPod family. Apple has already been marketing the iPod touch as a gaming platform, so it’s not a radical shift. But I’d say the strategy is to make Apple’s touch devices increasingly attractive, without cannibalizing sales in the MacBook family.

Now, I really don’t expect Apple to even announce the so-called “Tablet Mac” in September. I’m not even that convinced that the other devices Apple is preparing for expansion of its touch devices lineup will be that close to the “tablet” idea. But it seems rather clear, to me, that Apple should eventually come up with other devices in this category. Many rumours point to the same basic notion, that Apple is getting something together which will have a bigger touchscreen than the iPhone or iPod touch. But it’s hard to tell how this device will fit, in the grand scheme of things.

It’s rather obvious that it won’t be a rebirth of the eMate the same way that the iPod touch wasn’t a rebirth of the MessagePad. But it would make some sense for Apple to target some educational/learning markets, again, with an easy-to-use device. And I’m not just saying this because the rumoured “Tablet Mac” makes me think about the XOXO. Besides, the iPod touch is already being marketed to educational markets through the yearly “Back to school” program which (surprise!) ends on the day before the September press conference.

I’ve been using an iPod touch (1st Generation) for more than a year, now, and I’ve been loving almost every minute of it. Most of the time, I don’t feel the need for a laptop, though I occasionally wish I could buy a cheap one, just for some longer writing sessions in cafés. In fact, a friend recently posted information about some Dell Latitude D600 laptops going for a very low price. That’d be enough for me at this point. Really, my iPod touch suffices for a lot of things.

Sadly, my iPod touch seems to have died, recently, after catching some moisture. If I can’t revive it and if the 2nd Generation iPod touch I bought through Kijiji never materializes, I might end up buying a 3rd Generation iPod touch on September 9, right before I start teaching again. If I can get my hands on a working iPod touch at a good price before that, I may save the money in preparation for an early 2010 release of a new touch device from Apple.

Not that I’m not looking at alternatives. But I’d rather use a device which shares enough with the iPod touch that I could migrate easily, synchronize with iTunes, and keep what I got from the App Store.

There are a number of things I’d like to get from a new touch device. First among them is a better text entry/input method. Some of the others could be third-party apps and services. For instance, a full-featured sharing app. Or true podcast synchronization with media annotation support (à la Revver or Soundcloud). Or an elaborate, fully-integrated logbook with timestamps, Twitter support, and outlining. Or even a high-quality reference/bibliography manager (think RefWorks/Zotero/Endnote). But getting text into such a device without a hardware keyboard is the main challenge. I keep thinking about all sorts of methods, including MessagEase and Dasher as well as continuous speech recognition (dictation). Apple’s surely thinking about those issues. After all, they have some handwriting recognition systems that they aren’t really putting to any significant use.

Something else which would be quite useful is support for videoconferencing. Before the iPhone came out, I thought Apple may be coming out with iChat Mobile. Though a friend announced the iPhone to me by making reference to this, the position of the camera at the back of the device and the fact that the original iPhone’s camera only supported still pictures (with the official firmware) made this dream die out, for me. But a “Tablet Mac” with an iSight-like camera and some form of iChat would make a lot of sense, as a communication device. Especially since iChat already supports such things as screen-sharing and slides. Besides, if Apple does indeed move in the direction of some social networking features, a touch device with an expanded Address Book could take a whole new dimension through just a few small tweaks.

This last part I’m not so optimistic about. Apple may know that social networking is important, at this point in the game, but it seems to approach it with about the same heart as it approached online services with eWorld, .Mac, and MobileMe. Of course, they have the tools needed to make online services work in a “social networking” context. But it’s possible that their vision is clouded by their corporate culture and some remnants of the NIH problem.

Ah, well…

Handhelds for the Rest of Us?

Ok, it probably shouldn’t become part of my habits but this is another repost of a blog comment motivated by the OLPC XO.

This time, it’s a reply to Niti Bhan’s enthusiastic blogpost about the eeePC: Perspective 2.0: The little eeePC that could has become the real “iPod” of personal computing

This time, I’m heavily editing my comments. So it’s less of a repost than a new blogpost. In some ways, it’s partly a follow-up to my “Ultimate Handheld Device” post (which ended up focusing on spatial positioning).

Given the OLPC context, the angle here is, hopefully, a culturally aware version of “a handheld device for the rest of us.”

Here goes…

I think there’s room in the World for a device category more similar to handhelds than to subnotebooks. Let’s call it “handhelds for the rest of us” (HftRoU). Something between a cellphone, a portable gaming console, a portable media player, and a personal digital assistant. Handheld devices exist which cover most of these features/applications, but I’m mostly using this categorization to think about the future of handhelds in a globalised World.

The “new” device category could serve as the inspiration for a follow-up to the OLPC project. One thing about which I keep thinking, in relation to the “OLPC” project, is that the ‘L’ part was too restrictive. Sure, laptops can be great tools for students, especially if these students are used to (or need to be trained in) working with and typing long-form text. But I don’t think that laptops represent the most “disruptive technology” around. If we think about their global penetration and widespread impact, cellphones are much closer to the leapfrog effect about which we all have been writing.

So, why not just talk about a cellphone or smartphone? Well, I’m trying to think both more broadly and more specifically. Cellphones are already helping people empower themselves. The next step might be to add selected features which bring them closer to the OLPC dream. Also, since cellphones are widely distributed already, I think it’s important to think about devices which may complement cellphones. I have some ideas about non-handheld tools which could make cellphones even more relevant in people’s lives. But they will have to wait for another blogpost.

So, to put it simply, “handhelds for the rest of us” (HftRoU) are somewhere between the OLPC XO-1 and Apple’s original iPhone, in terms of features. In terms of prices, I dream that it could be closer to that of basic cellphones which are in the hands of so many people across the globe. I don’t know what that price may be but I heard things which sounded like a third of the price the OLPC originally had in mind (so, a sixth of the current price). Sure, it may take a while before such a low cost can be reached. But I actually don’t think we’re in a hurry.

I guess I’m just thinking of the electronics (and global) version of the Ford Model T. With more solidarity in mind. And cultural awareness.

Google’s Open Handset Alliance (OHA) may produce something more appropriate to “global contexts” than Apple’s iPhone. In comparison with Apple’s iPhone, devices developed by the OHA could be better adapted to the cultural, climatic, and economic conditions of those people who don’t have easy access to the kind of computers “we” take for granted. At the very least, the OHA has good representation on at least three continents and, like the old OLPC project, the OHA is officially dedicated to openness.

I actually care fairly little about which teams will develop devices in this category. In fact, I hope that new manufacturers will spring up in some local communities and that major manufacturers will pay attention.

I don’t care about who does it, I’m mostly interested in what the devices will make possible. Learning, broadly speaking. Communicating, in different ways. Empowering themselves, generally.

One thing I have in mind, and which deviates from the OLPC mission, is that there should be appropriate handheld devices for all age-ranges. I do understand the focus on 6-12 year-olds the old OLPC had. But I don’t think it’s very productive to only sell devices to that age-range. Especially not in those parts of the world (i.e., almost anywhere) where generation gaps don’t imply that children are isolated from adults. In fact, as an anthropologist, I react rather strongly to the thought that children should be the exclusive target of a project meant to empower people. But I digress, as always.

I don’t tend to be a feature-freak but I have been thinking about the main features the prototypical device in this category should have. It’s not a rigid set of guidelines. It’s just a way to think out loud about technology’s integration in human life.

The OS and GUI, which seem like major advantages of the eeePC, could certainly be of the mobile/handheld type instead of the desktop/laptop type. The usual suspects: Symbian, NewtonOS, Android, Zune, PalmOS, Cocoa Touch, embedded Linux, Playstation Portable, WindowsCE, and Nintendo DS. At a certain level of abstraction, there are so many commonalities between all of these that it doesn’t seem very efficient to invent a completely new GUI/OS “paradigm,” like OLPC’s Sugar was apparently trying to do.

The HftRoU require some form of networking or wireless connectivity feature. WiFi (802.11*), GSM, UMTS, WiMAX, Bluetooth… Doesn’t need to be extremely fast, but it should be flexible and it absolutely cannot be cost-prohibitive. IP might make much more sense than, say, SMS/MMS, but a lot can be done with any kind of data transmission between devices. XO-style mesh networking could be a very interesting option. As VoIP has proven, voice can efficiently be transmitted as data so “voice networks” aren’t necessary.
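The “voice is just data” point is easy to check with back-of-the-envelope arithmetic; the figures below are typical ballpark values for illustration, not taken from any particular codec specification:

```python
# Rough voice-bandwidth arithmetic, in kilobits per second.
# All figures are ballpark assumptions for illustration only.

sample_rate_hz = 8_000   # telephone-quality sampling
bits_per_sample = 16     # uncompressed linear PCM

raw_kbps = sample_rate_hz * bits_per_sample / 1000
print(raw_kbps)  # 128.0 kbps uncompressed

# Speech codecs compress this by roughly an order of magnitude;
# a GSM-class codec around 13 kbps fits on quite modest links.
codec_kbps = 13
print(round(raw_kbps / codec_kbps, 1))  # roughly 10x compression
```

In other words, a few kilobits per second of generic data capacity is enough for usable voice, which is why any flexible, affordable data link would do for these devices.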

My sense is that a multitouch interface with an accelerometer would be extremely effective. Yes, I’m thinking of Apple’s Touch devices and MacBooks. As well as about the Microsoft Surface, and Jeff Han’s Perceptive Pixel. One thing all of these have shown is how “intuitive” it can be to interact with a machine using gestures. Haptic feedback could also be useful but I’m not convinced it’s “there yet.”

I’m really not sure a keyboard is very important. In fact, I think that keyboard-focused laptops and tablets are the wrong basis for thinking about “handhelds for the rest of us.” Bear in mind that I’m not thinking about devices for would-be office workers or even programmers. I’m thinking about the broadest user base you can imagine. “The Rest of Us” in the sense of, those not already using computers very directly. And that user base isn’t that invested in (or committed to) touch-typing. Even people who are very literate don’t tend to be extremely efficient typists. If we think about global literacy rates, typing might be one thing which needs to be leapfrogged. After all, a cellphone keypad can be quite effective in some hands and there are several other ways to input text, especially if typing isn’t too ingrained in you. Furthermore, keyboards aren’t that convenient in multilingual contexts (i.e., in most parts of the world). I say: avoid the keyboard altogether, make it available as an option, or use a virtual one. People will complain. But it’s a necessary step.

If the device is to be used for voice communication, some audio support is absolutely required. Even if voice communication isn’t part of it (and I’m not completely convinced it’s the one required feature), audio is very useful, IMHO (I’m an aural guy). In some parts of the world, speakers are much favoured over headphones or headsets. But I personally wish that at least some HftRoU could have external audio inputs/outputs. Maybe through USB or an iPod-style connector.

A voice interface would be fabulous, but there still seem to be technical issues with both speech recognition and speech synthesis. I used to work in that field and I keep dreaming, like Bill Gates and others do, that speech will finally take the world by storm. But maybe the time still hasn’t come.

It’s hard to tell what size the screen should be. There probably needs to be a range of devices with varying screen sizes. Apple’s Touch devices prove that you don’t need a very large screen to have an immersive experience. Maybe some HftRoU screens should in fact be larger than that of an iPhone or iPod touch. Especially if people are to read or write long-form text on them. Maybe the eeePC had it right. Especially if the devices’ form factor is more like a big handheld than like a small subnotebook (i.e., slimmer than an eeePC). One reason form factor matters, in my mind, is that it could make the devices “disappear.” That, and the difference between having a device on you (in your pocket) and carrying a bag with a device in it. Form factor was a big issue with my Newton MessagePad 130. As the OLPC XO showed, cost and power consumption are also important issues regarding screen size. I’d vote for a range of screens between 3.5 inch (iPhone) and 8.9 inch (eeePC 900) with a rather high resolution. A multitouch version of the XO’s screen could be a major contribution.

In terms of both audio and screen features, some consideration should be given to adaptive technologies. Most of us take for granted that “almost anyone” can hear and see. We usually don’t perceive major issues in the fact that “personal computing” typically focuses on visual and auditory stimuli. But if these devices truly are “for the rest of us,” they could help empower visually- or hearing-impaired individuals, who are often marginalized. This is especially relevant in the logic of humanitarianism.

HftRoU needs as much autonomy from a power source as possible. Both in terms of the number of hours devices can be operated without needing to be connected to a power source and in terms of flexibility in power sources. Power management is a major technological issue with portable, handheld, and mobile devices. Engineers are hard at work, trying to find as many solutions to this issue as they can. This was, obviously, a major area of research for the OLPC. But I’m not even sure the solutions they have found are the only relevant ones for what I imagine HftRoU to be.

GPS could have interesting uses, but doesn’t seem very cost-effective. Other “wireless positioning systems” (à la Skyhook) might represent a more rational option. Still, I think positioning systems are one of the next big things. Not only for navigation or for location-based targeting. But for a set of “unintended uses” which are the hallmark of truly disruptive technology. I still remember an article (probably in the venerable Wired magazine) about the use of GPS/GIS for research into climate change. Such “unintended uses” are, in my mind, much closer to the constructionist ideal than the OLPC XO’s unified design can ever get.

Though a camera seems to be a given in any portable or mobile device (even the OLPC XO has one), I’m not yet that clear on how important it really is. Sure, people like taking pictures or filming things. Yes, pictures taken through cellphones have had a lasting impact on social and cultural events. But I still get the feeling that the main reason cameras are included on so many devices is for impulse buying, not as a feature to be used so frequently by all users. Also, standalone cameras probably have a rather high level of penetration already and it might be best not to duplicate this type of feature. But, of course, a camera could easily be a differentiating factor between two devices in the same category. I don’t think that cameras should be absent from HftRoU. I just think it’s possible to have “killer apps” without cameras. Again, I’m biased.

Apart from networking/connectivity uses, Bluetooth seems like a luxury. Sure, it can be neat. But I don’t feel it adds that much functionality to HftRoU. Yet again, I could be proven wrong. Especially if networking and other inter-device communication are combined. At some abstract level, there isn’t that much difference between exchanging data across a network and controlling a device with another device.

Yes, I do realize I pretty much described an iPod touch (or an iPhone without camera, Bluetooth, or cellphone fees). I’ve been lusting over an iPod touch since September and it does colour my approach. I sincerely think the iPod touch could serve as an inspiration for a new device type. But, again, I care very little about which company makes that device. I don’t even care about how open the operating system is.

As long as our minds are open.

Free As In Beer: The Case for No-Cost Software

To summarize the situation:

  1. Most of the software for which I paid a fee, I don’t really use.
  2. Most of the software I really use, I haven’t paid a dime for.
  3. I really like no-cost software.
  4. You might want to call me “cheap” but, if you’re developing “consumer software,” you may need to pay attention to the way people like me think about software.

No, I’m not talking about piracy. Piracy is wrong on a very practical level (not to mention legal and moral issues). Piracy and anti-piracy protection are in a dynamic that I don’t particularly enjoy. In some ways, forms of piracy are “ruining it for everyone.” So this isn’t about pirated software.

I’m not talking about “Free/Libre/Open Source Software” (FLOSS) either. I tend to relate to some of the views held by advocates of “Free as in Speech” or “Open” developments but I’ve had issues with FLOSS projects, in the past. I will gladly support FLOSS in my own ways but, to be honest, I ended up losing interest in some of the most promising projects out there. Not saying they’re not worth it. After all, I do rely on many of those projects. But in talking about “no-cost software,” I’m not talking about Free, Libre, or Open Source development. At least, not directly.

Basically, I was thinking about the complex equation which, for any computer user, determines the cash value of a software application. Most of the time, this equation is somehow skewed. And I end up frustrated when I pay for software and almost giddy when I find good no-cost software.

An old but representative example of my cost-software frustration: QuickTime Pro. I paid for it a number of years ago, in preparation for a fieldwork trip. It seemed like a reasonable thing to do, especially given the fact that I was going to manipulate media files. When QuickTime was updated, my license stopped working. I was basically never able to use the QuickTime Pro features. And while it’s not a huge amount of money, the frustration of having paid for something I really didn’t need left me surprisingly bitter. It was a bad decision at that time so I’m now less likely to buy software unless I really need it and I really know how I will use it.

There’s an interesting exception to my frustration with cost-software: OmniOutliner (OO). I paid for it and have used it extensively for years. When I was “forced” to switch to Windows XP, OO was possibly the piece of software I missed the most from Mac OS X. And as soon as I was able to come back to the Mac, it’s one of the first applications I installed. But, and this is probably an important indicator, I don’t really use it anymore. Not because it lacks features I found elsewhere. But because I’ve had to adapt my workflow to OO-less conditions. I still wish there were an excellent cross-platform outliner for my needs. And, no, Microsoft OneNote isn’t it.

Now, I may not be a typical user. If the term weren’t so self-aggrandizing, I’d probably call myself a “Power User.” And, as I keep saying, I am not a coder. Therefore, I’m neither the prototypical “end user” nor the stereotypical “code monkey.” I’m just someone spending inordinate amounts of time in front of computers.

One dimension of my computer behavior which probably does put me in a special niche is that I tend to like trying out new things. Even more specifically, I tend to get overly enthusiastic about computer technology to then become disillusioned by said technology. Call me a “dreamer,” if you will. Call me “naïve.” Actually, “you can call me anything you want.” Just don’t call me to sell me things. 😉

Speaking of pressure sales. In a way, if I had truckloads of money, I might be a good target for software sales. But I’d be the most demanding user ever. I’d require things to work exactly like I expect them to work. I’d be exactly what I never am in real life: a dictator.

So I’m better off as a user of no-cost software.

I still end up making feature requests, on occasion. Especially with Open Source and other open development projects. Some developers might think I’m just complaining as I’m not contributing to the code base or offering solutions to a specific usage problem. Eh.

Going back to no-cost software. The advantage isn’t really that we, users, spend less money on the software distribution itself. It’s that we don’t really need to select the perfect software solution. We can just make do with what we have. Which is a huge “value-add proposition” in terms of computer technology, as counter-intuitive as this may sound to some people.

To break down a few no-cost options.

  • Software that came with your computer. With an Eee PC, iPhone, XO, or Mac, it’s actually an important part of the complete computing experience. Sure, there are always ways to expand the software offering. But the included software may become a big part of the deal. After all, the possibilities are already endless. Especially if you have ubiquitous Internet access.
  • Software which comes through a volume license agreement. This often works for Microsoft software, at least at large educational institutions. Even if you don’t like it so much, you end up using Microsoft Office because you have it on your computer for free and it does most of the things you want to do.
  • Software coming with a plan or paid service. Including software given by ISPs. These tend not to be “worth it.” Yet the principle (or “business model,” depending on which end of the deal you’re on) isn’t so silly. You already pay for a plan of some kind, you might as well get everything you need from that plan. Nobody (not even AT&T) has done it yet in such a way that it would be to everyone’s advantage. But it’s worth a thought.
  • “Webware” and other online applications. Call it “cloud computing” if you will (it was a buzzphrase, a few days ago). And it changes a lot of things. Not only does it simplify things like backup and migration, but it often makes for a seamless computer experience. When it works really well, the browser effectively disappears and you just work in a comfortable environment where everything you need (content, tools) is “just there.” This category is growing rather rapidly at this point but many tech enthusiasts were predicting its success a number of years ago. Typical forecasting, I guess.
  • Light/demo versions. These are actually less common than they once were, especially in terms of feature differentiation. Sure, you may still play the first few levels of a game in demo version and some “express” or “lite” versions of software are still distributed for free as teaser versions of more complete software. But, like the shareware model, demo and light software may seem to have become much less prominent a part of the typical computer user’s life than just a few years ago.
  • Software coming from online services. I’m mostly thinking about Skype but it’s a software category which would include any program with a desktop component (a “download”) and an online component, typically involving some kind of individual account (free or paid). Part subscription model, part “Webware companion.” Most of Google’s software would qualify (Sketchup, Google Earth…). If the associated “retail software” were free, I wouldn’t hesitate to put WoW in this category.
  • Actual “freeware.” Much freeware could be included in other categories but there’s still an idea of a “freebie,” in software terms. Sometimes, said freeware is distributed in view of getting people’s attention. Sometimes the freeware is just the result of a developer “scratching her/his own itch.” Sometimes it comes from lapsed shareware or even lapsed commercial software. Sometimes it’s “donationware” disguised as freeware. But, if only because there’s a “freeware” category in most software catalogs, this type of no-cost software needs to be mentioned.
  • “Free/Libre/Open Source Software.” Sure, I said earlier this was not what I was really talking about. But that was then and this is now. 😉 Besides, some of the most useful pieces of software I use do come from Free Software or Open Source. Mozilla Firefox is probably the best example. But there are many other worthy programs out there, including BibDesk, TeXShop, and FreeCiv. Though, to be honest, Firefox and Flock are probably the ones I use the most.
  • Pirated software (aka “warez”). While software piracy can technically let some users avoid the cost of purchasing a piece of software, the concept is directly tied with commercial software licenses. (It’s probably not piracy if the software distribution is meant to be open.) Sure, pirates “subvert” the licensing system for commercial software. But the software category isn’t “no-cost.” To me, there’s even a kind of “transaction cost” involved in the piracy. So even if the legal and ethical issues weren’t enough to exclude pirated software from my list of no-cost software options, the very practicalities of piracy put pirated software in the costly column, not in the “no-cost” one.

With all but the last category, I end up with most (but not all) of the software solutions I need. In fact, there are ways in which I’m better served now with no-cost software than I have ever been with paid software. I should probably make a list of these, at some point, but I don’t feel like it.

I mostly felt like assessing my needs, as a computer user. And though there always are many things I wish I could do but currently can’t, I must admit that I don’t really see the need to pay for much software.

Still… What I feel I need, here, is the “ultimate device.” It could be handheld. But I’m mostly thinking about a way to get ideas into a computer-friendly format. A broad set of issues about a very basic thing.

The spark for this blog entry was a reflection about dictation software. Not only have I been interested in speech technology for quite a while but I still bet that speech (recognition/dictation and “text-to-speech”) can become the killer app. I just think that speech hasn’t “come true.” It’s there, some people use it, the societal acceptance for it is likely (given cellphone penetration most anywhere). But its moment hasn’t yet come.

No-cost “text-to-speech” (TTS) software solutions do exist but are rather impractical. In the mid-1990s, I spent fifteen months doing speech analysis for a TTS research project in Switzerland. One of the best periods in my life. Yet, my enthusiasm for current TTS systems has been dampened. I wish I could be passionate about TTS and other speech technology again. Maybe the reason I’m not is that we don’t have a “voice desktop,” yet. But, for this voice desktop (voicetop?) to happen, we need high quality, continuous speech recognition. IOW, we need a “personal dictation device.” So, my latest 2008 prediction: we will get a voice device (smartphone?) which adapts to our voices and does very efficient and very accurate transcription of our speech. (A correlated prediction: people will complain about speech technology for a while before getting used to the continuous stream of public soliloquy.)

Dictation software is typically quite costly and complicated. Most users don’t see a need for dictation software so they don’t see a need for speech technology in computing. Though I keep thinking that speech could improve my computing life, I’ve never purchased a speech processing package. Like OCR (which is also dominated by Nuance, these days) it seems to be the kind of thing which could be useful to everyone but ends up being limited to “vertical markets.” (As it so happens, I did end up buying an OCR program at some point and kept hoping my life would improve as the result of being able to transform hardcopies into searchable files. But I almost never used OCR, so my frustration with cost-software continues.)

Ah, well…

Uses for PDAs

Been thinking about blogging on my use of Personal Digital Assistants (PDAs) for a little while. Here’s my chance:

There’s simply no market these days for the traditional PDA, as even basic mobile phones can do everything a PDA can do, just with more style. Report: Apple developing OS X minitablet | One More Thing – CNET News.com

Uh-oh! No you didn’t! Well, Steve Jobs made a similar statement a long time ago so it’s not like you’re the first one to say it. But you’re still wrong!

(This blog entry will be choppier than usual. Should have posted this as a comment. But this is getting longer than I expected and I prefer trackbacks anyway.)

Not exactly sure where people in the Bay Area get the impression that there is no market for the traditional PDA. In my mind, the potential market for “the traditional PDA” is underestimated because the ideal PDA has yet to be released. No, the current crop of smartphones aren’t it.

Having said this, I do think cellphones have the brightest future of pretty much any electronic device type, but I don’t agree that any cellphone currently does what a PDA can do. So, while I think the ideal portable device would likely be a cellphone, I’d like to focus on what a PDA really is.

While it’s clear that PDAs have had a tortuous history since the first Newton and Magic Cap devices were released, other tools haven’t completely obliterated the need for PDAs. Hence the “cult following” for Newton Message Pads and the interest in new generations of PDA-like devices.

One thing to keep in mind is that PDAs are not merely PIMs (personal information managers, typically focusing on contacts and calendars). Instead of a glorified organiser, a PDA is a complete computer with a focus on personal data. And people do care about personal data in computing.

Now, a disclaimer of sorts: I’ve been an active PDA user for a number of years. When I don’t have a PDA, I almost feel like something is missing from my life.

I was taken by the very concept of PDAs the first time I saw an article on Apple’s Newton devices in a mainstream U.S. newspaper, way back in the early 1990s. I was dreaming of all the possibilities. And longed for my own Newton MessagePad.

I received a MessagePad 130 from Apple a few years later, having done some work for them. Used that Newton for a while and really enjoyed the experience. While human beings find my handwriting extremely difficult to read, my Newton MP130 did a fairly good job at recognising it. And having installed a version of Graffiti, I was able to write rather quickly on the device. The main issue I had with Newton devices was size. The MP was simply too bulky for me to carry around everywhere. I eventually stopped using the MP, but missed its convenience.

I started using that Newton again in 2001, as I was preparing for fieldwork. Because I didn’t have a parallel port on the iBook I was getting for fieldwork, I also bought a Handspring Visor Deluxe. The Visor became a very valuable tool during my fieldwork trip to Mali, in 2002. IIRC, this model had already been discontinued but I had no trouble using it or finding new software for it. I used the Visor to take copious amounts of data which I was able to periodically transfer to my iBook. The fact that the Visor ran on standard batteries was definitely an asset in the field but I did lose data on occasion because, unlike Newton devices, Palm devices didn’t have persistent memory storage.

Coming back from Mali, I bought my first Sony Clié. I’ve pretty much stuck with Cliés ever since and have been quite happy with them. Cliés have a few advantages over other PalmOS devices, like MemoryStick support and the jog dial. The form factor and screen resolution of an entry-level Clié were much better than those of my old Visor. Sony has discontinued sales of its Clié devices outside of Japan. Some used Cliés go for $30 on eBay.

So, what do I do with a PDA? Actually, the main thing really is taking notes. Reading notes, research notes, lecture notes, conference notes, etc. I’ve taken notes on coffee I’ve tried, on things I’d like to learn, on moments I want to write more extensively about… Though my fingers are rather small, typing on a small QWERTY keyboard has never been a real option for me. I tried using the keyboard on a Clié NX70V and it wasn’t nearly as efficient as using Graffiti. In fact, I’ve become quite adept at MessagEase. I can usually take elaborate notes in real time and organise them as I wish. Some notes remain as snippets while other notes become part of bigger pieces, including much of what I’ve written in the past ten years.

I also use my PDA for a number of “simpler” things like converting units (volume and temperature, especially), playing games (while waiting for something or while listening to podcasts), setting different timers, planning trips on public transportation systems, etc. I used to try and use more PDA applications than I do now but I still find third party applications an important component of any real PDA.
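The conversions I have in mind are trivial, formula-driven lookups. A minimal sketch in Python (the units and values here are just examples, not anything a specific PDA app implements):

```python
# Quick, formula-based unit conversions of the kind I reach for on a PDA.

def celsius_to_fahrenheit(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

def litres_to_us_gallons(litres):
    """Convert litres to US liquid gallons (1 gallon = 3.785411784 L)."""
    return litres / 3.785411784

print(celsius_to_fahrenheit(20))           # 68.0
print(round(litres_to_us_gallons(10), 3))  # 2.642
```

On a handheld, the whole point is that such lookups take seconds, not a full “computer session.”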

I always wanted to have a WiFi-enabled PDA. It’s probably the main reason behind my original reaction to the iPod touch launch. With a good input system and a semi-ubiquitous WiFi connection, a WiFi-enabled PDA could be a “dream come true,” for me. Especially in terms of email, blogging, and social networking. Not to mention simple Web queries.

I do have a very clear idea in mind as to what would be my ideal PDA. I don’t need it to be an MP3 player, a gaming console, or a phone. I don’t really want it to have a qwerty keyboard or a still camera. I don’t even care so much about it having a colour screen. But it should have an excellent battery life, a small size, good synchronisation features, third party apps, persistent memory, a very efficient input system, and a user community. I dream of it having a high-quality sound recorder, a webcam (think videoconferencing), large amounts of memory, and a complete set of voice features perfectly tuned to its owner’s voice (like voice activation and speaker-dependent, continuous speech recognition). It could act as the perfect unit to store any kind of personal data as a kind of “smart thumbdrive.” It could be synchronised with almost any other machine without any loss of information. It would probably have GPS and location-enabled features. It could be used to drive other systems or act as the ultimate smartcard. And it should be inexpensive.

I personally think price is one of the main reasons the traditional PDA has had such a hard time building/reaching markets. Inexpensive PDAs tended to miss important features. The most interesting PDAs were as expensive as much more powerful computers. Surely, miniaturisation is costly and it never was possible for any company to release a really inexpensive yet full-featured PDA. So it may be accurate to say that the traditional PDA is too expensive for its potential market. I perceive a huge difference between problems associated with costs and the utter lack of any PDA market.

Price does tend to be a very important factor with computer technology. The OLPC project is a good example of this. While the laptop produced through this project has many other features, the one feature which caught most of the media attention was the expected price for the device, around US$100. All this time, many people have been arguing that the project should have been a cellphone project because cellphone penetration is already high and cellphones are already the perfect leapfrog tool.

So it’s unlikely that I will get my dream PDA any time soon. Chances are, I’ll end up having to use a smartphone with very few of the features I really want my PDA to have. But, as is my impression with the OLPC project, we still need to dream and talk about what these devices can be.

iPhone Wishlist

Yeah, everybody’s been talking about the iPhone. It’s last week’s story but it can still generate a fair bit of coverage. People are already thinking about the next models.

Apple has most of the technology to build what would be my dream handheld device but the iPhone isn’t it. Yet.

Here’s my wishful thinking for what could in fact be the coolest handheld ever. Of course, the device should have the most often discussed features which the iPhone currently misses (Flash, MMS, chat…). But I’m going much further, here.

  • Good quality audio recording (as with the recording add-ons for the iPod 5G).
  • Disk space (say, 80GB).
  • VoIP support (Skype or other, but as compatible as possible).
  • Video camera which can face the user (for videoconference).
  • Full voice interface: speech recognition and text-to-speech for dialing, commands, and text.
  • Handwriting recognition.
  • Stylus support.
  • Data transfer over Bluetooth.
  • TextEdit.
  • Adaptive technology for word recognition.
  • Not tied to cellular provider contract.
  • UMA Cell-to-WiFi (unlicensed mobile access).
  • GPS.
  • iLife support.
  • Sync with Mac OS X and Windows.
  • Truly international cellular coverage.
  • Outliner.
  • iWork support.
  • Disk mode.
  • Multilingual support.
  • Use as home account on Mac OS X “host.”
  • Front Row.
  • USB and Bluetooth printing.
  • Battery packs with standard batteries.

The key point here isn’t that the iPhone should be a mix between an iPod and a MacBook. I’m mostly thinking about the fact that the “Personal” part of the “PC” and “PDA” concepts has not come to fruition yet. Sure, your PC account has your preferences and some personal data. Your PDA contains your contacts and to-do lists. But you still end up with personal data in different places. Hence the need for Web apps. As we all know, web apps are quite useful but there’s still room for standalone applications, especially on a handheld. It wouldn’t take much for the iPhone to be the ideal tool to serve as a “universal home” where a user can edit and output files. To a musician or podcaster, it could become the ideal portable studio.

But where the logical step needs to be taken is in “personalization.” Apparently, the iPhone’s predictive keyboard doesn’t even learn from the user’s input. Since the iPhone is meant to be used by a single individual, it seems quite strange that it does not, minimally, adapt to typed input. Yet with a device already containing a headset, it seems to me that speech technologies could be ideal. Full-text continuous speech recognition already exists and what it requires is exactly what the iPhone could provide: adaptation to a user’s voice and speech patterns. Though it may be awkward for people to use a voice interface in public, cellphones have created a whole group of people who seem to be talking to themselves. 😉
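As an illustration of what “adapting to typed input” could mean, here is a toy sketch in Python. It’s purely hypothetical and has nothing to do with Apple’s actual predictive engine; it just shows how a keyboard could rank completions by what its owner actually types:

```python
from collections import Counter

class PredictiveKeyboard:
    """Toy word predictor that learns from what the user actually types.

    A hypothetical sketch of the kind of adaptation I have in mind;
    real predictive keyboards are far more sophisticated.
    """

    def __init__(self, dictionary):
        # Start every dictionary word with a count of 1 so unseen words still rank.
        self.counts = Counter({word: 1 for word in dictionary})

    def learn(self, typed_text):
        # Reinforce words the user has actually typed.
        self.counts.update(typed_text.lower().split())

    def suggest(self, prefix, n=3):
        # Rank completions by how often this particular user has typed them.
        matches = [w for w in self.counts if w.startswith(prefix.lower())]
        return sorted(matches, key=lambda w: -self.counts[w])[:n]

kb = PredictiveKeyboard(["the", "they", "them", "ethnography"])
kb.learn("ethnography is what I do, ethnography every day")
print(kb.suggest("eth"))  # ['ethnography']
print(kb.suggest("the"))  # ['the', 'they', 'them'] (equal counts keep dictionary order)
```

The point isn’t the algorithm; it’s that a single-owner device could accumulate exactly this kind of per-user data, for typing and for voice alike.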

Though very different from speech recognition, text-to-speech could integrate really well with a voice-driven device. Sharing the same “dictionaries” across all applications on the same device, the TTS and SR features could be trained very specifically to a given user. While screens have been important on computers for quite a while, voice-activated computers have been prominent in science-fiction for probably as long. The most common tasks done on computers (writing messages, making appointments, entering data, querying databases…) could all be done quite effectively through a voice interface. And the iPhone could easily serve as a voice interface for other computers.

Yes, I’m nightdreaming. It’s a good way to get some rest.

What Radio Open Source Should Do

I probably think too much. In this case, about a podcast and radio show which has been with me for as long as I started listening to podcasts: Radio Open Source on Public Radio International. The show is hosted by Christopher Lydon and is produced in Cambridge, MA, in collaboration with WGBH Boston. The ROS staff is a full team working on not only the show and the podcast version but on a full-fledged blog (using a WordPress install, hosted by Contegix) with something of a listener community.

I recently decided not to listen to ROS anymore. Nothing personal, it just wasn’t doing it for me anymore. But I spent enough time listening to the show and thinking about it that I even have suggestions about what they should do.

At the risk of sounding opinionated, I’m posting these comments and suggestions. In my mind, honesty is always the best policy. Of course, nothing personal about the excellent work of the ROS team.

Executive summary of my suggestion: a weekly spinoff produced by the same team, as an actual podcast, possibly as a summary of highlights. Other shows do something similar on different radio stations and it fits the podcasting model. Because time-shifting is of the essence with podcasts, a rebroadcast version (instead of a live show) would make a lot of sense. Obviously, it would imply more work for the team as a whole but I sincerely think it would be worth it.

ROS has been one of the first podcasts to which I subscribed and it might be the one that I have maintained in my podcatcher for the longest time. The reason is that several episodes have inspired me in different ways. My perception is that the teamwork “behind the scenes” makes for a large part of the success of the show.

Now, I don’t know anything about the inner workings of the ROS team. But I do get the impression that some important changes are imminent. The two people who left in the last few months, the grant they received, their successful fundraiser, as well as some perceivable changes in the way the show is handled tell me that ROS may be looking for new directions. I’m just an ethnographer and not a media specialist but here are some of my (honest) critical observations.

First, some things which I find quite good about the show (or some reasons I was listening to the show).

  • In-depth discussions. As Siva Vaidhyanathan mentioned it on multiple occasions, ROS is one of few shows in the U.S. during which people can really spend an hour debating a single issue. While intriguing, Siva’s comparison with Canadian shows does seem appropriate according to my own experience with CBC and Radio-Canada. Things I’ve heard in Western Europe and West Africa would also fit this pattern. A show like ROS is somewhat more like The New Yorker than like The New York Times. (Not that these are innocent choices, of course.)
  • Research. A lot of care has been put in preparing for each show and, well, “it shows.” The “behind the scenes” team is obviously doing a great job. I include in this the capacity for the show to entice fascinating guests to come on the show. It takes diplomacy, care, and insight.
  • Podcasting. ROS was one of the first “public radio” shows to be available as a podcast and it’s possibly one of the radio shows for which the podcasting process is the most appropriate. Ease of subscribing, relatively few problems downloading shows, etc.
  • Show notes. Because the show uses a blog format for all of its episodes, it makes for excellent show notes, very convenient and easy to find. Easy to blog. Good trackback.
  • The “Community.” Though it can be troublesome at times, the fact that the show has a number of fans who act as regular commentators on the blog entries has been an intriguing feature of the show. On occasion, there is a sense that listeners can have some impact on the way the show is structured. Few shows on public radio do this and it’s a feature that makes the show, erm, let’s say “podworthy.” (Apologies to those who hate the “pod-” prefix. At least, you got my drift, right?)

On the other hand, there are things with ROS that have kept putting me off, especially as a podcast. A few of those “pet peeves.”

  • “Now the News.” While it’s perfectly natural for a radio show to have to break for news or ads, the disruption is quite annoying on a podcast. The pacing of the show as a whole becomes completely dominated by the breaks. What’s more, the podcast version makes very obvious the fact that discussions started before the break rarely if ever get any resolution after the break. A rebroadcast would allow for seamless editing. In fact, some television shows offer exclusive online content as a way to avoid this problem. Or, more accurately, some television shows use this concept as a way to entice watchers to visit their websites. Neat strategy, powerful concept.
  • Length. While the length of the show (a radio “hour”) allows for in-depth discussions, the usual pacing of the show often implies a rather high level of repetition. One gets the impression that the early part of the show contains most of the “good tidbits” one needs to understand what will be discussed later. I often listen to the first part of the show (before the first break) and end up skipping the rest of the show. This could be alleviated with a “best of ROS” podcast. In fact, it’s much less of an issue when the listener knows what to expect.
  • Host. Nothing personal. Chris Lydon is certainly a fabulous person and I would feel bad saying anything personal about him even though, to make a point, I have used a provocative title in the past which specifically criticised him. (My point was more about the show as a whole.) In fact, Lydon can be very good as a radio host, as I described in the past. Thing is, Lydon’s interviewing style seems to me more appropriate for a typical radio show than for a podcast. Obviously, he is quite knowledgeable about a wide array of subjects, which enables him to relate to his guests. Also, he surely has a “good name” in U.S. journalistic milieus. But, to be perfectly honest, I sometimes feel that his respect for guests and other participants (blog commentators and callers, when ROS still had them) is quite selective. In my observation, Lydon also tends to do what Larry King described on the Colbert Report as an “I-show” (the host talking about her/his own experience, often preventing a guest from following a thought). It can be endearing on some radio shows but it seems inappropriate for a podcast. What makes this interviewing style even more awkward is the fact that the show is frequently billed as a “conversation.” In conversation analysis, Lydon’s interviews would merit a lot of discussion.
  • Leading questions. While many questions asked on the show do help guests get into interesting issues, many sound like “leading” questions. Maybe not to the “how long have you been beating your wife?” extreme, but it does seem that the show is trying to get something specific out of each guest. Appropriate for journalism but awkward for what is billed as a “conversation.” In fact, many “questions” asked on the show are phrased as affirmative utterances instead of actual questions.
  • Old School Journalism. It may sound like harsh criticism, but what I hear from ROS often makes me think that they still believe some sources are more worthy than others by mere virtue of being “a trusted source.” I’ve been quite critical of what I think of as “groupthink,” often characterised by the fact that everybody listens to, reads, or watches the same sources of information. In Quebec, it’s often Radio-Canada’s television shows. In the U.S., it typically implies that everyone reads the New York Times and thinks of it as their main “source of information.” IMHO, the ROS-NYT connection is a strong one. To me, critical thinking implies a mistrust of specific sources and an ability to process information regardless of the source. I do understand that the NYT is, to several people, the “paper of record,” but the very notion of “paper of record” seems outdated in this so-called “Information Age.” In fact, as an outsider, I often find the NYT even more amenable to critical challenge than some other sources. I got this impression even before the scandals which have been plaguing the NYT. In other words, the NYT is the best example of Old School Journalism. Podcasting is moving away from Old School Journalism, so a podcast version of ROS should move away from NYT groupthink. Lydon’s NYT background is relevant here, but what I describe goes well beyond that print newspaper.
  • The “Wolfpack.” The community around ROS is fascinating. If I had more time, I might want to spend more time “in” it. Every commentator on the show’s entries has interesting things to say and the comments are sometimes more insightful than the show itself. Yet, as contradictory as it may sound, the ROS “fanbase” makes the show less approachable to new listeners. This one is a common feature of open networks with something of a history but it’s heightened by the way the community is handled in the show. It sometimes seems as though some “frequent contributors” are appreciated more than others. The very fact that some people are mentioned as “frequent contributors to the show” makes the “community” sound more like a clique than like an open forum. While Brendan often brought in some questions from the real-time blog commentators, these questions rarely led to real two-way conversations. The overall effect is more like a typical radio talk show than like a community-oriented podcast.
  • Show suggestions. Perhaps because suggestions submitted to the show are quite numerous, very few of them have been discussed extensively. The “pitch a show idea of your own” concept is helpful but the end result is that commentators need to prepare a pitch which might be picked up by a member of the ROS team and presented during the team’s meeting. The process is thus convoluted, non-transparent, non-democratic, and cumbersome. To be perfectly honest, it sounds as if it were “lip service” to the audience instead of being a way to have listeners be part of the show. As a semi-disclaimer, I did pitch several ideas. The one idea of mine which was picked up was completely transformed from the original. Nothing wrong with that, but it doesn’t make the process feel transparent or open. While a digg-like system for voting on suggestions might be a bit too extreme for a show on public radio, I find myself dreaming of the ROS team working on shows pitched by listeners.
  • Time-sensitivity. Because the show is broadcast and podcast four days a week, the production cycle is particularly tight. In this context, commentators need to post on an entry in a timely fashion to “get the chance to be heard.” Perfectly normal, but not that pod-friendly. It seems that the most dedicated listeners are those who listen to the show live while posting comments on the episode’s blog entry. This alienates the actual podcasting audience. Time-shifting is at the very basis of podcasting and many shows have had to adapt to this reality (say, for a contest or to get feedback). The time-sensitive nature of ROS strengthens the idea that it’s a radio show which happens to be podcast, contrary to their claims. A weekly podcast would alleviate this problem.
  • Gender bias. Though I haven’t really counted, it seems to me that a much larger proportion of men than women are interviewed as guests on the show. It even seems that women are only interviewed when the show focuses specifically on gender. Women are then interviewed as women instead of being guests who happen to be women. This is especially flagrant when compared to podcasts and radio shows outside of the U.S. mainstream media. Maybe I’m too gender-conscious, but a gender-balanced show often produces a dynamic which is, I would dare say, “friendlier.”
  • U.S. focus. While it makes sense that a show produced in Cambridge, MA should focus on the U.S., I naively thought that the ‘I’ in PRI implied a global reach. Many ROS episodes have discussed “international affairs,” yet the focus remains on “what does it mean for the U.S.” This approach is quite far from what I have heard in West Africa, Western Europe, and Canada.

Phew!

Yes, that’s a lot.

Overall, I still enjoyed many things about the show while I was listening to it. I was often compelled to post a blog entry about something I heard on the show, which is, in itself, a useful thing about a podcast. But the current format of the show is clearly not what I expect a podcast to be.

Now what? Well, my dream would be a podcast on disparate subjects with the team and clout of ROS but with podcasting in mind, from beginning to end. I imagine the schedule to be more of a weekly wrap-up than a live daily show. As a podcast listener, I tend to prefer weekly shows. In some cases, podcasts serve as a way to incite listeners to listen to the whole show. Makes a lot of sense.

That podcast could include a summary of what was said in the live comments. It could also have guest hosts. And exclusive content. And it could become an excellent place to get insight about a number of things. And I’d listen to it. Carefully.

Some “pie in the sky” wishes.

  • Full transcripts. Yes, it takes time and effort, but it brings audio to the blogosphere more than anything else could. Different transcribing services are available for podcasts and members of the team could make this more efficient.
  • Categorised feeds. The sadly missed DailySonic podcast had excellent customisation features. If a mainstream radio station could do it, ROS would be a good candidate for categorised feeds.
  • Voting mechanism. Since Slashdot and Digg, voting has probably been the most common form of online participation by people who care about media. Voting on features would make the “pitching” process more than simply finding the right “hook” to make the show relevant. Results are always intriguing in those cases.
  • Community guests. People do want to get involved and the ROS community is fascinating. Bringing some members on the podcast could do a lot to give a voice to actual people. The only attempt I remember on ROS was with a kind of answering machine system. Nothing was played on the show. (What I left was arguably not that fascinating but I was surprised nothing came out of it.)
  • Guest hosts. Not to go too Bakhtin on y’all, but multiple voices in the same discussion make for interesting stories. Being a guest host could also prove how difficult it is to be a host.
  • Field assignments. With a wide community of listeners, it could be interesting to have audio from people in other parts of the world, apart from phone interviews. Even an occasional one-minute segment would go a long way to give people exposure to realities outside the United States.
  • Social bookmarking. Someone recently posted some advice for a book club. With social bookmarking features, book recommendations could be part of a wider scheme.
  • Enhanced audio. While the MP3 version is really necessary, podcasts using enhanced features such as chapters and embedded images can be extremely useful, especially for owners of recent iPods and iPhones.
  • Links. ROS is not the only radio show and links are what makes podcasts alive, especially when one is linked to another. In a way, podcasts become an alternate universe through those links.

Ok, I’m straying too far from my original ideas about ROS. It must mean that I should leave it at that.

I do sincerely hope that ROS will take an interesting turn. I’ll be watching from my blog aggregator and I might join the ROS community again.

In the meantime, I’ll focus on other podcasts.

Googely Voice

Neat new service.

GOOG-411 offers free directory assistance – Lifehacker

Not available in Montreal, but quite useful. Apparently better than Free-411.

The speech recognition and speech synthesis are quite good. In fact, when I was working in speech, such a service was pretty much the main example we used for the need for speech research. With the prominence of cellphones in many different parts of the world, I still think that speech is a field in which technological advancements can have very interesting effects.

Why Podcasting Doesn't Work (Reason #23)

Was listening to the latest episode of Scientific American’s ScienceTalk podcast (for January 3, 2007). As is often the case with some of my favourite podcasts, I wanted to blog about specific issues mentioned in this episode.

Here’s the complete “show notes” for this episode (22:31 running time).

In this episode, journalist Chip Walter, author of Thumbs, Toes and Tears, takes us on a tour of the physical traits that are unique to humans, with special attention to crying, the subject of his article in the current issue of Scientific American MIND. The University of Cambridge’s Gordon Smith discusses the alarming lack of any randomized, controlled trials to determine the efficacy of parachutes. Plus we’ll test your knowledge about some recent science in the news. Websites mentioned on this episode include www.sciammind.com; www.chipwalter.com; www.bmj.com.

AFAICT, there’s a direct link to the relevant MP3 file (which may be downloaded with a default name of “podcast.mp3” through a browser’s “save link as” feature), an embedded media player to listen to the episode, some links to subscribe to the podcast through RSS, My Yahoo, or iTunes, and a menu to browse for episodes by month. Kind of neat.

But what’s wrong with this picture?

In my mind, a few things. And these are pretty common for podcasts.

First, there are no clickable links in the show notes. Sure, anybody can copy/paste the URLs in a browser but there’s something just slightly frustrating about having to do that instead of just clicking on a link directly. In fact, these links are quite generic and would still require that people look for information themselves, instead of pinpointing exactly which scientific articles were featured in the podcast. What’s worse, the Chip Walter article discussed in the podcast isn’t currently found on the main page for the current issue of Scientific American’s Mind. To add insult to injury, the URL for that article is the mnemo-friendly:

http://www.sciammind.com/article.cfm?&articleID=33F8609A-E7F2-99DF-3F12706DF3E30E29

Catchy! 😉

These are common issues with show notes and are easily solved. I should just write SciAm to comment on this. But there are deeper issues.

One reason blogging caught on so well is that it’s very easy to link and quote from one blog to another. In fact, most blogging platforms have bookmarklets and other tools to make it easy to create a blog entry by selecting text and/or images from any web page, clicking on the bookmarklet, adding a few comments, and pressing the “Publish” button. In a matter of seconds, you can have your blog entry ready. If the URL to the original text is static, readers of your blog are able to click on a link accompanying the quote to put it in context. In effect, those blog entries are merely tagging web content. But the implications are deeper. You’re associating something of yourself with that content. You’re applying some basic rules of attribution by providing enough information to identify the source of an idea. You’re making it easy for readers to follow streams of thought. If the original is a trackback-/ping-enabled blog system, you’re telling the original author that you’re referring to her piece. You’re creating new content that can, in itself, serve as the basis for something new. You might even create a pseudo-community of like-minded people. All with a few clicks and types.
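To make the mechanics concrete, here is a minimal sketch of such a bookmarklet, in plain JavaScript. The quote-formatting is factored into a standalone function; the blog-editor URL (example.com/new-post) and its `body` parameter are placeholders of my own invention, since each blogging platform exposes its own endpoint.

```javascript
// Build the blockquote-plus-attribution snippet that a "quote this page"
// bookmarklet would hand off to a blogging platform's new-post editor.
function formatQuote(selection, title, url) {
  return '<blockquote cite="' + url + '">' + selection + '</blockquote>' +
         '<p>From <a href="' + url + '">' + title + '</a></p>';
}

// In a browser, the bookmarklet itself is a one-line javascript: URL that
// grabs the current selection and page metadata, then opens the editor.
// (example.com/new-post stands in for the platform's real endpoint.)
//
// javascript:(function(){
//   var q = formatQuote(String(window.getSelection()),
//                       document.title, location.href);
//   window.open('https://example.com/new-post?body=' + encodeURIComponent(q));
// })();
```

The quoted text keeps a live link back to its source, which is exactly what makes attribution and trackbacks nearly automatic in the blog world.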

Compare with the typical (audio) podcast episode. You listen to it while commuting or while doing some other low-attention activity. You suddenly want to talk about what you heard. Go out and reach someone. You do have a few options. You can go and look at the show notes if they exist and use the same bookmarklet procedure to create a blog entry. Or you can simply tell someone “hey, check out the latest ScienceTalk, from SciAm, it’s got some neat things about common sense and human choking.” If the podcast has a forum, you can go in the forum and post something to listeners of that podcast. If the show notes are in blog form, you may post comments for those who read the show notes. And you could do all sorts of things with the audio recording that you have, including bookmark it (depending on the device you use to listen to audio files). But all of these are quite limited.

You can’t copy/paste an excerpt from the episode. You can’t link to a specific segment of that episode. You can’t realistically expect most of your blog readers to access the whole podcast just to get the original tidbit. Blog readers may not easily process the original information further. In short, podcasts aren’t easily bloggable.

Podcast episodes are often big enough that it’s not convenient to keep them on your computer or media device. Though it is possible to bookmark audio and video files, there’s no standard method to keep and categorize these bookmarks. Many podcasts make it very hard to find a specific episode. Some podcasts in fact make all but the most recent episodes unavailable for download. Few devices make it convenient to just skim over a podcast. Though speed listening seems to be very effective (like speed reading) at making informative content stick in someone’s head, few solutions exist for speed listening to podcasts. A podcast’s RSS entry may contain a useful summary but there’s no way to scale up or down the amount of information we get about different podcast segments the way we can with text-based RSS feeds in, say, Safari 2.0 and up. Audio files can’t easily be indexed, searched, or automatically summarized. Most data mining procedures don’t work with audio files. Few formats allow for direct linking from the audio file to other online content, and those formats that do allow for such linking aren’t ubiquitous. Responding to a podcast with a podcast (or an audio/video comment) is doable but is more time-consuming than written reactions to written content. Editing audio/video content is more involved than, say, proofreading a forum comment before sending it. Relatively few people respond in writing to blogs and forums, and it’s quite likely that the proportion of people who would feel comfortable responding to podcasts with audio/video recordings is much smaller than the proportion of blog/forum commenters.

And, of course, video podcasts (a big trend in podcasting) aren’t better than audio podcasts on any of these fronts.

Speech recognition technology and podcast-transcription services like podzinger may make some of these issues moot but they’re all far from perfect, often quite costly, and are certainly not in widespread use. A few podcasts (well, at least one) with very dedicated listeners have listeners effectively transcribe the complete verbal content of every podcast episode and this content can work as blog-ammo. But chances that such a practice may become common are slim to none.

Altogether, podcasting is more about passive watching/listening than about active engagement in widespread dialogue. Similar to what our good old friend (and compatriot) McLuhan described as “hot,” as opposed to “cool,” media. (I’ve always found the distinction counter-intuitive myself, but it fits, to a degree…)

Having said all of this, I recently embarked on my first real podcasting endeavor: having my lectures distributed in podcast form, within the Moodle course management system. Lecturecasts have been on my mind for a while. So this is an opportunity for me to see, as a limited experiment, whether lecturecasting can appropriately be integrated into my teaching.

As it turns out, I don’t have much to do to make the lecturecasts possible. Concordia University has a service to set it all up for me. They give me a wireless lapel microphone, record its signal, put the MP3 file on one of their servers, and add that file to Moodle as a tagged podcast episode (Moodle handles the RSS and other technical issues). Neat!

Moodle itself makes most of the process quite easy. And because the podcasts are integrated within the broader course management structure, it might be possible to alleviate some of the previously-mentioned issues. In this case, the podcast is a complementary/supplementary component of the complete course. It might help students revise the content, spark discussions, invite reflections about the necessity of note-taking, enable neat montages, etc. Or it might have negative impacts on classroom attendance, send the message that note-taking isn’t important, put too much of the spotlight on my performance (or lack thereof) as a speaker, etc.

Still, I like the fact that I can try this out in the limited context of my own classes.