
Wearable Hub: Getting the Ball Rolling

Statement

After years of hype, wearable devices are happening. What wearable computing lacks is a way to integrate devices into a broader system.

Disclaimer/Disclosure/Warning

  • For the past two months or so, I’ve been taking notes about this “wearable hub” idea (started around the time of CES, as wearable devices like the Pebble and Google Glass were being discussed with more intensity). At this point, I have over 3000 words in notes, which probably means that I’d have enough material for a long essay. This post is just a way to release a few ideas and to “think aloud” about what wearables may mean.
  • Some of these notes have to do with the fact that I started using a few wearable devices to monitor my activities, after a health issue pushed me to start doing some exercise.
  • I’m not a technologist nor do I play one on this blog. I’m primarily an ethnographer, with diverse interests in technology and its implications for human beings. I do research on technological appropriation and some of the courses I teach relate to the social dimensions of technology. Some of the approaches to technology that I discuss in those courses relate to constructionism and Actor-Network Theory.
  • I consider myself a “geek ethnographer” in the sense that I take part in geek culture (and have come out as a geek) but I’m also an outsider to geekdom.
  • Contrary to the likes of McLuhan, Carr, and Morozov, my perspective on technology and society is non-deterministic. The way I use them, “implication” and “affordance” aren’t about causal effects or, even, about direct connections. I’m not saying that society is causing technology to appear nor am I proposing a line from tools to social impacts. Technology and society are in a complex system.
  • Further, my approach isn’t predictive. I’m not saying what will happen based on technological advances nor am I saying what technology will appear. I’m thinking about the meaning of technology in an intersubjective way.
  • My personal attitude on tools and gadgets is rather ambivalent. This becomes clear as I go back and forth between techno-enthusiastic contexts (where I can almost appear like a Luddite) and techno-skeptical contexts (where some might label me as a gadget freak). I integrate a number of tools in my life but I can be quite wary about them.
  • I’m not wedded to the ideas I’m putting forth, here. They’re just broad musings of what might be. More than anything, I hope to generate thoughtful discussion. That’s why I start this post with a broad statement (not my usual style).
  • Of course, I know that other people have had similar ideas and I know that a concept of “wearable hub” already exists. It’s obvious enough that it’s one of these things which can be invented independently.

From Wearables to Hubs

Back in the 1990s, “wearable computing” became something of a futuristic buzzword, often having to do with articles of clothing. There have been many experiments and prototypes converging on an idea that we would, one day, be able to wear something resembling a full computer. Meanwhile, “personal digital assistants” became something of a niche product and embedded systems became an important dimension of car manufacturing.

Fast-forward to 2007, when a significant shift in the use of smartphones occurred. Smartphones existed before that time, but their usages, meanings, and positions in the public discourse changed quite radically around the time of the iPhone’s release. Not that the iPhone itself “caused a smartphone revolution” or that smartphone adoption suddenly reached a “tipping point”. I conceive of this shift as a complex interplay between society and tools. Not only more Kuhn than Popper, but more Latour than Kurzweil.

Smartphones, it may be argued, “happened”.

Without being described as “wearable devices”, smartphones started taking on some of the functions people might have assigned to wearable devices. The move was subtle enough that Limor Fried recently described it as a realization she’s been having. Some tech enthusiasts may be designing location-aware purses and heads-up displays in the form of glasses. But smartphones are already doing a lot of the things wearables were supposed to do. Many people “wear” smartphones at most times during their waking lives and these Internet-connected devices are full of sensors. With the proliferation of cases, one might even perceive some of them as fashion accessories, like watches and sunglasses.

Where smartphones become more interesting, in terms of wearable computing, is as de facto wearable hubs.

My Wearable Devices

Which brings me to mention the four sensors I’ve been using more extensively during the past two months.

Yes, these all have to do with fitness (and there’s quite a bit of overlap between them). And, yes, I started using them a few days after the New Year. But it’s not about holiday gifts or New Year’s resolutions. I’ve had some of these devices for a while and decided to use them after consulting with a physician about hypertension. Not only have they helped me quite a bit in solving some health issues, but these devices got me to think.

(I carry several other things with me at most times. Some of my favourites include Tenqa REMXD Bluetooth headphones and the LiveScribe echo smartpen.)

One aspect is that they’re all about the so-called “quantified self”. As a qualitative researcher, I tend to be skeptical of quants. In this case, though, the stats I’m collecting about myself fit with my qualitative approach. Along with quantitative data from these devices, I’ve started collecting qualitative data about my life. The next step is to integrate all those data points automatically.
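
To make that last step a bit more concrete, here’s a minimal sketch (in Python, with hypothetical file and column names) of what such an integration could look like: merging quantitative sensor exports with timestamped qualitative notes into a single timeline.

    import csv
    from datetime import datetime

    # Hypothetical exports: one CSV of sensor readings, one CSV of fieldnotes.
    # Both are assumed to carry an ISO-8601 "timestamp" column.
    def load_rows(path, kind):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["kind"] = kind
                row["when"] = datetime.fromisoformat(row["timestamp"])
                yield row

    # One timeline, quantitative and qualitative data points side by side.
    timeline = sorted(
        list(load_rows("sensor_readings.csv", "quantitative"))
        + list(load_rows("fieldnotes.csv", "qualitative")),
        key=lambda row: row["when"],
    )

    for row in timeline:
        print(row["when"], row["kind"], row.get("value") or row.get("note"))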

These sensors are also connected to “gamification”, a tendency I find worrisome, as I prefer playfulness. Though game mechanics are applied to the use of these sensors, I choose to rely on my intrinsic motivation, not paying much attention to scores and badges.

But the part which pushed me to start taking the most notes was that all these sensors connect with my iOS (and Android) devices. And this is where the “wearable hub” comes into play. None of these devices is autonomous. They’re all part of my personal “arsenal”, the equipment I have on me on most occasions. Though there are many similarities between them, they still serve different purposes, which are much more limited than those “wearable computers” might have been expected to serve. Without a central device serving as a type of “hub”, these sensors wouldn’t be very useful. This “hub” need not be a smartphone, despite the fact that, by default, smartphones are taken to be the key piece in this kind of setup.

In my personal scenario, I do use a smartphone as a hub. But I also use tablets. And I could easily use an existing device of another type (say, an iPod touch), or even a new type of device meant to serve as a wearable hub. Smartphones’ “hub” affordances aren’t exclusive.

From Digital Hub to Wearable Hub

Most of the devices which would likely serve as hubs for wearable sensors can be described as “Post-PC”. They’re clearly “personal” and they’re arguably “computers”. Yet they’re significantly different from the “Personal Computers” which were so important at the end of the last century (desktop and laptop computers not used as servers, regardless of the OS they run).

Wearability is a key point, here. But it’s not just a matter of weight or form factor. A wearable hub needs to be wireless in at least two important ways: independent of a power source and connected to other devices through radio waves. The fact that they’re worn at all times also implies a certain degree of integration with other things carried throughout the day (wallets, purses, backpacks, pockets…). These devices may also be more “personal” than PCs because they’re more conspicuous and more amenable to customization.

Smartphones fit the bill as wearable hubs. Their form factors and battery life make them wearable enough. Bluetooth (or ANT+, Nike+, etc.) has been used to pair them wirelessly with sensors. Their connectivity to GPS and cellular networking as well as their audio and visual i/o can have interesting uses (mapping a walk, data updates during a commute, voice feedback…). And though they’re far from ubiquitous, smartphones have become quite common in key markets.
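
As an illustration of this kind of pairing, here’s a minimal sketch using bleak, a cross-platform Python library for Bluetooth LE (my own choice of tool, not anything tied to these particular devices), subscribing to a heart-rate sensor the way a hub would. The sensor address is hypothetical and the payload parsing is simplified.

    import asyncio
    from bleak import BleakClient

    # Standard GATT UUID for the Heart Rate Measurement characteristic.
    HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"

    def on_heart_rate(_, data: bytearray):
        # Byte 0 holds flags; when its lowest bit is clear, byte 1 is the
        # rate in bpm (16-bit values and extra fields are ignored here).
        print(f"{data[1]} bpm")

    async def main(address: str):
        async with BleakClient(address) as client:  # connect to the sensor
            await client.start_notify(HR_MEASUREMENT, on_heart_rate)
            await asyncio.sleep(30)  # stream readings for half a minute
            await client.stop_notify(HR_MEASUREMENT)

    asyncio.run(main("00:11:22:33:44:55"))  # hypothetical sensor address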

Part of the reason I keep thinking about “hubs” has to do with comments made in 2001 by then Apple CEO Steve Jobs about the “digital lifestyle” age in “PC evolution” (video of Jobs’s presentation; as an anthropologist, I’ll refrain from commenting on the evolutionary analogies):

We believe the PC, or more… importantly, the Mac can become the “digital hub” of our emerging digital lifestyle, with the ability to add tremendous value to … other digital devices.

… like camcorders, portable media players, cellphones, digital cameras, handheld organizers, etc. (Though they weren’t mentioned, other peripherals like printers and webcams also connect to PCs.)

The PC was thus going to serve as a hub, “not only adding value to these devices but interconnecting them, as well”.

At the time, these were the key PC affordances which distinguished them from those other digital devices:

  • Big screen affording more complex user interfaces
  • Large, inexpensive hard disk storage
  • Burning DVDs and CDs
  • Internet connectivity, especially broadband
  • Running complex applications (including media processing software like the iLife suite)

Though Jobs pinpointed iLife applications as the basis for this “digital hub” vision, it sounds like FireWire was meant to be an even more important part of it. Of course, USB has supplanted FireWire in most use cases. It’s interesting, then, to notice that Apple only recently started shipping Macs with USB 3. Meanwhile, DVD burning is absent from recent Macs. In 2001, the Mac might have been at the forefront of this “digital lifestyle” age. In 2013, the Mac has moved away from its role as “digital hub”.

In the meantime, the iPhone has become one of the best known examples of what I’m calling “wearable hubs”. It has a small screen and small, expensive storage (by today’s standards). It also can’t burn DVDs. But it does have nearly-ubiquitous Internet connectivity and can run fairly complex applications, some of which are adapted from the iLife suite. And though it does have wired connectivity (through Lightning or the “dock connector”), its main hub affordances have to do with Bluetooth.

It’s interesting to note that the same Steve Jobs, who used the “digital hub” concept to explain that the PC wasn’t dead in 2001, is partly responsible for popularizing the concept of “post-PC devices” six years later. One might perceive hypocrisy in this much delayed apparent flip-flop. On the other hand, Steve Jobs’s 2007 comments (video) were somewhat nuanced, as to the role of post-PC devices. What’s more interesting, though, is to think about the implications of the shift between two views of digital devices, regardless of Apple’s position through that shift.

Some post-PC devices (including the iPhone, until quite recently) do require a connection to a PC. In this sense, a smartphone might maintain its position with regards to the PC as digital hub. Yet, some of those devices are used independently of PCs, including by some people who never owned PCs.

Post-Smartphone Hubs

It’s possible to imagine a wearable hub outside of the smartphone (and tablet) paradigm. While smartphones are a convenient way to interconnect wearables, their hub-related affordances still sound limited: they lack large displays and their storage space is quite expensive. Their battery life may also be something to consider in terms of serving as hubs. Their form factors make some sense, when functioning as phones. Yet they have little to do with their use as hubs.

Part of the realization, for me, came from the fact that I’ve been using a tablet as something of an untethered hub. Since I use Bluetooth headphones, I can listen to podcasts and music while my tablet is in my backpack without being entangled in a cable. Sounds trivial but it’s one of these affordances I find quite significant. Delegating music playing functions to my tablet relates in part to battery life and use of storage. The tablet’s display has no importance in this scenario. In fact, given some communication between devices, my smartphone could serve as a display for my tablet. So could a “smartwatch” or “smartglasses”.

The Body Hub

Which led me to think about other devices which would work as wearable hubs. I originally thought about backpackable and pocketable devices.

But a friend had a more striking idea: a sensor mesh undershirt.

Under Armour’s Recharge Energy Suit may be an extreme version of this, one which would fit nicely among things Cathi Bond likes to discuss with Nora Young on The Sniffer. Nora herself has been discussing wearables on her blog as well as on her radio show. Sure, part of this concept is quite futuristic. But a sensor mesh undershirt is a neat idea for several reasons.

  • It’s easy to think of various sensors it may contain.
  • Given its surface area, it could hold enough battery power to supplement other devices.
  • It can be quite comfortable in cold weather and might even help diffuse heat in warmer climates.
  • Though wearable, it need not be visible.
  • Thieves would probably have a hard time stealing it.
  • Vibration and haptic feedback on the body can open interesting possibilities.

Not that it’s the perfect digital hub, and I’m sure there are multiple objections to a connected undershirt (including issues with radio signals). But I find the idea rather fun to think about, partly because it’s so far away from the use of phones, glasses, and watches as smart devices.

Another thing I find neat, and it may partly be a coincidence, is the very notion of a “mesh”.

The Wearable Mesh

Mesh networking is a neat concept, one which generates more hype than practical uses. As an alternative to WiFi access points and cellular connectivity, it’s unclear whether it will ever “take the world by storm”. But as a way to connect personal devices, it might have some potential. After all, as Bernard Benhamou recently pointed out on France Culture’s Place de la toile, the Internet of Things may not require always-on full-bandwidth connectivity. Typically, wearable sensors use fairly little bandwidth or only use it for limited amounts of time. A wearable mesh could connect wearable devices to one another while also exchanging data through the Internet itself.
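
For a toy picture of the concept, here’s a sketch of the flooding behaviour at the heart of many mesh protocols: each device relays what it hears to its neighbours, so a reading can reach a gateway without any access point. The node names and topology are made up.

    # Toy flood-based mesh: each node relays a message to its neighbours once.
    mesh = {
        "heart_strap": ["undershirt"],
        "undershirt": ["heart_strap", "watch", "phone"],
        "watch": ["undershirt", "phone"],
        "phone": ["undershirt", "watch"],  # the phone doubles as Internet gateway
    }

    def flood(origin: str, message: str):
        seen = set()
        queue = [origin]
        while queue:
            node = queue.pop(0)
            if node in seen:
                continue
            seen.add(node)
            print(f"{node} relays: {message}")
            queue.extend(mesh[node])

    flood("heart_strap", "bpm=72")  # the reading reaches every device, gateway included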

Such a mesh could also exchange data with local devices. Smart cities, near field communication, and digital appliances occupy interesting positions among widely-discussed tendencies in the tech world. They may all have something to do with wearable devices. For instance, data exchanged between transit systems and their users could go through wearable devices. And while mobile payment systems can work through smartphones and other cellphones, wallet functions can also be fulfilled by other wearable devices.

Alternative Futures

Which might provide an appropriate segue into the ambivalence I feel toward the “wearable hub” concept I’m describing. Though I propose these ideas as if I were enthusiastic about them, they all give me pause. As a big fan of critical thinking, I like to think about “what might be” to generate questions and discussions exposing a diversity of viewpoints about the future.

Mass media discussions about these issues tend to focus on such things as privacy, availability, norms, and usefulness. Google Glass has generated quite a bit of buzz about all four. Other wearables may mainly raise issues for one or two of these broad dimensions. But the broad domain of wearable computing raises a lot more issues.

Technology enthusiasts enjoy discussing issues through the dualism between dystopia and utopia. An obvious issue with this dualism is that humans disagree about the two categories. Simply put, one person’s dystopia can be another person’s utopia, not to mention the nuanced views of people who see complex relationships between values and social change.

In such a context, a sociologist’s reflex may be to ask about the implications of these diverse values and opinions. For instance:

  • How do people construct these values?
  • Who decides which values are more important?
  • How might social groups cope with changes in values?

Discussing these issues and more, in a broad frame, might be quite useful. Some of the trickiest issues are raised after some changes in technology have already happened. From writing to cars, any technological context has unexpected implications. An ecological view of these implications could broaden the discussion.

I tend to like the concept of the “drift-off moment”, during which listeners (or readers) start thinking about the possibilities afforded by a new tool (or concept). In the context of a sales pitch, the idea is that these possibilities are positive: a potential buyer is thinking about the ways she might use a newfangled device. But I also like the deeper process of thinking about all sorts of implications, regardless of their value.

So…

What might be the implications of a wearable hub?

Future of Learning Content

If indeed Apple plans to announce not just more affordable textbook options for students, but also more interactive, immersive ebook experiences…

Forecasting next week’s Apple education event (Dan Moren and Lex Friedman for Macworld)

I’m still in catchup mode (was sick during the break), but it’s hard to let this pass. It’s exactly the kind of thing I like to blog about: wishful thinking and speculation about education. Sometimes, my crazy predictions are fairly accurate. But my pleasure at blogging these things has little to do with the predictions game. I’m no prospectivist. I just like to build wishlists.

In this case, I’ll try to make it short. But I’m having drift-off moments just thinking about the possibilities. I do have a lot to say about this but we’ll see how things go.

Overall, I agree with the three main predictions in that Macworld piece: Apple might come out with eBook creation tools, office software, and desktop reading solutions. I’m interested in all of these and have been thinking about the implications.

That Macworld piece, like most media coverage of textbooks these days, talks about the weight of physical textbooks as a major issue. It’s a common refrain, and large bookbags/backpacks have come to symbolize a key problem with “education”. Moren and Friedman finish up with a zinger about lecturing. Also a common complaint. In fact, I’ve been on the record (for a while) about issues with lecturing. Which is where I think more reflection might help.

For one thing, alternative models to lecturing can imply more than a quip about the entertainment value of teaching. Inside the teaching world, there’s a lot of talk about the notion that teaching is a lot more than providing access to content. There’s a huge difference between reading a book and taking a class. But it sounds like this message isn’t heard and that there’s a lot of misunderstanding about the role of teaching.

It’s quite likely that Apple’s announcement may make things worse.

I don’t like textbooks but I do use them. I’m not the only teacher who dislikes textbooks while still using them. But I feel the need to justify myself. In fact, I’ve been on the record about this. So, in that context, I think improvements in textbooks may distract us from a bigger issue and even lead us in the wrong direction. By focusing even more on content creation, we’re commodifying education. What’s more, we’re subsuming education to a publishing model. We all know how that’s going. What’s tragic, IMHO, is that textbook publishers themselves are going in the direction of magazines! If, ten years from now, people want to know where we went wrong with textbook publishing, it’ll probably be a good idea for them to trace back from now. In theory, magazine-style textbooks may make a lot of sense to those who perceive learning to be indissociable from content consumption. I personally consider these magazine-style textbooks to be the most egregious of aberrations because, in practice, learning is radically different from content consumption.

So… If, on Thursday, Apple ends up announcing deals with textbook publishers to make it easier for them to, say, create and distribute free ad-supported magazine-style textbooks, I’ll be going through a large range of very negative emotions. Coming out of it, I might perceive a silver lining in the fact that these things can fairly easily be subverted. I like this kind of technological subversion and it makes me quite enthusiastic.

In fact, I’ve had this thought about iAd Producer (Apple’s tool for creating mobile ads). Never tried it but, when I heard about it, it sounded like something which could make it easy to produce interactive content outside of mobile advertising. I don’t think the tool itself is restricted to Apple’s iAd, but I could see how the company might use the same underlying technology to create some content-creation tool.

“But,” you say, “you just said that you think learning isn’t about content.” Quite so. I’m not saying that I think these tools should be the future of learning. But creating interactive content can be part of something wider, which does relate to learning.

The point isn’t that I don’t like content. The point is that I don’t think content should be the exclusive focus of learning. To me, allowing textbook publishers to push more magazine-style content more easily is going in the wrong direction. Allowing diverse people (including learners and teachers) to easily create interactive content might in fact be a step in the right direction. It’s nothing new, but it’s an interesting path.

In fact, despite my dislike of a content emphasis in learning, I’m quite interested in “learning objects”. I even did a presentation about them during the Spirit of Inquiry conference at Concordia, a few years ago (PDF).

A neat (but Flash-based) example of a learning object was introduced to me during that same conference: Mouse Party. The production value is quite high, the learning value seems relatively high, and it’s easily accessible.

But it’s based on Flash.

Which leads me to another part of the issue: formats.

I personally try to avoid Flash as much as possible. While a large number of people have done amazing things with Flash, it’s my sincere (and humble) opinion that Flash’s time has come and gone. I do agree with Steve Jobs on this. Not out of fanboism (I’m no Apple fanboi), not because I have something against Adobe (I don’t), not because I have a vested interest in an alternative technology. I just think that mobile Flash isn’t going anywhere. Even on the desktop, I think Flash-free is the way to go. I never installed Flash on my desktop computer since I bought it in July. I do run Chrome for the occasional Flash-only video. But Flash isn’t the only video format out there and I almost never come across interesting content which actually relies on something exclusive to Flash. Flash-based standalone apps (like Rdio and Machinarium) are a different issue, as Flash was more of a development platform for them and they’re available as Flash-free apps on Apple’s own iOS.

I wouldn’t be surprised if Apple’s announcements had something to do with a platform for interactive content as an alternative to Adobe Flash. In fact, I’d be quite enthusiastic about that. Especially given Apple’s mobile emphasis. We might be getting further in “mobile computing for the rest of us”.

Part of this may be related to HTML5. I was quite enthusiastic when Tumult released its “Hype” HTML5-creation tool. I only used it to create an HTML5 version of my playfulness talk. But I enjoyed it and can see a lot of potential.

Especially in view of interactive content. It’s an old concept and there are many tools out there to create interactive content (from Apple’s own QuickTime to Microsoft PowerPoint). But the shift to interactive content has been slower than many people (including educational technologists) would have predicted. In other words, there’s still a lot to be done with interactive content. Especially if you think about multitouch-based mobile devices.

Which eventually brings me back to learning and teaching.

I don’t “teach naked”: I do use slides in class. In fact, my slides are mostly bullet points, something presentation specialists like to deride. Thing is, though, my slides aren’t really meant for presentation and, while they sure are “content”, I don’t really use them as such. Basically, I use them as a combination of cue cards, whiteboard, and coursenotes. Though I may sound defensive about this, I’m quite comfortable with my use of slides in the classroom.

Yet, I’ve been looking intently for other solutions.

For instance, I used to create outlines in OmniOutliner that I would then send to LaTeX to produce both slides and printable outlines (as PDFs). I’ve thought about using S5, but it doesn’t really fit in my workflow. So I end up creating Keynote files on my Mac, uploading them (as PowerPoint) before class, and running them in the classroom on my iPad. Not ideal, but rather convenient.

(Interestingly enough, the main thing I need to do today is create PowerPoint slides as ancillary material for a textbook.)

In all of these cases, the result isn’t really interactive. Sure, I could add buttons and interactive content to the slides. But the basic model is linear, not interactive. The reason I don’t feel bad about it is that my teaching is very interactive (the largest proportion of classtime is devoted to open discussions, even with 100-plus students). But I still wish I could have something more appropriate.

I have used other tools, especially whiteboarding and mindmapping ones. Basically, I elicit topics and themes from students and we discuss them in a semi-structured way. But flow remains an issue, both in terms of workflow and in terms of conversation flow.

So if Apple were to come up with tools making it easy to create interactive content, I might integrate them in my classroom work. A “killer feature”, here, would be the ability to record interactions during class and then upload them as an interactive podcast (à la ProfCast).

Of course, content-creation tools might make a lot of sense outside the classroom. Not only could they help distribute the results of classroom interactions but they could help in creating learning material to be used ahead of class. These could include the aforementioned learning objects (like Mouse Party) as well as interactive quizzes (like Hot Potatoes) and even interactive textbooks (like Moglue) and educational apps (plenty of these in the App Store).

Which brings me back to textbooks, the alleged focus of this education event.

One of my main issues with textbooks, including online ones, is usability. I read pretty much everything online, including all the material for my courses (on my iPad) but I find CourseSmart and its ilk to be almost completely unusable. These online textbooks are, in my experience, much worse than scanned and OCRed versions of the same texts (in part because they don’t allow for offline access but also because they make navigation much more difficult than in GoodReader).

What I envision is an improvement over PDFs.

Part of the issue has to do with PDF itself. Despite all its benefits, Adobe’s “Portable Document Format” is the relic of a bygone era. Sure, it’s ubiquitous and can preserve formatting. It’s also easy to integrate in diverse tools. In fact, if I understand things correctly, PDF replaced Display PostScript as the basis for Quartz 2D, a core part of Mac OS X’s graphics rendering. But it doesn’t mean that it can’t be supplemented by something else.

Part of the improvement has to do with flexibility. Because of its emphasis on preserving print layouts, PDF tends to enforce print-based ideas. This is where EPUB is at a significant advantage. In a way, EPUB textbooks might be the first step away from the printed model.

From what I can gather, EPUB files are a bit like Web archives. Unlike PDFs, they can be reformatted at will, just like webpages can. In fact, iBooks and other EPUB readers (including Adobe’s, IIRC) allow for on-the-fly reformatting, which puts the reader in control of a much greater part of the reading experience. This is exactly the kind of thing publishers fail to grasp: readers, consumers, and users want more control over the experience. EPUB textbooks would thus be easier to read than PDFs.
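
To see just how close an EPUB is to a Web archive, here’s a bare-bones sketch that packs a reflowable XHTML chapter into an EPUB-style zip and lists what’s inside. It’s deliberately simplified: a fully spec-compliant EPUB also needs a complete OPF manifest, among other things.

    import zipfile

    # A bare-bones EPUB-like package. File paths follow the EPUB layout.
    with zipfile.ZipFile("demo.epub", "w") as z:
        z.writestr("mimetype", "application/epub+zip")
        z.writestr(
            "META-INF/container.xml",
            '<?xml version="1.0"?>'
            '<container version="1.0" '
            'xmlns="urn:oasis:names:tc:opendocument:xmlns:container">'
            '<rootfiles><rootfile full-path="OEBPS/content.opf" '
            'media-type="application/oebps-package+xml"/></rootfiles>'
            '</container>',
        )
        z.writestr(
            "OEBPS/chapter1.xhtml",
            "<html><body><h1>Chapter 1</h1><p>Reflowable text.</p></body></html>",
        )

    # Under the hood, it's zipped XHTML, much like a Web archive.
    with zipfile.ZipFile("demo.epub") as z:
        print(z.namelist())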

EPUB is the basis for Apple’s iBooks and iBookstore, and people seem to be assuming that Thursday’s announcement will be about iBooks. Makes sense and it’d be nice to see an improvement over iBooks. For one thing, it could support EPUB 3. There are conversion tools but, AFAICT, iBooks is stuck with EPUB 2.0. One advantage of EPUB 3 is that it allows for scripts and interactivity. Which could make things quite interesting.

Interactive formats abound. In fact, PDFs can include some interactivity. But, as mentioned earlier, there’s a lot of room for improvement in interactive content. In part, creation tools could be “democratized”.

Which gets me thinking about recent discussions over the fate of HyperCard. While I understand John Gruber’s longstanding position, I find room for HyperCard-like tools. Like some others, I even had some hopes for ATX-based TileStack (an attempt to bring HyperCard stacks back to life, online). And I could see some HyperCard thinking in an alternative to both Flash and PDF.

“Huh?”, you ask?

Well, yes. It may sound strange but there’s something about HyperCard which could make sense in the longer term. Especially if we get away from the print model behind PDFs and the interaction model behind Flash. And learning objects might be the ideal context for this.

Part of this is about hyperlinking. It’s no secret that HyperCard was among HTML’s precursors. As the part of HTML which we just take for granted, hyperlinking is among the most undervalued features of online content. Sure, we understand the value of sharing links on social networking systems. And there’s a lot to be said about bookmarking. In fact, I’ve been thinking about social bookmarking and I have a wishlist about sharing tools, somewhere. But I’m thinking about something much more basic: hyperlinking is one of the major differences between online and offline writing.

Think about the differences between, say, a Wikibook and a printed textbook. My guess is that most people would focus on the writing style, tone, copy-editing, breadth, reviewing process, etc. All of these are relevant. In fact, my sociology classes came up with variations on these as disadvantages of the Wikibook over printed textbooks. Prior to classroom discussion about these differences, however, I mentioned several advantages of the Wikibook:

  • Cover bases
  • Straightforward
  • Open Access
  • Editable
  • Linked

(Strangely enough, embedded content from iWork.com isn’t available and I can’t log into my iWork.com account. Maybe it has to do with Thursday’s announcement?)

That list of advantages is one I’ve been using since I started to use this Wikibook… except for the last one. And this last one hit me, recently, as being more important than the others.

So, in class, I talked about the value of links and it’s been on my mind quite a bit. Especially in view of textbooks. And critical thinking.

See, academic (and semi-academic) writing is based on references, citations, quotes. English-speaking academics are likely to be the people in the world of publishing who cite the most profusely. It’s not rare for a single paragraph of academic writing in English to contain ten citations or more, often strung in parentheses (Smith 1999, 2005a, 2005b; Smith and Wesson 1943, 2010). And I’m not talking about Proust-style paragraphs either. I’m convinced that, with some quick searches, I could come up with a paragraph of academic writing which has less “narrative content” than citations.

Textbooks aren’t the most egregious example of what I’d consider over-citing. But they do rely on citations quite a bit. As I work more specifically on textbook content, I notice even more clearly the importance of citations. In fact, in my head, I started distinguishing some patterns in textbook content. For instance, there are sections which mostly contain direct explanations of key concepts, while other sections focus on personal anecdotes from the authors or extended quotes from the two sides of a debate. But among the most obvious sections are summaries of key texts.

For instance (hypothetical example):

As Nora Smith explained in her 1968 study Coming Up with Something to Say, the concept of interpretation has a basis in cognition.

Smith (1968: 23) argued that Peirce’s interpretant had nothing to do with theatre.

These citations are less conspicuous than they’d be in peer-reviewed journals. But they’re a central part of textbook writing. One of their functions should be to allow readers (undergraduate students, mostly) to learn more about a topic. So, when a student wants to know more about Nora Smith’s reading of Peirce, she “just” has to locate Smith’s book, go to the right page, scan the text for the name “Peirce”, and read the relevant paragraph. Nothing to it.

Compare this to, say, a blogpost. I only cite one text, here. But it’s linked instead of being merely cited. So readers can quickly know more about the context for what I’m discussing before going to the library.

Better yet, this other blogpost of mine is typical of what I’ve been calling a linkfest, a post containing a large number of links. Had I put citations instead of links, the “narrative” content of this post would be much less than the citations. Basically, the content was a list of contextualized links. Much textbook content is just like that.

In my experience, online textbooks are citation-heavy and derive almost no benefit from linking. Oh, sure, some publisher may replace citations with links. But the result would still not be the same as writing meant for online reading, because ex post facto link additions are quite different from link-enhanced writing. I’m not talking about technological determinism, here. I’m talking about appropriate tool use. Online texts can be quite different from printed ones and writing for an online context could benefit greatly from this difference.
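
To make the contrast concrete, here’s a sketch of what ex post facto linking amounts to: a mechanical pass that wraps parenthetical citations in links (the pattern and URL scheme are hypothetical). Useful, maybe, but not at all the same thing as writing with links in mind.

    import re

    text = ("Smith (1968: 23) argued that Peirce's interpretant "
            "had nothing to do with theatre.")

    # Hypothetical: "Author (Year[: page])" becomes a link to some catalogue.
    def linkify(match: re.Match) -> str:
        author, year = match.group(1), match.group(2)
        return f'<a href="https://example.org/refs/{author}-{year}">{match.group(0)}</a>'

    linked = re.sub(r"([A-Z][a-z]+) \((\d{4})(?::\s*\d+)?\)", linkify, text)
    print(linked)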

In other words, I care less about what tools publishers are likely to use to create online textbooks than about a shift in the practice of online textbooks.

So, if Apple comes out with content-creation tools on Thursday (which sounds likely), here are some of my wishes:

  • Use of open standards like HTML5 and EPUB (possibly a combination of the two).
  • Completely cross-platform (should go without saying, but Apple’s track record isn’t that great, here).
  • Open Access.
  • Link library.
  • Voice support.
  • Mobile creation tools as powerful as desktop ones (more like GarageBand than like iWork).
  • HyperCard-style emphasis on hyperlinked structures (à la “mini-site” instead of web archives).
  • Focus on rich interaction (possibly based on the SproutCore web framework).
  • Replacement for iWeb (which is being killed along with MobileMe).
  • Ease creation of lecturecasts.
  • Deep integration with iTunes U.
  • Combination of document (à la Pages or Word), presentation (à la Keynote or PowerPoint), and standalone apps (à la The Elements or even Myst).
  • Full support for course management systems.
  • Integration of textbook material and ancillary material (including study guides, instructor manuals, testbanks, presentation files, interactive quizzes, glossaries, lesson plans, coursenotes, etc.).
  • Outlining support (more like OmniOutliner or even like OneNote than like Keynote or Pages).
  • Mindmapping support (unlikely, but would be cool).
  • Whiteboard support (both in-class and online).
  • Collaboration features (à la Adobe Connect).
  • Support for iCloud (almost a given, but it opens up interesting possibilities).
  • iWork integration (sounds likely, but still in my wishlist).
  • Embeddable content (à la iWork.com).
  • Stability, ease of use, and low-cost (i.e., not Adobe Flash or Acrobat).
  • Better support than Apple currently provides for podcast production and publishing.
  • More publisher support than for iBooks.
  • Geared toward normal users, including learners and educators.

The last three are probably where the problem lies. It’s likely that Apple has courted textbook publishers and may have convinced them that they should up their game with online textbooks. It’s clear to me that publishers risk falling into oblivion if they don’t wake up to the potential of learning content. But I sure hope the announcement goes beyond an agreement with publishers.

Rumour has it that part of the announcement might have to do with bypassing state certification processes, in the US. That would be a big headline-grabber because the issue of state certification is something of a wedge issue. Could be interesting, especially if it means free textbooks (though I sure hope they won’t be ad-supported). But that’s much less interesting than what could be done with learning content.

“User-generated content” may be one of the core improvements in recent computing history, much of which is relevant for teaching. As fellow anthro Mike Wesch has said:

We’ll need to rethink a few things…

And Wesch sure has been thinking about learning.

Problem is, publishers and “user-generated content” don’t go well together. I’m guessing that it’s part of the reason for Apple’s insufficient support for “user-generated content”. For better or worse, Apple primarily perceives its users as consumers. In some cases, Apple sides with consumers to make publishers change their tune. In other cases, it seems to be conspiring with publishers against consumers. But in most cases, Apple fails to see its core users as content producers. In the “collective mind of Apple”, the “quality content” that people should care about is produced by professionals. What normal users do isn’t really “content”. iTunes U isn’t an exception: those of us who give lectures aren’t Apple’s core users (even though the education market as a whole has traditionally been an important part of Apple’s business). The fact that Apple courts us underlines the notion that we, teachers and publishers (i.e. non-students), are the ones creating the content. In other words, Apple supports the old model of publishing along with the old model of education. Of course, they’re far from alone in this obsolete mindframe. But they happen to have several of the tools which could be useful in rethinking education.

Thursday’s event is likely to focus on textbooks. But much more is needed to shift the balance between publishers and learners. Including a major evolution in podcasting.

Podcasting is especially relevant, here. I’ve often thought about what Apple could do to enhance podcasting for learning. Way beyond iTunes U. Into something much more interactive. And I don’t just mean “interactive content” which can be manipulated seamlessly using multitouch gestures. I’m thinking about the back-and-forth of learning and teaching, the conversational model of interactivity which clearly distinguishes courses from mere content.

Why I Need an iPad

I’m one of those who feel the iPad is the right tool for the job.

This is mostly meant as a reply to this blogthread. But it’s also more generally about my personal reaction to Apple’s iPad announcement.

Some background.

I’m an ethnographer and a teacher. I read a fair deal, write a lot of notes, and work in a variety of contexts. These days, I tend to spend a good amount of time in cafés and other public places where I like to work without being too isolated. I also commute using public transit, listen to lots of podcasts, and create my own. I’m also very aural.

I’ve used a number of PDAs, over the years, from a Newton MessagePad 130 (1997) to a variety of PalmOS devices (until 2008). In fact, some people readily associated me with PDA use.

As soon as I learnt about the iPod touch, I needed one. As soon as I heard about the SafariPad, I wanted one. I’ve been an intense ‘touch user since the iPhone OS 2.0 release and I’m a happy camper.

(A major reason I never bought an iPhone, apart from price, is that it requires a contract.)

In my experience, the ‘touch is the most appropriate device for all sorts of activities which are either part of another activity (reading during a commute) or are simply too short in duration to constitute an actual “computer session.” You don’t “sit down to work at your ‘touch” the way you might sit in front of a laptop or desktop screen. This works great for “looking up stuff” or “checking email.” It also makes a lot of sense during commutes in crowded buses or metros.

In those cases, the iPod touch is almost ideal. Ubiquitous access to Internet would be nice, but that’s not a deal-breaker. Alternative text-input methods would help in some cases, but I do end up being about as fast on my ‘touch as I was with Graffiti on PalmOS.

For other tasks, I have a Mac mini. Sure, it’s limited. But it does the job. In fact, I have no intention of switching to another desktop and I even have an eMachines box collecting dust (it’s too noisy to make a good server).

What I miss, though, is a laptop. I used an iBook G3 for several years and loved it. For a little while afterwards, I was able to share a MacBook with somebody else and it was a wonderful experience. I even got to play with the OLPC XO for a few weeks. That one was not so pleasant an experience but it did give me a taste for netbooks. And it made me think about other types of iPhone-like devices. Especially in educational contexts. (As I mentioned, I’m a teacher.)

I’ve been laptop-less for a while, now. And though my ‘touch replaces it in many contexts, there are still times when I’d really need a laptop. And these have to do with what I might call “mobile sessions.”

For instance: liveblogging a conference or meeting. I’ve used my ‘touch for this very purpose on a good number of occasions. But it gets rather uncomfortable, after a while, and it’s not very fast. A laptop is better for this, with a keyboard and a larger form factor. But the iPad will be even better because of lower risks of RSI. A related example: just imagine TweetDeck on iPad.

Possibly my favourite example of a context in which the iPad will be ideal: presentations. Even before learning about the prospect of getting iWork on a tablet, presentations were a context in which I really missed a laptop.

Sure, in most cases, these days, there’s a computer (usually a desktop running XP) hooked to a projector. You just need to download your presentation file from Slideshare, show it from Prezi, or transfer it through USB. No biggie.

But it’s not the extra steps which change everything. It’s the uncertainty. Even if it’s often unfounded, I usually get worried that something might just not work along the way. The slides might not show up the same way you created them because something is missing on that computer, or that computer is simply using a different version of the presentation software. In fact, that software is typically Microsoft PowerPoint which, while convenient, fits much less well in my workflow than does Apple Keynote.

The other big thing about presentations is the “presenter mode,” allowing you to get more content than (or different content from) what the audience sees. In most contexts where I’ve used someone else’s computer to do a presentation, the projector was mirroring the computer’s screen, not using it as a different space. PowerPoint has this convenient “presenter view” but very rarely did I see it as an available option on “the computer in the room.” I wish I could use my ‘touch to drive presentations, which I could do if I installed software on that “computer in the room.” But it’s not something that is likely to happen, in most cases.

A MacBook solves all of these problems, and it’s an obvious use for laptops. But how, then, is the iPad better? Basically, because of the interface. Switching slides on a laptop isn’t hard, but it’s more awkward than we realize. Even before watching the demo of Keynote on the iPad, I could simply imagine the actual pleasure of flipping through slides using a touch interface. The fit is “natural.”

I sincerely think that Keynote on the iPad will change a number of things, for me. Including the way I teach.

Then, there’s reading.

Now, I’m not one of those people who just can’t read on a computer screen. In fact, I even grade assignments directly from the screen. But I must admit that online reading hasn’t been ideal, for me. I’ve read full books as PDF files or dedicated formats on PalmOS, but it wasn’t so much fun, in terms of the reading process. And I’ve used my ‘touch to read things through Stanza or ReadItLater. But it doesn’t work so well for longer reading sessions. Even in terms of holding the ‘touch, it’s not so obvious. And, what’s funny, even a laptop isn’t that ideal, for me, as a reading device. In a sense, this is when the keyboard “gets in the way.”

Sure, I could get a Kindle. I’m not a big fan of dedicated devices and, at least on paper, I find the Kindle a bit limited for my needs. Especially in terms of sources. I’d like to be able to use documents in a variety of formats and put them in a reading list, for extended reading sessions. No, not “curled up in bed.” But maybe lying down in a sofa without external lighting. Given my experience with the ‘touch, the iPad is very likely the ideal device for this.

Then, there’s the overall “multi-touch device” thing. People have already been quite creative with the small touchscreen on iPhones and ‘touches; I can just imagine what may be done with a larger screen. Lots has been said about differences in “screen real estate” between laptop or desktop screens. We all know it can make a big difference in terms of what you can display at the same time. In some cases, two screens aren’t even a luxury, for instance when you code and display a page at the same time (LaTeX, CSS…). Certainly, the same qualitative difference applies to multitouch devices. Probably even more so, since the display is also used for input. What Han found missing in the iPhone’s multitouch was the ability to use both hands. With the iPad, Han’s vision is finding its space.

Oh, sure, the iPad is very restricted. For instance, it’s easy to imagine how much more useful it’d be if it did support multitasking with third-party apps. And a front-facing camera is something I was expecting in the first iPhone. It would just make so much sense that a friend seems very disappointed by this lack of videoconferencing potential. But we’re probably talking about predetermined expectations, here. We’re comparing the iPad with something we had in mind.

Then, there’s the issue of the competition. Tablets have been released and some multitouch tablets have recently been announced. What makes the iPad better than these? Well, we could all get into the same OS wars as have been happening with laptops and desktops. In my case, the investment in applications, files, and expertise that I have made in a Mac ecosystem rendered my XP years relatively uncomfortable and made me appreciate returning to the Mac. My iPod touch fits right in that context. Oh, sure, I could use it with a Windows machine, which is in fact what I did for the first several months. But the relationship between the iPhone OS and Mac OS X is such that using devices in those two systems is much more efficient, in terms of my own workflow, than what I could get while using XP and the iPhone OS. There are some technical dimensions to this, such as the integration between iCal and the iPhone OS Calendar, or even the filesystem. But I’m actually thinking more about the cognitive dimensions of recognizing some of the same interface elements. “Look and feel” isn’t just about shiny and “purty.” It’s about interactions between a human brain, a complex sensorimotor apparatus, and a machine. Things go more quickly when you don’t have to think too much about where some tools are, as you’re working.

So my reasons for wanting an iPad aren’t about being dazzled by a revolutionary device. They are about the right tool for the job.

Personal Devices

Still thinking about touch devices, such as the iPod touch and the rumoured “Apple Tablet.”

Thinking out loud. Rambling even more crazily than usual.

Something important about those devices is the need for a real “Personal Digital Assistant.” I put PDAs as a keyword for my previous post because I do use the iPod touch like I was using my PalmOS and even NewtonOS devices. But there’s more to it than that, especially if you think about cloud computing and speech technologies.

I mentioned speech recognition in that previous post. SR tends to be a pipedream of the computing world. Despite all the hopes put into realtime dictation, it still hasn’t taken off in a big way. One reason might be that it’s still somewhat cumbersome to use, in current incarnations. Another reason is that it’s relatively expensive as a standalone product which requires some getting used to. But I get the impression that another set of reasons has to do with the fact that it’s a better fit for a personal device. Partly because it needs to be trained. But also because voice itself is a personal thing.

Cloud computing also takes on a new meaning with a truly personal device. It’s no surprise that there are so many offerings with some sort of cloud computing feature in the App Store. Not only do Apple’s touch devices have limited file storage space, but the notion of accessing your files in the cloud goes well with a personal device.

So, what’s the optimal personal device? I’d say that Apple’s touch devices are getting close to it but that there’s room for improvement.

Some perspective…

Originally, the PC was supposed to be a “personal” computer. But the distinction was mostly with mainframes. PCs may be owned by a given person, but they’re not so tied to that person, especially given the fact that they’re often used in a single context (office or home, say). A given desktop PC can be important in someone’s life, but it’s not always present like a personal device should be. What’s funny is that “personal computers” became somewhat more “personal” with the ‘Net and networking in general. Each computer had a name, etc. But those machines remained somewhat impersonal. In many cases, even when there are multiple profiles on the same machine, it’s not so safe to assume who the current user of the machine is at any given point.

On paper, the laptop could have been that “personal device” I’m thinking about. People may share a desktop computer but they usually don’t share their laptop, unless it’s mostly used like a desktop computer. The laptop being relatively easy to carry, it’s common for people to bring one back and forth between different sites: work, home, café, school… Sounds tautological, as this is what laptops are supposed to be. But the point I’m thinking about is that these are still distinct sites where some sort of desk or table is usually available. People may use laptops on their actual laps, but the form factor is still closer to a portable desktop computer than to the kind of personal device I have in mind.

Then, we can go all the way to “wearable computing.” There’s been some hype about wearable computers but it has yet to really be part of our daily lives. Partly for technical reasons but partly because it may not really be what people need.

The original PDAs (especially those on NewtonOS and PalmOS) were getting closer to what people might need, as personal devices. The term “personal digital assistant” seemed to encapsulate what was needed. But, for several reasons, PDAs have had a hard time. Maybe there wasn’t a killer app for PDAs, outside of “vertical markets.” Maybe the stylus was the problem. Maybe the screen size and bulk of the device weren’t getting to the exact points where people needed them. I was still using a PalmOS device in mid-2008 and it felt like I was among the last PDA users.

One point was that PDAs had been replaced by “smartphones.” After a certain point, most devices running PalmOS were actually phones. RIM’s Blackberry succeeded in a certain niche (let’s use the vague term “professionals”) and is even beginning to expand out of it. And devices using other OSes have had their importance. It may not have been the revolution some readers of Pen Computing might have expected, but the smartphone has been a more successful “personal device” than the original PDAs.

It’s easy to broaden our focus from smartphones and think about cellphones in general. If the 3.3B figure can be trusted, cellphones may already be outnumbering desktop and laptop computers by 3:1. And cellphones really are personal. You bring them everywhere; you don’t need any kind of surface to use them; phone communication actually does seem to be a killer app, even after all this time; there are cellphones in just about any price range; cellphone carriers outside of Canada and the US are offering plans which are relatively reasonable; despite some variation, cellphones are rather similar from one manufacturer to the next… In short, cellphones already were personal devices, even before the smartphone category really emerged.

What did smartphones add? Basically, a few PDA/PIM features and some form of Internet access or, at least, some form of email. “Whoa! Impressive!”

Actually, some PIM features were already available on most cellphones and Internet access from a smartphone is in continuity with SMS and data on regular cellphones.

What did Apple’s touch devices add which was so compelling? Maybe not so much, apart from the multitouch interface, a few games, and integration with desktop/laptop computers. Even then, most of these changes were an evolution over the basic smartphone concept. Still, it seems to have worked as a way to open up personal devices to some new dimensions. People now use the iPhone (or some other multitouch smartphone which came out after the iPhone) as a single device to do all sorts of things. Around the world, multitouch smartphones are still much further from being ubiquitous than cellphones in general. But we could say that these devices have brought the personal device idea to a new phase. At least, one can say that they’re much more exciting than the other personal computing devices.

But what’s next for personal devices?

Any set of buzzphrases. Cloud computing, speech recognition, social media…

These things can all come together, now. The “cloud” is mostly ready and personal devices make cloud computing more interesting because they’re “always-on,” are almost-wearable, have batteries lasting just about long enough, already serve to keep some important personal data, and are usually single-user.

Speech recognition could go well with those voice-enabled personal devices. For one thing, they already have sound input. And, by this time, people are used to seeing others “talk to themselves” as cellphones are so common. Plus, voice recognition is already understood as a kind of security feature. And, despite their popularity, these devices could use a further killer app, especially in terms of text entry and processing. Some of these devices already have voice control and it’s not so much of a stretch to imagine them having what’s needed for continuous speech recognition.

In terms of getting things onto the device, I’m also thinking about such editing features as a universal rich-text editor (à la TinyMCE), predictive text, macros, better access to calendar/contact data, ubiquitous Web history, multiple pasteboards, data detectors, Automator-like processing, etc. All sorts of things which should come from OS-level features.

“Social media” may seem like too broad a category. In many ways, those devices already take part in social networking, user-generated content, and microblogging, to name a few areas of social media. But what about a unified personal profile based on the device instead of the usual authentication method? Yes, all sorts of security issues. But aren’t people unconcerned about security in the case of social media? Twitter accounts are being hacked left and right, yet Twitter doesn’t seem to suffer much. And there could be added security features on a personal device which is meant to really integrate social media. Some current personal devices already work well as a way to keep login credentials to multiple sites. The next step, there, would be to integrate all those social media services into the device itself. We may be waiting for OpenSocial, OpenID, OAuth, Facebook Connect, Google Connect, and all sorts of APIs to bring us to an easier “social media workflow.” But a personal device could simplify the “social media workflow” even further, with just a few OS-based tweaks.

Unlike my previous attempts, I’m not holding my breath for some specific event which will bring us the ultimate personal device. After all, this is just a new version of my ultimate handheld device blogpost. But, this time, I was focusing on what it means for a device to be “personal.” It’s even more of a drafty draft than my blogposts usually have been ever since I decided to really RERO.

So be it.

Speculating on Apple's Touch Strategy

This is mere speculation on my part, based on some rumours.

I’m quite sure that Apple will come up with a video-enabled iPod touch on September 9, along with iTunes 9 (which should have a few new “social networking” features). This part is pretty clear from most rumour sites.

AppleInsider | Sources: Apple to unveil new iPod lineup on September 9.

Progressively, Apple will be adopting a new approach to marketing its touch devices. Away from the “poor person’s iPhone” and into the “tiny but capable computer” domain. Because the 9/9 event is supposed to be about music, one might guess that there will be a cool new feature or two relating to music. Maybe lyrics display, karaoke mode, or whatever else. Something which will simultaneously be added to the iPhone but would remind people that the iPod touch is part of the iPod family. Apple has already been marketing the iPod touch as a gaming platform, so it’s not a radical shift. But I’d say the strategy is to make Apple’s touch devices increasingly attractive, without cannibalizing sales in the MacBook family.

Now, I really don’t expect Apple to even announce the so-called “Tablet Mac” in September. I’m not even that convinced that the other devices Apple is preparing for expansion of its touch devices lineup will be that close to the “tablet” idea. But it seems rather clear, to me, that Apple should eventually come up with other devices in this category. Many rumours point to the same basic notion, that Apple is getting something together which will have a bigger touchscreen than the iPhone or iPod touch. But it’s hard to tell how this device will fit, in the grand scheme of things.

It’s rather obvious that it won’t be a rebirth of the eMate the same way that the iPod touch wasn’t a rebirth of the MessagePad. But it would make some sense for Apple to target some educational/learning markets, again, with an easy-to-use device. And I’m not just saying this because the rumoured “Tablet Mac” makes me think about the XOXO. Besides, the iPod touch is already being marketed to educational markets through the yearly “Back to school” program which (surprise!) ends on the day before the September press conference.

I’ve been using an iPod touch (1st Generation) for more than a year, now, and I’ve been loving almost every minute of it. Most of the time, I don’t feel the need for a laptop, though I occasionally wish I could buy a cheap one, just for some longer writing sessions in cafés. In fact, a friend recently posted information about some Dell Latitude D600 laptops going for a very low price. That’d be enough for me at this point. Really, my iPod touch suffices for a lot of things.

Sadly, my iPod touch seems to have died, recently, after catching some moisture. If I can’t revive it and if the 2nd Generation iPod touch I bought through Kijiji never materializes, I might end up buying a 3rd Generation iPod touch on September 9, right before I start teaching again. If I can get my hands on a working iPod touch at a good price before that, I may save the money in preparation for an early 2010 release of a new touch device from Apple.

Not that I’m not looking at alternatives. But I’d rather use a device which shares enough with the iPod touch that I could migrate easily, synchronize with iTunes, and keep what I got from the App Store.

There’s a number of things I’d like to get from a new touch device. First among them is a better text entry/input method. Some of the others could be third-party apps and services. For instance, a full-featured sharing app. Or true podcast synchronization with media annotation support (à la Revver or Soundcloud). Or an elaborate, fully-integrated logbook with timestamps, Twitter support, and outlining. Or even a high-quality reference/bibliography manager (think RefWorks/Zotero/Endnote). But getting text into such a device without a hardware keyboard is the main challenge. I keep thinking about all sorts of methods, including MessagEase and Dasher as well as continuous speech recognition (dictation). Apple’s surely thinking about those issues. After all, they have some handwriting recognition systems that they aren’t really putting to any significant use.

Something else which would be quite useful is support for videoconferencing. Before the iPhone came out, I thought Apple might be coming out with an iChat Mobile. Though a friend announced the iPhone to me by making reference to this, the position of the camera at the back of the device and the fact that the original iPhone’s camera only supported still pictures (with the official firmware) made this dream die out for me. But a “Tablet Mac” with an iSight-like camera and some form of iChat would make a lot of sense, as a communication device. Especially since iChat already supports such things as screen-sharing and slides. Besides, if Apple does indeed move in the direction of some social networking features, a touch device with an expanded Address Book could take on a whole new dimension through just a few small tweaks.

This last part I’m not so optimistic about. Apple may know that social networking is important, at this point in the game, but it seems to approach it with about the same heart as it approached online services with eWorld, .Mac, and MobileMe. Of course, they have the tools needed to make online services work in a “social networking” context. But it’s possible that their vision is clouded by their corporate culture and some remnants of the NIH (“not invented here”) problem.

Ah, well…

Crazy App Idea: Happy Meter

I keep getting ideas for apps I’d like to see on Apple’s App Store for iPod touch and iPhone. This one may sound a bit weird but I think it could be fun. An app where you can record your mood and optionally broadcast it to friends. It could become rather sophisticated, actually. And I think it can have interesting consequences.

The idea mostly comes from Philippe Lemay, a psychologist friend of mine and fellow PDA fan. Haven’t talked to him in a while but I was just thinking about something he did, a number of years ago (in the mid-1990s). As part of an academic project, Philippe helped develop a PDA-based research program whereby subjects would record different things about their state of mind at intervals during the day. Apart from the neatness of the data gathering technique, this whole concept stayed with me. As a non-psychologist, I personally get the strong impression that recording your moods frequently during the day can actually be a very useful thing to do in terms of mental health.

And I really like the PDA angle. Since I think of the App Store as transforming Apple’s touch devices into full-fledged PDAs, the connection is rather strong between Philippe’s work at that time and the current state of App Store development.

Since that project of Philippe’s, a number of things have been going on which might help refine the “happy meter” concept.

One is that “lifecasting” became rather big, especially among certain groups of Netizens (typically younger people, but also many members of geek culture). Though the lifecasting concept applies mostly to video streams, there are connections with many other trends in online culture. The connection with vidcasting specifically (and podcasting generally) is rather obvious. But there are other connections. For instance, with mo-, photo-, or microblogging. Or even with all the “mood” apps on Facebook.

Speaking of Facebook as a platform, I think it meshes especially well with touch devices.

So, “happy meter” could be part of a broader app which does other things: updating Facebook status, posting tweets, broadcasting location, sending personal blogposts, listing scores in a Brain Age type game, etc.

Yet I think the “happy meter” could be useful on its own, as a way to track your own mood. “Turns out, my mood was improving pretty quickly on that day.” “Sounds like I didn’t let things affect me too much despite all sorts of things I was going through.”

As a mood-tracker, the “happy meter” should be extremely efficient. Because they’re quick and easy to use, I’m thinking of sliders. One main slider for general mood and different sliders for different moods and emotions. It would also be possible to extend the “entry form” on occasion, when the user wants to record more data about their mental state.

Of course, everything would be saved automatically and “sent to the cloud” on occasion. There could be a way to selectively broadcast some slider values. The app could conceivably send reminders to the user to update their mood at regular intervals. It could even serve as a “break reminder” feature. Though there are limitations on OS X iPhone in terms of interapplication communication, it’d be even neater if the app were able to record other things happening on the touch device at the same time, such as music which is playing or some apps which have been used.
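Since I keep talking about sliders and automatic saving, here’s a minimal sketch of what a single “happy meter” entry could look like as a data structure. Every name here is hypothetical; the design point worth noticing is that broadcasting stays off unless the user turns it on, per entry:

```swift
import Foundation

// Hypothetical data model for the "happy meter" idea: one main slider
// plus named sliders for specific moods, optional extended notes, and
// a per-entry sharing flag.
struct MoodEntry: Codable {
    let timestamp: Date
    var overallMood: Double          // main slider, 0.0 (low) to 1.0 (high)
    var moods: [String: Double]      // e.g. ["calm": 0.8, "energetic": 0.4]
    var note: String?                // optional extended "entry form" data
    var broadcast: Bool = false      // sharing is opt-in, per entry
}

// Only entries explicitly marked for broadcast would ever be
// "sent to the cloud"; everything else stays on the device.
func entriesToBroadcast(_ log: [MoodEntry]) -> [MoodEntry] {
    log.filter { $0.broadcast }
}

let entry = MoodEntry(timestamp: Date(),
                      overallMood: 0.7,
                      moods: ["calm": 0.8, "energetic": 0.4],
                      note: nil)
```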

Now, very obviously, there are lots of privacy issues involved. But what social networking services have taught us is that users can have pretty sophisticated notions of privacy management, if they’re given the chance. For instance, adept Facebook users may seem to indiscriminately post just about everything about themselves but are often very clear about what they want to “let out,” in context. So, clearly, every type of broadcasting should be controlled by the user. Opt-in, never opt-out.

I know this all sounds crazy. And it all might be a very bad idea. But the thing about letting my mind wander is that it helps me remain happy.

Visualizing Touch Devices in Education

Took me a while before I watched this concept video about iPhone use on campus.

Connected: The Movie – Abilene Christian University

Sure, it’s a bit campy. Sure, some features aren’t available on the iPhone yet. But the basic concepts are pretty much what I had in mind.

Among things I like in the video:

  • The very notion of student empowerment is at the centre of it.
  • Many of the class-related applications presented show an interest in the constructivist dimensions of learning.
  • Material is made available before class. Face-to-face time is for engaging with the material, not rehashing it.
  • The technology is presented as a way to ease the bureaucratic aspects of university life, relieving a burden on students (and, presumably, on everyone else involved).
  • The “iPhone as ID” concept is simple yet powerful, in context.
  • Social networks (namely Facebook and MySpace, in the video) are embedded in the campus experience.
  • Blended learning (called “hybrid” in the video) is conceived as an option, not as an obligation.
  • Use of the technology is specifically perceived as going beyond geek culture.
  • The scenarios (use cases) are quite realistic in terms of typical campus life in the United States.
  • While “getting an iPhone” is mentioned as a perk, it’s perfectly possible to imagine technology as a levelling factor within educational institutions, lowering some costs while raising the bar for pedagogical standards.
  • The shift from “eLearning” to “mLearning” is rather obvious.
  • ACU already does iTunes U.
  • The video is released under a Creative Commons license.

Of course, there are many directions things can go, from here. Not all of them are in line with the ACU dream scenario. But I’m quite hopeful, judging from some apparently random facts: that Apple may sell iPhones through universities, that Apple has plans for iPhone use on campuses, that many of the “enterprise features” of iPhone 2.0 could work in institutions of higher education, that the Steve Jobs keynote made several mentions of education, that Apple bundles the iPod touch with Macs, that the OLPC XOXO is now conceived more as a touch handheld than as a laptop, that (although delayed) Google’s Android platform can participate in the same usage scenarios, and that browser-based computing apparently has a bright future.

Handhelds for the Rest of Us?

Ok, it probably shouldn’t become part of my habits but this is another repost of a blog comment motivated by the OLPC XO.

This time, it’s a reply to Niti Bhan’s enthusiastic blogpost about the eeePC: Perspective 2.0: The little eeePC that could has become the real “iPod” of personal computing

This time, I’m heavily editing my comments. So it’s less of a repost than a new blogpost. In some ways, it’s partly a follow-up to my “Ultimate Handheld Device” post (which ended up focusing on spatial positioning).

Given the OLPC context, the angle here is, hopefully, a culturally aware version of “a handheld device for the rest of us.”

Here goes…

I think there’s room in the World for a device category more similar to handhelds than to subnotebooks. Let’s call it “handhelds for the rest of us” (HftRoU). Something between a cellphone, a portable gaming console, a portable media player, and a personal digital assistant. Handheld devices exist which cover most of these features/applications, but I’m mostly using this categorization to think about the future of handhelds in a globalised World.

The “new” device category could serve as the inspiration for a follow-up to the OLPC project. One thing about which I keep thinking, in relation to the “OLPC” project, is that the ‘L’ part was too restrictive. Sure, laptops can be great tools for students, especially if these students are used to (or need to be trained in) working with and typing long-form text. But I don’t think that laptops represent the most “disruptive technology” around. If we think about their global penetration and widespread impact, cellphones are much closer to the leapfrog effect about which we all have been writing.

So, why not just talk about a cellphone or smartphone? Well, I’m trying to think both more broadly and more specifically. Cellphones are already helping people empower themselves. The next step might be to add selected features which bring them closer to the OLPC dream. Also, since cellphones are widely distributed already, I think it’s important to think about devices which may complement cellphones. I have some ideas about non-handheld tools which could make cellphones even more relevant in people’s lives. But they will have to wait for another blogpost.

So, to put it simply, “handhelds for the rest of us” (HftRoU) are somewhere between the OLPC XO-1 and Apple’s original iPhone, in terms of features. In terms of prices, I dream that it could be closer to that of basic cellphones which are in the hands of so many people across the globe. I don’t know what that price may be but I heard things which sounded like a third of the price the OLPC originally had in mind (so, a sixth of the current price). Sure, it may take a while before such a low cost can be reached. But I actually don’t think we’re in a hurry.

I guess I’m just thinking of the electronics (and global) version of the Ford Model T. With more solidarity in mind. And cultural awareness.

Google’s Open Handset Alliance (OHA) may produce something more appropriate to “global contexts” than Apple’s iPhone. In comparison with Apple’s iPhone, devices developed by the OHA could be better adapted to the cultural, climatic, and economic conditions of those people who don’t have easy access to the kind of computers “we” take for granted. At the very least, the OHA has good representation on at least three continents and, like the old OLPC project, the OHA is officially dedicated to openness.

I actually care fairly little about which teams will develop devices in this category. In fact, I hope that new manufacturers will spring up in some local communities and that major manufacturers will pay attention.

I don’t care about who does it; I’m mostly interested in what the devices will make possible for their users. Learning, broadly speaking. Communicating, in different ways. Empowering themselves, generally.

One thing I have in mind, and which deviates from the OLPC mission, is that there should be appropriate handheld devices for all age-ranges. I do understand the focus on 6-12 year-olds the old OLPC had. But I don’t think it’s very productive to only sell devices to that age-range. Especially not in those parts of the world (i.e., almost anywhere) where generation gaps don’t imply that children are isolated from adults. In fact, as an anthropologist, I react rather strongly to the thought that children should be the exclusive target of a project meant to empower people. But I digress, as always.

I don’t tend to be a feature-freak but I have been thinking about the main features the prototypical device in this category should have. It’s not a rigid set of guidelines. It’s just a way to think out loud about technology’s integration in human life.

The OS and GUI, which seem like major advantages of the eeePC, could certainly be of the mobile/handheld type instead of the desktop/laptop type. The usual suspects: Symbian, NewtonOS, Android, Zune, PalmOS, Cocoa Touch, embedded Linux, Playstation Portable, WindowsCE, and Nintendo DS. At a certain level of abstraction, there are so many commonalities between all of these that it doesn’t seem very efficient to invent a completely new GUI/OS “paradigm,” like OLPC’s Sugar was apparently trying to do.

The HftRoU require some form of networking or wireless connectivity feature. WiFi (802.11*), GSM, UMTS, WiMAX, Bluetooth… Doesn’t need to be extremely fast, but it should be flexible and it absolutely cannot be cost-prohibitive. IP might make much more sense than, say, SMS/MMS, but a lot can be done with any kind of data transmission between devices. XO-style mesh networking could be a very interesting option. As VoIP has proven, voice can efficiently be transmitted as data so “voice networks” aren’t necessary.

My sense is that a multitouch interface with an accelerometer would be extremely effective. Yes, I’m thinking of Apple’s Touch devices and MacBooks. As well as the Microsoft Surface and Jeff Han’s Perceptive Pixel. One thing all of these have shown is how “intuitive” it can be to interact with a machine using gestures. Haptic feedback could also be useful but I’m not convinced it’s “there yet.”

I’m really not sure a keyboard is very important. In fact, I think that keyboard-focused laptops and tablets are the wrong basis for thinking about “handhelds for the rest of us.” Bear in mind that I’m not thinking about devices for would-be office workers or even programmers. I’m thinking about the broadest user base you can imagine. “The Rest of Us” in the sense of, those not already using computers very directly. And that user base isn’t that invested in (or committed to) touch-typing. Even people who are very literate don’t tend to be extremely efficient typists. If we think about global literacy rates, typing might be one thing which needs to be leapfrogged. After all, a cellphone keypad can be quite effective in some hands and there are several other ways to input text, especially if typing isn’t too ingrained in you. Furthermore, keyboards aren’t that convenient in multilingual contexts (i.e., in most parts of the world). I say: avoid the keyboard altogether, make it available as an option, or use a virtual one. People will complain. But it’s a necessary step.

If the device is to be used for voice communication, some audio support is absolutely required. Even if voice communication isn’t part of it (and I’m not completely convinced it’s the one required feature), audio is very useful, IMHO (I’m an aural guy). In some parts of the world, speakers are much favoured over headphones or headsets. But I personally wish that at least some HftRoU could have external audio inputs/outputs. Maybe through USB or an iPod-style connector.

A voice interface would be fabulous, but there still seem to be technical issues with both speech recognition and speech synthesis. I used to work in that field and I keep dreaming, like Bill Gates and others do, that speech will finally take the world by storm. But maybe the time still hasn’t come.

It’s hard to tell what size the screen should be. There probably needs to be a range of devices with varying screen sizes. Apple’s Touch devices prove that you don’t need a very large screen to have an immersive experience. Maybe some HftRoU screens should in fact be larger than that of an iPhone or iPod touch. Especially if people are to read or write long-form text on them. Maybe the eeePC had it right. Especially if the devices’ form factor is more like a big handheld than like a small subnotebook (i.e., slimmer than an eeePC). One reason form factor matters, in my mind, is that it could make the devices “disappear.” That, and the difference between having a device on you (in your pocket) and carrying a bag with a device in it. Form factor was a big issue with my Newton MessagePad 130. As the OLPC XO showed, cost and power consumption are also important issues regarding screen size. I’d vote for a range of screens between 3.5 inch (iPhone) and 8.9 inch (eeePC 900) with a rather high resolution. A multitouch version of the XO’s screen could be a major contribution.

In terms of both audio and screen features, some consideration should be given to adaptive technologies. Most of us take for granted that “almost anyone” can hear and see. We usually don’t perceive major issues in the fact that “personal computing” typically focuses on visual and auditory stimuli. But if these devices truly are “for the rest of us,” they could help empower visually- or hearing-impaired individuals, who are often marginalized. This is especially relevant in the logic of humanitarianism.

HftRoU need as much autonomy from a power source as possible. Both in terms of the number of hours devices can be operated without needing to be connected to a power source and in terms of flexibility in power sources. Power management is a major technological issue for portable, handheld, and mobile devices. Engineers are hard at work, trying to find as many solutions to this issue as they can. This was, obviously, a major area of research for the OLPC. But I’m not even sure the solutions they have found are the only relevant ones for what I imagine HftRoU to be.

GPS could have interesting uses, but doesn’t seem very cost-effective. Other “wireless positioning systems” (à la Skyhook) might represent a more rational option. Still, I think positioning systems are one of the next big things. Not only for navigation or for location-based targeting. But for a set of “unintended uses” which are the hallmark of truly disruptive technology. I still remember an article (probably in the venerable Wired magazine) about the use of GPS/GIS for research into climate change. Such “unintended uses” are, in my mind, much closer to the constructionist ideal than the OLPC XO’s unified design can ever get.

Though a camera seems to be a given in any portable or mobile device (even the OLPC XO has one), I’m not yet that clear on how important it really is. Sure, people like taking pictures or filming things. Yes, pictures taken through cellphones have had a lasting impact on social and cultural events. But I still get the feeling that the main reason cameras are included on so many devices is for impulse buying, not as a feature to be used so frequently by all users. Also, standalone cameras probably have a rather high level of penetration already and it might be best not to duplicate this type of feature. But, of course, a camera could easily be a differentiating factor between two devices in the same category. I don’t think that cameras should be absent from HftRoU. I just think it’s possible to have “killer apps” without cameras. Again, I’m biased.

Apart from networking/connectivity uses, Bluetooth seems like a luxury. Sure, it can be neat. But I don’t feel it adds that much functionality to HftRoU. Yet again, I could be proven wrong. Especially if networking and other inter-device communication are combined. At some abstract level, there isn’t that much difference between exchanging data across a network and controlling a device with another device.

Yes, I do realize I pretty much described an iPod touch (or an iPhone without camera, Bluetooth, or cellphone fees). I’ve been lusting over an iPod touch since September and it does colour my approach. I sincerely think the iPod touch could serve as an inspiration for a new device type. But, again, I care very little about which company makes that device. I don’t even care about how open the operating system is.

As long as our minds are open.

Touch Thoughts: Apple's Handheld Strategy

I’m still under Apple’s RDF (reality distortion field).

Apple’s March 6, 2008 event was about enterprise and development support for its iPhone and iPod touch lines of handheld devices. Lots to think about.

(For convenience’s sake, I’ll lump together the iPod touch and the iPhone under the name “Touch,” which seems consistent with Apple’s “Cocoa Touch.”)

Been reading a fair bit about this event. Interesting reactions across the board.

My own thoughts on the whole thing.

I appreciate the fact that Phil Schiller began the “enterprise” section of the event with comments about a university. Though universities need not be run like profit-hungry corporations, linking Apple’s long-standing educational focus with its newly invigorated enterprise focus makes sense. And I had a brief drift-off moment as I was thinking about Touch products in educational contexts.

I’m surprised at how enthusiastic I get about the enterprise features. Suddenly, I can see Microsoft’s Exchange make sense.

I get the clear impression that even more things will come into place at the end of June than Apple has said. Possibly new Touch models or lines. Probably the famous 3G iPhone. Apple-released apps. Renewed emphasis on server technology (XServe, Mac OS X Server, XSan…). New home WiFi products (AirPort, Time Capsule, Apple TV…). New partnerships. Cool VC-funded startups. New features on the less aptly named “iTunes” store.

Though it was obvious already, the accelerometer is an important feature. It seems especially well-adapted to games, and casual gamers like myself are likely to enjoy the games this feature makes possible. It can also lead to very interesting applications. In fact, the “Etch and Sketch” demo was rather convincing as a display of some core Touch features. These are exactly the features which help sell products.

Actually, I enjoyed the “wow factor” of the event’s demos. I’m convinced that it will energize developers and administrators, whether or not they plan on using Touch products. Some components of Apple’s Touch strategy are exciting enough that the more problematic aspects of this strategy may matter a bit less. Those of us dreaming about Android, OpenMoko, or even a revived NewtonOS can still find things to get inspired by in Apple’s roadmap.
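Back to the accelerometer for a second: here’s a minimal sketch of the kind of tilt polling a casual game might do. I’m using CoreMotion, which postdates this event (the 2008-era API was UIAccelerometer), and the thresholds are invented:

```swift
import CoreMotion

// Minimal sketch (iOS, CoreMotion): polling the accelerometer at game
// rate and mapping device tilt to movement, Etch A Sketch style.
let motion = CMMotionManager()

func startTiltUpdates() {
    guard motion.isAccelerometerAvailable else { return }
    motion.accelerometerUpdateInterval = 1.0 / 60.0  // 60 Hz
    motion.startAccelerometerUpdates(to: OperationQueue.main) { data, _ in
        guard let a = data?.acceleration else { return }
        // Ignore tiny jitters; the 0.05 dead zone is an invented threshold.
        if abs(a.x) > 0.05 || abs(a.y) > 0.05 {
            print("move by (\(a.x), \(a.y))")
        }
    }
}
```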

What’s to come, apart from what was announced? No idea. But I do daydream about all of this.

I’m especially interested in the idea of Apple Touch as a “mainstream, WiFi, mobile platform.” There’s a lot of potential for Apple-designed, WiFi-enabled handhelds. Whether or not they include a cellphone.

At this point, Apple only makes five models of Touch products: three iPod touches and two iPhones. Flash memory is the main differentiating factor within a line. It makes it relatively easy to decide which device to get, but some product diversity could be interesting. While some people expect/hope that Apple will release radically new form factors for Touch devices (e.g., a tablet subnotebook), it’s quite likely that other features will help distinguish Apple’s Touch hardware.

Among features I’d like to get through software, add-ons, or included in a Touch product? A number of things, some alluded to in the “categories” for this post. Some of these I had already posted about.

  • Quality audio recording (to make it the ideal fieldwork audio tool).
  • eBook support (to compete with Amazon’s Kindle).
  • Voice support (including continuous dictation, voice interface…).
  • Enhanced support for podcasting (interacting with podcasts, sending audio/video responses…).
  • Video conferencing (been thinking about this for a while).
  • GPS (location will be big).
  • Mesh networking (a neat feature of OLPC’s XO).
  • Mobile WiMAX (unlikely, but it could be neat).
  • Battery pack (especially for long trips in remote regions).
  • Add-on flash memory (unlikely, but it could be useful, especially for backup).
  • Offline storage of online content (likely, but worth noting).
  • Inexpensive model (especially for “emerging markets”).
  • Access to 3G data networks without cellular “voice plan” (unlikely, but worth a shot).
  • Alternative input methods (MessagEase, Graffiti, adaptive keyboard, speech recognition…).
  • Use as Mac OS X “host” (kind of like a user partition).
  • Bluetooth/WiFi data transfer (no need for cables and docks).
  • MacBook Touch (unlikely, especially with MacBook Air, but it could be fun).
  • Automatic cell to VoIP-over-WiFi switching (saving cell minutes).

Of course, there are many obvious ones which will likely be implemented in software. I’m already impressed by the Omni Group’s pledge to develop a Touch version of their flagship GTD app.

I Want It All: The Ultimate Handheld Device?

In a way, this is a short version of a couple of posts I’ve been planning. RERO‘s better than keeping drafts.

So, what do I want in the ultimate handheld device? Basically, everything. More specifically, I’ve been thinking about the advantages of merging technologies.

At first, I was mostly thinking about “wireless” in general. Something which could bring together WiFi (802.11), WiMAX, and (3G) cellular networks. The idea being that you can get the advantages from all of these so that the device can be online pretty much all the time. It’s a pipedream, of course, but it’s a fun dream to have.

And then, the release of location services on the iPhone and iPod touch made me think about some kind of hybrid positioning system, using GPS, Google’s cellphone-based positioning, and Skyhook‘s Wi-Fi Positioning System (WPS).

A recent article in USA Today explains Skyhook’s strategy:

Jobs, iPhone have Skyhook pointed in right direction – USATODAY.com

And the Skyhook site itself has some interesting scenarios for WPS use in navigation, social networking, content management, location-specific marketing, gaming, and tracking. It seems rather clear to me that positioning systems in general have a rather bright future. I also don’t really see a reason for one positioning system to exclude the others (apart from technological and financial issues).
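To make that “no exclusion” idea slightly more concrete, here’s a toy sketch of fusing fixes from several systems (GPS, cell, WPS), weighting each one by the inverse square of its reported accuracy. Real hybrid positioning involves much more (motion models, filtering), so this is illustration only; all numbers are invented:

```swift
import Foundation

// Toy sketch: combine position fixes from several systems by weighting
// each estimate with the inverse square of its reported accuracy.
struct PositionFix {
    let latitude: Double
    let longitude: Double
    let accuracy: Double  // estimated error radius, in metres
}

func fuse(_ fixes: [PositionFix]) -> PositionFix? {
    guard !fixes.isEmpty else { return nil }
    let weights = fixes.map { 1.0 / ($0.accuracy * $0.accuracy) }
    let total = weights.reduce(0, +)
    let lat = zip(fixes, weights).map { $0.latitude * $1 }.reduce(0, +) / total
    let lon = zip(fixes, weights).map { $0.longitude * $1 }.reduce(0, +) / total
    // Rough combined accuracy for a weighted average of independent estimates.
    return PositionFix(latitude: lat, longitude: lon, accuracy: (1.0 / total).squareRoot())
}

let fused = fuse([
    PositionFix(latitude: 45.5017, longitude: -73.5673, accuracy: 10),   // GPS
    PositionFix(latitude: 45.5020, longitude: -73.5680, accuracy: 150),  // cell
    PositionFix(latitude: 45.5016, longitude: -73.5670, accuracy: 40),   // WPS
])
```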

Positioning will be especially useful if it ever becomes really commonplace. Part network effect, part glocalization.

Of course, there are still several issues to solve. Including privacy and safety concerns. But a good system would make it possible for the user to control her/his positioning information (when and where the user’s coordinates are made available, and how precise they are allowed to be). Even without positioning systems, many of us have been using online mapping services (including Google Maps) to reveal some details about our movements. Typically, we’re fine with even perfect strangers knowing that we’ve been through a public space in the past yet we may only provide precise and up-to-date location details to people we trust. There’s no reason a positioning system on a handheld device should only work in one situation.
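What could that control look like? A toy sketch: the same coordinates get rounded more or less coarsely depending on how much the requester is trusted. The trust levels and decimal counts are invented:

```swift
import Foundation

// Toy sketch of user-controlled precision: coarsen coordinates
// according to who is asking.
enum TrustLevel {
    case trusted, acquaintance, stranger

    // Decimal places kept: roughly 11 m, 1.1 km, and 111 km at the equator.
    var decimals: Int {
        switch self {
        case .trusted: return 4
        case .acquaintance: return 2
        case .stranger: return 0
        }
    }
}

func coarsen(_ value: Double, to decimals: Int) -> Double {
    let factor = pow(10.0, Double(decimals))
    return (value * factor).rounded() / factor
}

func shareableCoordinates(lat: Double, lon: Double, with level: TrustLevel) -> (Double, Double) {
    (coarsen(lat, to: level.decimals), coarsen(lon, to: level.decimals))
}

// A perfect stranger sees only the rough area: (46.0, -74.0).
let blurred = shareableCoordinates(lat: 45.5017, lon: -73.5673, with: .stranger)
```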

Now, I’m not saying that positioning is the “ultimate handheld device’s killer app.” But positioning is the kind of feature which opens up all sorts of possibilities.

And, actually, I’ve been thinking about GPS devices for quite a while. Unfortunately, most of them are either quite expensive or meant almost exclusively for car navigation or for outdoor activities. As a non-wealthy compulsive pedestrian who hasn’t been doing much outdoors in recent years, a dedicated GPS device never seemed that reasonable a purchase.

But as a semi-nomadic ethnographer, I often wished I had an easy way to record where I was. In fact, a positioning-enabled handheld device could be quite useful in ethnographic fieldwork. Several things could be made easier if we were able to geotag field material (including fieldnotes, still pictures, and audio recordings). And, of course, colleagues in archeology have been using GPS and GIS for quite a while.
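As a purely hypothetical illustration, a geotagged field record wouldn’t need to be complicated:

```swift
import Foundation

// Hypothetical sketch: one geotagged piece of field material, so that
// fieldnotes, pictures, and recordings can later be mapped and queried.
struct FieldRecord {
    enum Kind { case note, photo, audio }
    let kind: Kind
    let timestamp: Date
    let latitude: Double   // from the device's positioning system
    let longitude: Double
    let payload: String    // note text or a path to the media file
}

let fieldnote = FieldRecord(kind: .note,
                            timestamp: Date(),
                            latitude: 45.5017,
                            longitude: -73.5673,
                            payload: "Vendors rearranging stalls before the rain.")
```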

Of course, any smartphone with a positioning system could help. Apple’s iPhone is one and we already know that smartphones compatible with Google’s Android will be able to have location-based functionalities. Given Google’s lead in terms of maps and cellphone-based positioning, those Android devices do sound rather close to the ultimate handheld device.