Category Archives: qualitative research

Wearable Hub: Getting the Ball Rolling

Statement

After years of hype, wearable devices are happening. What wearable computing lacks is a way to integrate devices into a broader system.

Disclaimer/Disclosure/Warning

  • For the past two months or so, I’ve been taking notes about this “wearable hub” idea (started around CES’s time, as wearable devices like the Pebble and Google Glass were discussed with more intensity). At this point, I have over 3000 words in notes, which probably means that I’d have enough material for a long essay. This post is just a way to release a few ideas and to “think aloud” about what wearables may mean.
  • Some of these notes have to do with the fact that I started using a few wearable devices to monitor my activities, after a health issue pushed me to start doing some exercise.
  • I’m not a technologist nor do I play one on this blog. I’m primarily an ethnographer, with diverse interests in technology and its implications for human beings. I do research on technological appropriation and some of the courses I teach relate to the social dimensions of technology. Some of the approaches to technology that I discuss in those courses relate to constructionism and Actor-Network Theory.
  • I consider myself a “geek ethnographer” in the sense that I take part in geek culture (and have come out as a geek) but I’m also an outsider to geekdom.
  • Contrary to the likes of McLuhan, Carr, and Morozov, my perspective on technology and society is non-deterministic. The way I use them, “implication” and “affordance” aren’t about causal effects or, even, about direct connections. I’m not saying that society is causing technology to appear nor am I proposing a line from tools to social impacts. Technology and society are in a complex system.
  • Further, my approach isn’t predictive. I’m not saying what will happen based on technological advances nor am I saying what technology will appear. I’m thinking about the meaning of technology in an intersubjective way.
  • My personal attitude on tools and gadgets is rather ambivalent. This becomes clear as I go back and forth between techno-enthusiastic contexts (where I can almost appear like a Luddite) and techno-skeptical contexts (where some might label me as a gadget freak). I integrate a number of tools in my life but I can be quite wary about them.
  • I’m not wedded to the ideas I’m putting forth, here. They’re just broad musings of what might be. More than anything, I hope to generate thoughtful discussion. That’s why I start this post with a broad statement (not my usual style).
  • Of course, I know that other people have had similar ideas and I know that a concept of “wearable hub” already exists. It’s obvious enough that it’s one of these things which can be invented independently.

From Wearables to Hubs

Back in the 1990s, “wearable computing” became something of a futuristic buzzword, often having to do with articles of clothing. There have been many experiments and prototypes converging on an idea that we would, one day, be able to wear something resembling a full computer. Meanwhile, “personal digital assistants” became something of a niche product and embedded systems became an important dimension of car manufacturing.

Fast-forward to 2007, when a significant shift in the use of smartphones occurred. Smartphones existed before that time, but their usages, meanings, and positions in the public discourse changed quite radically around the time of the iPhone’s release. Not that the iPhone itself “caused a smartphone revolution” or that smartphone adoption suddenly reached a “tipping point”. I conceive of this shift as a complex interplay between society and tools. Not only more Kuhn than Popper, but more Latour than Kurzweil.

Smartphones, it may be argued, “happened”.

Without being described as “wearable devices”, smartphones started playing some of the functions people might have assigned to wearable devices. The move was subtle enough that Limor Fried recently described it as a realization she’s been having. Some tech enthusiasts may be designing location-aware purses and heads-up displays in the form of glasses, but smartphones are already doing a lot of the things wearables were supposed to do. Many people “wear” smartphones at most times during their waking lives and these Internet-connected devices are full of sensors. With the proliferation of cases, one might even perceive some of them as fashion accessories, like watches and sunglasses.

Where smartphones become more interesting, in terms of wearable computing, is as de facto wearable hubs.

My Wearable Devices

Which brings me to the four sensors I’ve been using most extensively during the past two months:

Yes, these all have to do with fitness (and there’s quite a bit of overlap between them). And, yes, I started using them a few days after the New Year. But it’s not about holiday gifts or New Year’s resolutions. I’ve had some of these devices for a while and decided to use them after consulting with a physician about hypertension. Not only have they helped me quite a bit in solving some health issues, but these devices got me to think.

(I carry several other things with me at most times. Some of my favourites include Tenqa REMXD Bluetooth headphones and the LiveScribe echo smartpen.)

One aspect is that they’re all about the so-called “quantified self”. As a qualitative researcher, I tend to be skeptical of quants. In this case, though, the stats I’m collecting about myself fit with my qualitative approach. Along with quantitative data from these devices, I’ve started collecting qualitative data about my life. The next step is to integrate all those data points automatically.

These sensors are also connected to “gamification”, a tendency I find worrisome, as I prefer playfulness. Though game mechanics are applied to the use of these sensors, I choose to rely on my intrinsic motivation, not paying much attention to scores and badges.

But the part which pushed me to start taking the most notes was that all these sensors connect with my iOS (and Android) devices. And this is where the “wearable hub” comes into play. None of these devices is autonomous. They’re all part of my personal “arsenal”, the equipment I have on me on most occasions. Though there are many similarities between them, they still serve different purposes, which are much more limited than those “wearable computers” might have been expected to serve. Without a central device serving as a type of “hub”, these sensors wouldn’t be very useful. This “hub” need not be a smartphone, despite the fact that, by default, smartphones are taken to be the key piece in this kind of setup.

In my personal scenario, I do use a smartphone as a hub. But I also use tablets. And I could easily use an existing device of another type (say, an iPod touch), or even a new type of device meant to serve as a wearable hub. Smartphones’ “hub” affordances aren’t exclusive.

From Digital Hub to Wearable Hub

Most of the devices which would likely serve as hubs for wearable sensors can be described as “Post-PC”. They’re clearly “personal” and they’re arguably “computers”. Yet they’re significantly different from the “Personal Computers” which have been so important at the end of last century (desktop and laptop computers not used as servers, regardless of the OS they run).

Wearability is a key point, here. But it’s not just a matter of weight or form factor. A wearable hub needs to be wireless in at least two important ways: independent from a power source and connected to other devices through radio waves. The fact that they’re worn at all times also implies a certain degree of integration with other things carried throughout the day (wallets, purses, backpacks, pockets…). These devices may also be more “personal” than PCs because they’re more apparent and more amenable to customization.

Smartphones fit the bill as wearable hubs. Their form factors and battery life make them wearable enough. Bluetooth (or ANT+, Nike+, etc.) has been used to pair them wirelessly with sensors. Their GPS and cellular connectivity, as well as their audio and visual I/O, can have interesting uses (mapping a walk, data updates during a commute, voice feedback…). And though they’re far from ubiquitous, smartphones have become quite common in key markets.

Part of the reason I keep thinking about “hubs” has to do with comments made in 2001 by then-Apple CEO Steve Jobs about the “digital lifestyle” age in “PC evolution” (video of Jobs’s presentation; as an anthropologist, I’ll refrain from commenting on the evolutionary analogies):

We believe the PC, or more… importantly, the Mac can become the “digital hub” of our emerging digital lifestyle, with the ability to add tremendous value to … other digital devices.

… like camcorders, portable media players, cellphones, digital cameras, handheld organizers, etc. (Though they weren’t mentioned, other peripherals like printers and webcams also connect to PCs.)

The PC was thus going to serve as a hub, “not only adding value to these devices but interconnecting them, as well”.

At the time, these were the key PC affordances which distinguished it from those other digital devices:

  • Big screen affording more complex user interfaces
  • Large, inexpensive hard disk storage
  • Burning DVDs and CDs
  • Internet connectivity, especially broadband
  • Running complex applications (including media processing software like the iLife suite)

Though Jobs pinpointed iLife applications as the basis for this “digital hub” vision, it sounds like FireWire was meant to be an even more important part of this vision. Of course, USB has supplanted FireWire in most use cases. It’s interesting, then, to notice that Apple only recently started shipping Macs with USB 3. In fact, DVD burning is absent from recent Macs. In 2001, the Mac might have been at the forefront of this “digital lifestyle” age. In 2013, the Mac has moved away from its role as “digital hub”.

In the meantime, the iPhone has become one of the best known examples of what I’m calling “wearable hubs”. It has a small screen and small, expensive storage (by today’s standards). It also can’t burn DVDs. But it does have nearly-ubiquitous Internet connectivity and can run fairly complex applications, some of which are adapted from the iLife suite. And though it does have wired connectivity (through Lightning or the “dock connector”), its main hub affordances have to do with Bluetooth.

It’s interesting to note that the same Steve Jobs, who used the “digital hub” concept to explain that the PC wasn’t dead in 2001, is partly responsible for popularizing the concept of “post-PC devices” six years later. One might perceive hypocrisy in this much delayed apparent flip-flop. On the other hand, Steve Jobs’s 2007 comments (video) were somewhat nuanced, as to the role of post-PC devices. What’s more interesting, though, is to think about the implications of the shift between two views of digital devices, regardless of Apple’s position through that shift.

Some post-PC devices (including the iPhone, until quite recently) do require a connection to a PC. In this sense, a smartphone might maintain its position with regards to the PC as digital hub. Yet, some of those devices are used independently of PCs, including by some people who never owned PCs.

Post-Smartphone Hubs

It’s possible to imagine a wearable hub outside of the smartphone (and tablet) paradigm. While smartphones are a convenient way to interconnect wearables, their hub-related affordances still sound limited: they lack large displays and their storage space is quite expensive. Their battery life may also be something to consider in terms of serving as hubs. Their form factors make some sense when they function as phones, yet have little to do with their use as hubs.

Part of the realization, for me, came from the fact that I’ve been using a tablet as something of an untethered hub. Since I use Bluetooth headphones, I can listen to podcasts and music while my tablet is in my backpack without being entangled in a cable. Sounds trivial but it’s one of these affordances I find quite significant. Delegating music playing functions to my tablet relates in part to battery life and use of storage. The tablet’s display has no importance in this scenario. In fact, given some communication between devices, my smartphone could serve as a display for my tablet. So could a “smartwatch” or “smartglasses”.

The Body Hub

Which led me to think about other devices which would work as wearable hubs. I originally thought about backpackable and pocketable devices.

But a friend had a more striking idea:

Under Armour’s Recharge Energy Suit may be an extreme version of this, one which would fit nicely among things Cathi Bond likes to discuss with Nora Young on The Sniffer. Nora herself has been discussing wearables on her blog as well as on her radio show. Sure, part of this concept is quite futuristic. But a sensor mesh undershirt is a neat idea for several reasons.

  • It’s easy to think of various sensors it may contain.
  • Given its surface area, it could hold enough battery power to supplement other devices.
  • It can be quite comfortable in cold weather and might even help dissipate heat in warmer climates.
  • Though wearable, it need not be visible.
  • Thieves would probably have a hard time stealing it.
  • Vibration and haptic feedback on the body can open interesting possibilities.

Not that it’s the perfect digital hub; I’m sure there are multiple objections to a connected undershirt (including issues with radio signals). But I find the idea rather fun to think about, partly because it’s so far away from the use of phones, glasses, and watches as smart devices.

Another thing I find neat, and it may partly be a coincidence, is the very notion of a “mesh”.

The Wearable Mesh

Mesh networking is a neat concept, which generates more hype than practical uses. As an alternative to WiFi access points and cellular connectivity, it’s unclear whether it will ever “take the world by storm”. But as a way to connect personal devices, it might have some potential. After all, as Bernard Benhamou recently pointed out on France Culture’s Place de la toile, the Internet of Things may not require always-on full-bandwidth connectivity. Typically, wearable sensors use fairly little bandwidth or only use it for limited amounts of time. A wearable mesh could connect wearable devices to one another while also exchanging data through the Internet itself.

Or with local devices. Smart cities, near field communication, and digital appliances occupy interesting positions among widely-discussed tendencies in the tech world. They may all have something to do with wearable devices. For instance, data exchanged between transit systems and their users could go through wearable devices. And while mobile payment systems can work through smartphones and other cellphones, wallet functions can also be fulfilled by other wearable devices.

Alternative Futures

Which might provide an appropriate segue into the ambivalence I feel toward the “wearable hub” concept I’m describing. Though I propose these ideas as if I were enthusiastic about them, they all give me pause. As a big fan of critical thinking, I like to think about “what might be” to generate questions and discussions exposing a diversity of viewpoints about the future.

Mass media discussions about these issues tend to focus on such things as privacy, availability, norms, and usefulness. Google Glass has generated quite a bit of buzz about all four. Other wearables may mainly raise issues for one or two of these broad dimensions. But the broad domain of wearable computing raises a lot more issues.

Technology enthusiasts enjoy discussing issues through the dualism between dystopia and utopia. An obvious issue with this dualism is that humans disagree about the two categories. Simply put, one person’s dystopia can be another person’s utopia, not to mention the nuanced views of people who see complex relationships between values and social change.

In such a context, a sociologist’s reflex may be to ask about the implications of these diverse values and opinions. For instance:

  • How do people construct these values?
  • Who decides which values are more important?
  • How might social groups cope with changes in values?

Discussing these issues and more, in a broad frame, might be quite useful. Some of the trickiest issues are raised after some changes in technology have already happened. From writing to cars, any technological context has unexpected implications. An ecological view of these implications could broaden the discussion.

I tend to like the concept of the “drift-off moment”, during which listeners (or readers) start thinking about the possibilities afforded by a new tool (or concept). In the context of a sales pitch, the idea is that these possibilities are positive: a potential buyer is thinking about the ways she might use a newfangled device. But I also like the deeper process of thinking about all sorts of implications, regardless of their value.

So…

What might be the implications of a wearable hub?

WordPress as Content Directory: Getting Somewhere

{I tend to ramble a bit. If you just want a step-by-step tutorial, you can skip ahead to the walkthrough below.}

Woohoo!

I feel like I’ve reached a milestone in a project I’ve had in mind, ever since I learnt about Custom Post Types in WordPress 3.0: Using WordPress as a content directory.

The concept may not be so obvious to anyone else, but it’s very clear to me. And probably much clearer for anyone who has any level of WordPress skills (I’m still a kind of WP newbie).

Basically, I’d like to set something up through WordPress to make it easy to create, review, and publish entries in content databases. WordPress is now a Content Management System and the type of “content management” I’d like to enable has to do with something of a directory system.

Why WordPress? Almost glad you asked.

These days, several of the projects on which I work revolve around WordPress. By pure coincidence. Or because WordPress is “teh awsum.” No idea how representative my sample is. But I got to work on WordPress for (among other things): an academic association, an adult learners’ week, an institute for citizenship and social change, and some of my own learning-related projects.

There are people out there arguing about the relative value of WordPress and other Content Management Systems. Sometimes, WordPress may fall short of people’s expectations. Sometimes, the pro-WordPress rhetoric is strong enough to sound like fanboism. But the matter goes beyond marketshare, opinions, and preferences.

In my case, WordPress just happens to be a rather central part of my life, these days. To me, it’s both a question of WordPress being “the right tool for the job” and the work I end up doing being appropriate for WordPress treatment. More than a simple causality (“I use WordPress because of the projects I do” or “I do these projects because I use WordPress”), it’s a complex interaction which involves diverse tools, my skillset, my social networks, and my interests.

Of course, WordPress isn’t perfect nor is it ideal for every situation. There are cases in which it might make much more sense to use another tool (Twitter, TikiWiki, Facebook, Moodle, Tumblr, Drupal…). And there are several things I wish WordPress did more elegantly (such as integrating all dimensions in a single tool). But I frequently end up with WordPress.

Here are some things I like about WordPress:

This last one is where the choice of WordPress for content directories starts making the most sense. Not only is it easy for me to use and build on WordPress but the learning curves are such that it’s easy for me to teach WordPress to others.

A nice example is the post editing interface (same in the software and service). It’s powerful, flexible, and robust, but it’s also very easy to use. It takes a few minutes to learn and is quite sufficient to do a lot of work.

This is exactly where I’m getting to the core idea for my content directories.

I emailed the following description to the digital content editor for the academic organization for which I want to create such content directories:

You know the post editing interface? What if instead of editing posts, someone could edit other types of contents, like syllabi, calls for papers, and teaching resources? What if fields were pretty much like the form I had created for [a committee]? What if submissions could be made by people with a specific role? What if submissions could then be reviewed by other people, with another role? What if display of these items were standardised?

Not exactly sure how clear my vision was in her head, but it’s very clear for me. And it came from different things I’ve seen about custom post types in WordPress 3.0.

For instance, the following post has been quite inspiring:

I almost had a drift-off moment.

But I wasn’t able to wrap my head around all the necessary elements. I perused a number of things about custom post types and tried a few experiments. But I always got stuck at some point.

Recently, a valuable piece of the puzzle was provided by Kyle Jones (whose blog I follow because of his work on WordPress/BuddyPress in learning, a focus I share).

Setting up a Staff Directory using WordPress Custom Post Types and Plugins | The Corkboard.

As I discussed in the comments to this post, it contained almost everything I needed to make this work. But the two problems Jones mentioned were major hurdles, for me.

After reading that post, though, I decided to investigate further. I eventually got some material which helped me a bit, but it still wasn’t sufficient. Until tonight, I kept running into obstacles which made the process quite difficult.

Then, while trying to solve a problem I was having with Jones’s code, I stumbled upon the following:

Rock-Solid WordPress 3.0 Themes using Custom Post Types | Blancer.com Tutorials and projects.

This post was useful enough that I created a shortlink for it, so I could have it on my iPad and follow along: http://bit.ly/RockSolidCustomWP

By itself, it might not have been sufficient for me to really understand the whole process. But, following that tutorial, I replaced the first bits of code with the neat plugins mentioned by Jones in his own tutorial: More Types, More Taxonomies, and More Fields.

I’ve now played with this a few times and can provide an actual tutorial. I’m doing the whole thing “from scratch” and will write down all the steps.

This is with the WordPress 3.0 blogging software installed on a Bluehost account. (The WordPress.com blogging service doesn’t support custom post types.) I use the default Twenty Ten theme as a parent theme.

Since I use WordPress Multisite, I’m creating a new test blog (in Super Admin->Sites, “Add New”). Of course, this wasn’t required, but it helps me make sure the process is reproducible.

Since I already installed the three “More Plugins” (but they’re not “network activated”), I go to the Plugins menu to activate each of them.

I can now create the new “Product” type, based on that Blancer tutorial. To do so, I go to the “More Types” Settings menu, I click on “Add New Post Type,” and I fill in the post type names (singular and plural) and enable the thumbnail feature. Other options keep their default values.

I also set the “Permalink base” in Advanced settings. Not sure it’s required but it seems to make sense.

I click on the “Save” button at the bottom of the page (forgot to do this, the last time).
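(For the curious: as far as I can tell, “More Types” is doing roughly what a theme or plugin would do by calling WordPress’s register_post_type() function directly. Here’s a minimal sketch; the “product”/“products” names and the thumbnail support come from this walkthrough, but the other arguments are my assumptions, not the plugin’s actual output.)

<?php
// Rough code equivalent of the "More Types" settings used above.
// The names and thumbnail support match this tutorial; the other
// arguments are assumed defaults, not the plugin's actual output.
add_action('init', 'prodir_register_product_type');
function prodir_register_product_type() {
	register_post_type('product', array(
		'labels' => array(
			'name' => 'Products',
			'singular_name' => 'Product',
		),
		'public' => true,
		'rewrite' => array('slug' => 'products'), // the "Permalink base"
		'supports' => array('title', 'editor', 'thumbnail'),
	));
}
?>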

I then go to the “More Fields” settings menu to create a custom box for the post editing interface.

I add the box title and change the “Use with post types” options (no use in having this in posts).

(Didn’t forget to click “save,” this time!)

I can now add the “Price” field. To do so, I need to click on the “Edit” link next to the “Product Options” box I just created and then click “Add New Field.”

I add the “Field title” and “Custom field key”:

I set the “Field type” to Number.

I also set the slug for this field.
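(As far as I understand it, “More Fields” stores this value as ordinary post meta under the “price” key, which is why the template code further down can read it back with get_post_custom(). A quick illustration, using a made-up post ID:)

<?php
// What the "Price" field boils down to: ordinary post meta stored
// under the "price" key. The post ID (42) is purely hypothetical.
update_post_meta(42, 'price', '19.99');

// And this is how the theme code below reads the value back:
$custom = get_post_custom(42);
echo $custom['price'][0]; // prints: 19.99
?>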

I then go to the “More Taxonomies” settings menu to add a new product classification.

I click “Add New Taxonomy,” and fill in taxonomy names, allow permalinks, add slug, and show tag cloud.

I also specify that this taxonomy is only used for the “Product” type.

(Save!)
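(Again, a hedged sketch of what I believe “More Taxonomies” is doing behind the scenes, through register_taxonomy(). The “catalog” name and its attachment to the “Product” type come from this tutorial, and it’s hierarchical because I choose parents for some terms later on; the rest is assumption.)

<?php
// Rough code equivalent of the "More Taxonomies" settings above.
add_action('init', 'prodir_register_catalog_taxonomy');
function prodir_register_catalog_taxonomy() {
	register_taxonomy('catalog', 'product', array(
		'labels' => array(
			'name' => 'Catalogs',
			'singular_name' => 'Catalog',
		),
		'hierarchical' => true, // lets terms have parents
		'rewrite' => array('slug' => 'catalog'),
	));
}
?>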

Now, the rest is more directly taken from the Blancer tutorial. But instead of copy-paste, I added the files directly to a Twenty Ten child theme. The files are available in this archive.

Here’s the style.css code:

/*
Theme Name: Product Directory
Theme URI: http://enkerli.com/
Description: A product directory child theme based on Kyle Jones, Blancer, and Twenty Ten
Author: Alexandre Enkerli
Version: 0.1
Template: twentyten
*/
@import url("../twentyten/style.css");

The code for functions.php:

<?php
/**
 * ProductDir functions and definitions
 *
 * @package WordPress
 * @subpackage Product_Directory
 * @since Product Directory 0.1
 */

/* Custom Columns */
// Define the columns shown in the Products management interface.
add_filter("manage_edit-product_columns", "prod_edit_columns");
add_action("manage_posts_custom_column", "prod_custom_columns");

function prod_edit_columns($columns){
		$columns = array(
			"cb" => "<input type=\"checkbox\" />",
			"title" => "Product Title",
			"description" => "Description",
			"price" => "Price",
			"catalog" => "Catalog",
		);

		return $columns;
}

// Output the content of each custom column for the current product.
function prod_custom_columns($column){
		global $post;
		switch ($column)
		{
			case "description":
				the_excerpt(); // the excerpt doubles as the description
				break;
			case "price":
				$custom = get_post_custom();
				echo $custom["price"][0]; // the "price" custom field
				break;
			case "catalog":
				echo get_the_term_list($post->ID, 'catalog', '', ', ', '');
				break;
		}
}
?>

And the code in single-product.php:

<?php
/**
 * Template Name: Product - Single
 * The Template for displaying all single products.
 *
 * @package WordPress
 * @subpackage Product_Dir
 * @since Product Directory 1.0
 */
get_header(); ?>
<div id="container">
<div id="content">
<?php the_post(); ?>

<?php
	// Retrieve the "price" custom field for this product.
	$custom = get_post_custom($post->ID);
	$price = "$" . $custom["price"][0];
?>
<div id="post-<?php the_ID(); ?>">
<h1 class="entry-title"><?php the_title(); ?> - <?php echo $price; ?></h1>
<div class="entry-meta">
<div class="entry-content">
<div style="width: 30%; float: left;">
			<?php the_post_thumbnail( array(100,100) ); ?>
			<?php the_content(); ?></div>
<div style="width: 10%; float: right;">
			Price
			<?php echo $price; ?></div>
</div><!-- .entry-content -->
</div><!-- .entry-meta -->
</div><!-- #post -->
</div><!-- #content -->
</div><!-- #container -->

<?php get_footer(); ?>

That’s it!

Well, almost…

One thing is that I have to activate my new child theme.

So, I go to the “Themes” Super Admin menu and enable the Product Directory theme (this step isn’t needed with single-site WordPress).

I then activate the theme in Appearance->Themes (in my case, on the second page).

One thing I’ve learnt the hard way is that the permalink structure may not work if I don’t go and “nudge it.” So I go to the “Permalinks” Settings menu:

And I click on “Save Changes” without changing anything. (I know, it’s counterintuitive. And it’s even possible that it could work without this step. But I spent enough time scratching my head about this one that I find it important.)
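(If I understand correctly, the “nudge” works because saving the Permalinks screen makes WordPress flush its rewrite rules, which is how it learns about the new “products” permalink base. Code that registers post types can do the same thing with flush_rewrite_rules(); a sketch, assuming the hypothetical registration function from the earlier example lives in a plugin file:)

<?php
// Programmatic equivalent of re-saving the Permalinks screen.
// Flushing on every page load would be wasteful, so it's usually
// hooked to plugin activation. Assumes the (hypothetical)
// prodir_register_product_type() from the earlier sketch.
register_activation_hook(__FILE__, 'prodir_flush_rewrites');
function prodir_flush_rewrites() {
	prodir_register_product_type(); // register the type first
	flush_rewrite_rules();          // then rebuild the permalinks
}
?>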

Now, I’m done. I can create new product posts by clicking on the “Add New” Products menu.

I can then fill in the product details, using the main WYSIWYG box as a description, the “price” field as a price, the “featured image” as the product image, and a taxonomy as a classification (by clicking “Add new” for any tag I want to add, and choosing a parent for some of them).

Now, in the product management interface (available in Products->Products), I can see the proper columns.

Here’s what the product page looks like:

And I’ve accomplished my mission.

The whole process can be achieved rather quickly, once you know what you’re doing. As I’ve been told (by the ever-so-helpful Justin Tadlock of Theme Hybrid fame, among other things), it’s important to get the data down first. While I agree with the statement and its implications, I needed to understand how to build these things from start to finish.

In fact, getting the data right is made relatively easy by my background as an ethnographer with a strong interest in cognitive anthropology, ethnosemantics, folk taxonomies (aka “folksonomies”), ethnography of communication, and ethnoscience. In other words, “getting the data” is part of my expertise.

The more technical aspects, however, were a bit difficult. I understood most of the principles and I could trace several puzzle pieces, but there’s a fair deal I didn’t know or hadn’t done myself. Putting together bits and pieces from diverse tutorials and posts didn’t work so well because it wasn’t always clear what went where or what had to remain unchanged in the code. I struggled with many details, such as the fact that Kyle Jones’s code for custom columns wasn’t working, first because it was incorrectly copied, then because I was using it on a post type which was “officially” based on pages (instead of posts). Having forgotten the part about “touching” the Permalinks settings, I was unable to get a satisfying output using Jones’s explanations (the fact that he doesn’t use titles didn’t really help me, in this specific case). So it was much harder for me to figure out how to do this than it now is for me to build content directories.

I still have some technical issues to face. Some are near-essential, such as a way to create archive templates for custom post types. Other issues have to do with features I’d like my content directories to have, such as clearly defined roles (the “More Plugins” support roles, but I still need to find out how to define them in WordPress). Yet other issues are likely to come up as I start building content directories, install them in specific contexts, teach people how to use them, observe how they’re being used and, most importantly, get feedback about their use.

But I’m past a certain point in my self-learning journey. I’ve built my confidence (an important but often dismissed component of gaining expertise and experience). I found proper resources. I understood what components were minimally necessary or required. I succeeded in implementing the system and testing it. And I’ve written enough about the whole process that things are even clearer for me.

And, who knows, I may get feedback, questions, or advice…

Transparency and Secrecy

[Started working on this post on December 1st, based on something which happened a few days prior. Since then, several things happened which also connected to this post. Thought the timing was right to revisit the entry and finally publish it. Especially since a friend just teased me for not blogging in a while.]

I’m such a strong advocate of transparency that I have a real problem with secrecy.

I know, transparency is not exactly the mirror opposite of secrecy. But I think my transparency-radical perspective causes some problem in terms of secrecy-management.

“Haven’t you been working with a secret society in Mali?,” you ask. Well, yes, I have. And secrecy hasn’t been a problem in that context because it’s codified. Instead of a notion of “absolute secrecy,” the Malian donsow I’ve been working with have a subtle, nuanced, complex, layered, contextually realistic, elaborate, and fascinating perspective on how knowledge is processed, “transmitted,” managed. In fact, my dissertation research had a lot to do with this form of knowledge management. The term “knowledge people” (“karamoko,” from kalan+mogo=learning+people) truly applies to members of hunter’s associations in Mali as well as to other local experts. These people make a clear difference between knowledge and information. And I can readily relate to their approach. Maybe I’ve “gone native,” but it’s more likely that I was already in that mode before I ever went to Mali (almost 11 years ago).

Of course, a high value for transparency is a hallmark of academia. The notion that “information wants to be free” makes more sense from an academic perspective than from one focused on a currency-based economy. Even when people are clear that “free” stands for “freedom”/«libre» and not for “gratis”/«gratuit» (i.e. “free as in speech, not free as in beer”), there persists a notion that “free comes at a cost” among those people who are so focused on growth and profit. IMHO, most of the issues with the switch to “immaterial economies” (“information economy,” “attention economy,” “digital economy”) have to do with this clash between the value of knowledge and a strict sense of “property value.”

But I digress.

Or, do I…?

The phrase “radical transparency” has been used in business circles related to “information and communication technology,” a context in which the “information wants to be free” stance is almost the basis of a movement.

I’m probably more naïve than most people I have met in Mali. While there, a friend told me that he thought that people from the United States were naïve. While he wasn’t referring to me, I can easily acknowledge that the naïveté he described is probably characteristic of my own attitude. I’m North American enough to accept this.

My dedication to transparency was tested by an apparently banal set of circumstances, a few days before I drafted this post. I was given, in public, information which could potentially be harmful if revealed to a certain person. The harm which could be done is relatively small. The person who gave me that information wasn’t overstating it. The effects of my sharing this information wouldn’t be tragic. But I was torn between my radical transparency stance and my desire to do as little harm as humanly possible. So I refrained from sharing this information and decided to write this post instead.

And this post has been sitting in my “draft box” for a while. I wrote a good number of entries in the meantime but I still had this one at the back of my mind. On the backburner. This is where social media becomes something more of a way of life than an activity. Even when I don’t do anything on this blog, I think about it quite a bit.

As mentioned in the preamble, a number of things have happened since I drafted this post which also relate to transparency and secrecy. Including both professional and personal occurrences. Some of these comfort me in my radical transparency position while others help me manage secrecy in a thoughtful way.

On the professional front, first. I’ve recently signed a freelance ethnography contract with Toronto-based consultancy firm Idea Couture. The contract included a non-disclosure agreement (NDA). Even before signing the contract/NDA, I was asking fellow ethnographer and blogger Morgan Gerard about disclosure. Thanks to him, I now know that I can already disclose several things about this contract and that, once the results are public, I’ll be able to talk about this freely. Which all comforts me on a very deep level. This is precisely the kind of information and knowledge management I can relate to. The level of secrecy is easily understandable (inopportune disclosure could be detrimental to the client). My commitment to transparency is unwavering. If all contracts are like this, I’ll be quite happy to be a freelance ethnographer. It may not be my only job (I already know that I’ll be teaching online, again). But it already fits in my personal approach to information, knowledge, insight.

I’ll surely blog about private-sector ethnography. At this point, I’ve mostly been preparing through reading material in the field and discussing things with friends or colleagues. I was probably even more careful than I needed to be, but I was still able to exchange ideas about market research ethnography with people in diverse fields. I sincerely think that these exchanges not only add value to my current work for Idea Couture but position me quite well for the future. I really am preparing for freelance ethnography. I’m already thinking like a freelance ethnographer.

There’s a surprising degree of “cohesiveness” in my life, these days. Or, at least, I perceive my life as “making sense.”

And different things have made me say that 2009 would be my year. I get additional evidence of this on a regular basis.

Which brings me to personal issues, still about transparency and secrecy.

Something has happened in my personal life, recently, that I’m currently unable to share. It’s a happy circumstance and I’ll be sharing it later, but it’s semi-secret for now.

Thing is, though, transparency was involved in that my dedication to radical transparency has already been paying off in these personal respects. More specifically, my being transparent has been valued rather highly and there’s something about this type of validation which touches me deeply.

As can probably be noticed, I’m also becoming more public about some emotional dimensions of my life. As an artist and a humanist, I’ve always been a sensitive person, in tune with his emotions. Especially positive ones. I now feel accepted as a sensitive person, even if several people in my life tend to push sensitivity to the side. In other words, I’ve grown a lot in the past several months and I now want to share my growth with others. Despite reluctance toward the “touchy-feely,” especially in geek and other male-centric circles, I’ve decided to “let it all loose.” I fully respect those who dislike this. But I need to be myself.

Quest for Expertise

Will at Work Learning: People remember 10%, 20%…Oh Really?.

This post was mentioned on the mailing-list for the Society for Teaching and Learning in Higher Education (STLHE-L).

In that post, Will Thalheimer traces back a well-known claim about learning to shoddy citations. While it doesn’t invalidate the base claim (that people tend to retain more information through certain cognitive processes), Thalheimer does a good job of showing how a graph which has frequently been seen in educational fields was based on faulty interpretation of work by prominent scholars, mixed with some results from other sources.

Quite interesting. IMHO, demystification and critical thinking are among the most important things we can do in academia. In fact, through training in folkloristics, I have become quite accustomed to this specific type of debunking.

I have in mind a somewhat similar claim that I’m currently trying to trace. Preliminary searches seem to imply that citations of original statements have a similar hyperbolic effect on the status of this claim.

The claim functions as a type of “rule of thumb” in cognitive science. A generic version could be stated in the following way:

It takes ten years or 10,000 hours to become an expert in any field.

The claim is a rather famous one from cognitive science. I’ve heard it uttered by colleagues with a background in cognitive science. In 2006, I first heard about such a claim from Philip E. Ross, on an episode of Scientific American‘s Science Talk podcast discussing his article on expertise. I later read a similar claim in Daniel Levitin’s 2006 This Is Your Brain On Music. The clearest statement I could find back in Levitin’s book is the following (p. 193):

The emerging picture from such studies is that ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert – in anything.

More recently, during a keynote speech he was giving as part of his latest book tour, I heard a similar claim from presenter extraordinaire Malcolm Gladwell. AFAICT, this claim is at the centre of Gladwell’s recent book: Outliers: The Story of Success. In fact, it seems that Gladwell uses the same quote from Levitin, on page 40 of Outliers (I just found that out).

I would like to pinpoint the origin for the claim. Contrary to Thalheimer’s debunking, I don’t expect that my search will show that the claim is inaccurate. But I do suspect that the “rule of thumb” versions may be a bit misleading. I already notice that most people who set up such claims are doing so without direct reference to the primary literature. This latter comment isn’t damning: in informal contexts, constant referral to primary sources can be extremely cumbersome. But it could still be useful to clear up the issue. Who made this original claim?

I’ve tried a few things already but it’s not working so well. I’m collecting a lot of references, to both online and printed material. Apart from Levitin’s book and a few online comments, I haven’t yet read the material. Eventually, I’d probably like to find a good reference on the cognitive basis for expertise which puts this “rule of thumb” in context and provides more elaborate data on different things which can be done during that extensive “time on task” (including possible skill transfer).

But I should proceed somewhat methodically. This blogpost is but a preliminary step in this process.

Since Philip E. Ross is the first person on record I heard talk about this claim, a logical first step for me is to look through this SciAm article. Doing some text searches on the printable version of his piece, I find a few interesting things including the following (on page 4 of the standard version):

Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field.

Apart from the ten thousand (10,000) hours part of the claim, this is about as clear a statement as I’m looking for. The “Simon” in question is Herbert A. Simon, who did research on chess at the Department of Psychology at Carnegie-Mellon University with colleague William G. Chase. So I dig for diverse combinations of “Herbert Simon,” “ten(10)-year rule,” “William Chase,” “expert(ise),” and/or “chess.” I eventually find two primary texts by those two authors, both from 1973: (Chase and Simon, 1973a) and (Chase and Simon, 1973b).

The first (1973a) is an article from Cognitive Psychology 4(1): 55-81, available for download on ScienceDirect (toll access). Through text searches for obvious words like “hour*,” “year*,” “time,” or even “ten,” it seems that this article doesn’t include any specific statement about the amount of time required to become an expert. The quote which appears to be the most relevant is the following:

Behind this perceptual analysis, as with all skills (cf., Fitts & Posner, 1967), lies an extensive cognitive apparatus amassed through years of constant practice.

While it does relate to the notion that there’s a cognitive basis to practice, the statement is generic enough to be far from the “rule of thumb.”

The second Chase and Simon reference (1973b) is a chapter entitled “The Mind’s Eye in Chess” (pp. 215-281) in the proceedings of the Eighth Carnegie Symposium on Cognition as edited by William Chase and published by Academic Press under the title Visual Information Processing. I borrowed a copy of those proceedings from Concordia and have been scanning that chapter visually for some statements about the “time on task.” Though that symposium occurred in 1972 (before the first Chase and Simon reference was published), the proceedings were apparently published after the issue of Cognitive Psychology since the authors mention that article for background information.

I do find some interesting quotes, but nothing that specific:

By a rough estimate, the amount of time each player has spent playing chess, studying chess, and otherwise staring at chess positions is perhaps 10,000 to 50,000 hours for the Master; 1,000 to 5,000 hours for the Class A player; and less than 100 hours for the beginner. (Chase and Simon 1973b: 219)

or:

The organization of the Master’s elaborate repertoire of information takes thousands of hours to build up, and the same is true of any skilled task (e.g., football, music). That is why practice is the major independent variable in the acquisition of skill. (Chase and Simon 1973b: 279, emphasis in the original, last sentences in the text)

Maybe I haven’t scanned these texts properly, but the quotes I did find seem to imply that Simon hadn’t really devised his “10-year rule” in a clear, numeric version.

I could probably dig for more Herbert Simon wisdom. Before looking (however cursorily) at those 1973 texts, I was using Herbert Simon as a key figure in the origin of that “rule of thumb.” To back up those statements, I should probably dig deeper in the Herbert Simon archives. But that might require more work than is necessary and it might be useful to dig through other sources.

In my personal case, the other main written source for this “rule of thumb” is Dan Levitin. So, using online versions of his book, I look for comments about expertise. (I do own a copy of the book and I’m assuming the Index contains page numbers for references on expertise. But online searches are more efficient and possibly more thorough on specific keywords.) That’s how I found the statement, quoted above. I’m sure it’s the one which was sticking in my head and, as I found out tonight, it’s the one Gladwell used in his first statement on expertise in Outliers.

So, where did Levitin get this? I could possibly ask him (we’ve been in touch and he happens to be local) but looking for those references might require work on his part. A preliminary step would be to look through Levitin’s published references for Your Brain On Music.

Though Levitin is a McGill professor, Your Brain On Music doesn’t follow the typical practice in English-speaking academia of ladling copious citations onto any claim, even the most truistic statements. Nothing strange in this difference in citation practice. After all, as Levitin explains in his Bibliographic Notes:

This book was written for the non-specialist and not for my colleagues, and so I have tried to simplify topics without oversimplifying them.

In this context, academic-style citation-fests would make the book too heavy. Levitin does, however, provide those “Bibliographic Notes” at the end of his book and on the website for the same book. In the Bibliographic Notes of that site, Levitin adds a statement I find quite interesting in my quest for “sources of claims”:

Because I wrote this book for the general reader, I want to emphasize that there are no new ideas presented in this book, no ideas that have not already been presented in scientific and scholarly journals as listed below.

So, it sounds like going through those references is a good strategy to locate at least solid references on that specific “10,000 hour” claim. Among relevant references on the cognitive basis of expertise (in Chapter 7), I notice the following texts which might include specific statements about the “time on task” to become an expert. (An advantage of the Web version of these bibliographic notes is that Levitin provides some comments on most references; I put Levitin’s comments in parentheses.)

  • Chi, Michelene T.H., Robert Glaser, and Marshall J. Farr, eds. 1988. The Nature of Expertise. Hillsdale, New Jersey: Lawrence Erlbaum Associates. (Psychological studies of expertise, including chess players)
  • Ericsson, K. A., and J. Smith, eds. 1991. Toward a General Theory of Expertise: prospects and limits. New York: Cambridge University Press. (Psychological studies of expertise, including chess players)
  • Hayes, J. R. 1985. Three problems in teaching general skills. In Thinking and Learning Skills: Research and Open Questions, edited by S. F. Chipman, J. W. Segal and R. Glaser. Hillsdale, NJ: Erlbaum. (Source for the study of Mozart’s early works not being highly regarded, and refutation that Mozart didn’t need 10,000 hours like everyone else to become an expert.)
  • Howe, M. J. A., J. W. Davidson, and J. A. Sloboda. 1998. Innate talents: Reality or myth? Behavioral & Brain Sciences 21 (3):399-442. (One of my favorite articles, although I don’t agree with everything in it; an overview of the “talent is a myth” viewpoint.)
  • Sloboda, J. A. 1991. Musical expertise. In Toward a general theory of expertise, edited by K. A. Ericcson (sic) and J. Smith. New York: Cambridge University Press. (Overview of issues and findings in musical expertise literature)

I have yet to read any of those references. I did borrow Ericsson and Smith when I first heard about Levitin’s approach to talent and expertise (probably through a radio and/or podcast appearance). But I had put the issue of expertise on the back-burner. It was always at the back of my mind and I did blog about it, back then. But it took Gladwell’s talk to wake me up. What’s funny, though, is that the “time on task” statements in (Ericsson and Smith, 1991) seem to lead back to (Chase and Simon, 1973b).

At this point, I get the impression that the “it takes a decade and/or 10,000 hours to become an expert” claim:

  • was originally proposed as a vague hypothesis a while ago (the year 1899 comes up);
  • became an object of some consideration by cognitive psychologists at the end of the 1960s;
  • became more widely accepted in the 1970s;
  • was tested by Benjamin Bloom and others in the 1980s;
  • was made more precise by Ericsson and others in the late 1980s;
  • gained general popularity in the mid-2000s;
  • is being further popularized by Malcolm Gladwell in late 2008.

Of course, I’ll have to do a fair bit of digging and reading to verify any of this, but it sounds like the broad timeline makes some sense. One thing, though, is that it doesn’t really seem that anybody had the intention of spelling it out as a “rule” or “law” in such a format as is being carried around. If I’m wrong, I’m especially surprised that a clear formulation isn’t easier to find.

As an aside, of sorts… Some people seem to associate the claim with Gladwell, at this point. Not very surprising, given the popularity of his books, the effectiveness of his public presentations, the current context of his book tour, and the reluctance of the general public to dig any deeper than the latest source.

The problem, though, is that it doesn’t seem that Gladwell himself has done anything to “set the record straight.” He does quote Levitin in Outliers, but I heard him reply to questions and comments as if the research behind the “ten years or ten thousand hours” claim had some association with him. From a popular author like Gladwell, it’s not that awkward. But these situations are perfect opportunities for popularizers like Gladwell to get a broader public interested in academia. As Gladwell allegedly cares about “educational success” (as measured on a linear scale), I would have expected more transparency.

Ah, well…

So, I have some work to do on all of this. It will have to wait but this placeholder might be helpful. In fact, I’ll use it to collect some links.


Some relevant blogposts of mine on talent, expertise, effort, and Levitin.

And a whole bunch of weblinks to help me in my future searches (I have yet to really delve in any of this).

Gender and Culture

A friend sent me a link to the following video:

JC Penney: Beware of the Doghouse | Creativity Online.

In that video, a man is “sent to the doghouse” (a kind of prison for insensitive men) because he offered a vacuum cleaner to his wife. It’s part of a marketing campaign through which men are expected to buy diamonds for their wives and girlfriends.

The campaign is quite elaborate and the main website for the campaign makes interesting uses of social media.

For instance, that site makes use of Facebook Connect as a way to tap viewers’ online social network. FC is a relatively new feature (the general release was last week) and few sites have been putting it to the test. In this campaign’s case, a woman can use her Facebook account to connect to her husband or boyfriend and either send him a warning about his insensitivity to her needs (for diamonds) or “put him in the doghouse.” From a social media perspective, it can accurately be described as “neat.”

The site also uses Share This to facilitate the video‘s diffusion through various social media services, from WordPress.com to Diigo. This tends to be an effective strategy to encourage “viral marketing.” (And, yes, I fully realize that I actively contribute to this campaign’s “viral spread.”)

The campaign could be a case study in social marketing.

But, this time, I’m mostly thinking about gender.

Simply put, I think that this campaign would fare rather badly in Quebec because of its use of culturally inappropriate gender stereotypes.

As I write this post, I receive feedback from Swedish ethnomusicologist Maria Ljungdahl who shares some insight about gender stereotypes. As Maria says, the stereotypes in this ad are “global.” But my sense is that these “global stereotypes” are not that compatible with local culture, at least among Québécois (French-speaking Quebeckers).

See, as a Québécois born and raised as a (male) feminist, I tend to be quite gender-conscious. I might even say that my gender awareness may be somewhat above the Québécois average, and gender relationships are frequently used in definitions of Québécois identity.

In Québécois media, advertising campaigns portraying men as naïve and subservient have frequently been discussed. Ten or so years ago, these portrayals were a hot topic (searches for Brault & Martineau, Tim Hortons, and Un gars, une fille should eventually lead to appropriate evidence). Current advertising campaigns seem to me more subtle in terms of male figures, but careful analysis would be warranted as discussions of those portrayals are more infrequent than they have been in the past.

That video and campaign are, to me, very US-specific. Because I spent a significant amount of time in Indiana, Massachusetts, and Texas, my initial reaction while watching the video had more to do with being glad that it wasn’t the typical macrobrewery-style sexist ad. This reaction also has to do with the context for my watching that video, as I was unclear as to the gender perspective of the friend who sent me the link (a male homebrewer from the Midwest currently living in Texas).

By the end of the video, however, I reverted to my Québécois sensibility. I also reacted to the obvious commercialism, partly because one of my students has been working on engagement rings in our material culture course.

But my main issue was with the presumed insensitivity of men.

Granted, part of this is personal. I define myself as a “sweet and tendre man” and I’m quite happy about my degree of sensitivity, which may in fact be slightly higher than average, even among Québécois. But my hunch is that this presumption of male insensitivity may not have very positive effects on the perception of such a campaign. Québécois watching this video may not groan but they may not find it that funny either.

There’s a generational component involved and, partly because of a discussion of writing styles in a generational perspective, I have been thinking about “generations” as a useful model for explaining cultural diversity to non-ethnographers.

See, such perceived generational groups as “Baby Boomers” and “Generation X” need not be defined as monolithic, monadic, bounded entities and they have none of the problems associated with notions of “ethnicity” in the general public. “Generations” aren’t “faraway tribes” nor do they imply complete isolation. Some people may tend to use “generational” labels in such terms that they appear clearly defined (“Baby Boomers are those individuals born between such and such years”). And there is some confusion between this use of “historical generations” and what the concept of “generation” means in, say, the study of kinship systems. But it’s still relatively easy to get people to think about generations in cultural terms: they’re not “different cultures” but they still seem to be “culturally different.”

Going back to gender… The JC Penney marketing campaign visibly lumps together people of different ages. The notion seems to be that doghouse-worthy male insensitivity isn’t age-specific or related to inexperience. The one man who was able to leave the doghouse based on his purchase of diamonds is relatively “age-neutral” as he doesn’t really seem to represent a given age. Because this attempt at crossing age divisions seems so obvious, I would assume that it came in the context of perceived differences in gender relationships. Using the logic of those who perceive the second part of the 20th Century as a period of social emancipation, one might presume that younger men are less insensitive than older men (who were “brought up” in a cultural context which was “still sexist”). If there are wide differences in the degree of sensitivity of men of different ages, a campaign aiming at a broad age range needs to diminish the importance of these differences. “The joke needs to be funny to men of all ages.”

The Quebec context is, I think, different. While we do perceive the second part of the 20th Century (and, especially, the 1970s) as a period of social emancipation (known as the “Quiet Revolution” or «Révolution Tranquille»), the degree of sensitivity to gender issues appears to be relatively level across the population. At a certain point in time, one might have argued that older men were still insensitive (at the same time as divorcées in their forties might have been regarded as very assertive), but it seems difficult to make such a distinction in the current context.

All this to say that the JC Penney commercial is culturally inappropriate for Québécois society? Not quite. Though the example I used was this JC Penney campaign, I’m thinking about broader contexts for Québécois identity (for a variety of personal reasons, including the fact that I have been back in Québec for several months now).

My claim is…

Ethnographic field research would go a long way toward unearthing culturally appropriate categories which might eventually help marketers cater to Québécois.

Of course, the agency which produced that JC Penney ad (Saatchi & Saatchi) was targeting the US market (JC Penney doesn’t have locations in Quebec) and I received the link through a friend in the US. But it was an interesting opportunity for me to think and write about a few issues related to the cultural specificity of gender stereotypes.

Enthused Tech

Yesterday, I held a WiZiQ session on the use of online tech in higher education:

Enthusing Higher Education: Getting Universities and Colleges to Play with Online Tools and Services

Slideshare

(Full multimedia recording available here)

During the session, Nellie Deutsch shared the following link:

Diffusion of Innovations, by Everett Rogers (1995)

I haven’t read Rogers’s book, but it sounds like an accessible version of ideas which have been quite clear in Boasian disciplines (cultural anthropology, folkloristics, cultural ecology…) for a while. Still, in this sometimes obsessive quest for innovation, it might be useful to go back to basic ideas about the social mechanisms observable in the adoption of new tools and techniques. That thinking is in fact behind this relatively recent blogpost of mine:

Technology Adoption and Active Reading

My emphasis during the WiZiQ session was on enthusiasm. I tend to think a lot about occasions in which thinking about the possibilities afforded by technology gets people “psyched up.” In a way, this is exactly how I can define myself as a tech enthusiast: I get easily psyched up in discussions about technology.

What’s funny is that I’m no gadget freak. I don’t care about the tool itself. I just love to dream up possibilities. And I sincerely think that I’m not alone. We might even guess that a similar dream-induced excitement animates true gadget freaks, who must have the latest tool. Early adopters are a big part of geek culture, which is still a small niche.

Because I know I’ll keep on talking about these things on other occasions, I can “leave it at that,” for now.

RERO’s my battle cry.

TBC

Crazy App Idea: Happy Meter

I keep getting ideas for apps I’d like to see on Apple’s App Store for iPod touch and iPhone. This one may sound a bit weird, but I think it could be fun: an app where you can record your mood and optionally broadcast it to friends. It could become rather sophisticated, actually. And I think it could have interesting consequences.

The idea mostly comes from Philippe Lemay, a psychologist friend of mine and fellow PDA fan. Haven’t talked to him in a while but I was just thinking about something he did, a number of years ago (in the mid-1990s). As part of an academic project, Philippe helped develop a PDA-based research program whereby subjects would record different things about their state of mind at intervals during the day. Apart from the neatness of the data gathering technique, this whole concept stayed with me. As a non-psychologist, I personally get the strong impression that recording your moods frequently during the day can actually be a very useful thing to do in terms of mental health.

And I really like the PDA angle. Since I think of the App Store as transforming Apple’s touch devices into full-fledged PDAs, the connection is rather strong between Philippe’s work at that time and the current state of App Store development.

Since that project of Philippe’s, a number of things have been going on which might help refine the “happy meter” concept.

One is that “lifecasting” became rather big, especially among certain groups of Netizens (typically younger people, but also many members of geek culture). Though the lifecasting concept applies mostly to video streams, there are connections with many other trends in online culture. The connection with vidcasting specifically (and podcasting generally) is rather obvious. But there are other connections. For instance, with moblogging, photoblogging, or microblogging. Or even with all the “mood” apps on Facebook.

Speaking of Facebook as a platform, I think it meshes especially well with touch devices.

So, “happy meter” could be part of a broader app which does other things: updating Facebook status, posting tweets, broadcasting location, sending personal blogposts, listing scores in a Brain Age type game, etc.

Yet I think the “happy meter” could be useful on its own, as a way to track your own mood. “Turns out, my mood was improving pretty quickly on that day.” “Sounds like I didn’t let things affect me too much despite all sorts of things I was going through.”

As a mood-tracker, the “happy meter” should be extremely efficient. Because they’re so quick to use, I’m thinking of sliders: one main slider for general mood and different sliders for different moods and emotions. It would also be possible to extend the “entry form” on occasion, when the user wants to record more data about their mental state.
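To make the slider idea a bit more concrete, here’s a minimal sketch of what a single “happy meter” entry might look like as a data structure. Everything in it is hypothetical (the names, the 0.0-to-1.0 slider scale, the optional note standing in for the extended “entry form”), just one way of modelling it in Swift:

```swift
import Foundation

// One recorded "happy meter" entry. Slider values range from 0.0 (low) to 1.0 (high).
struct MoodEntry: Codable {
    let timestamp: Date
    var overallMood: Double           // the main slider
    var emotions: [String: Double]    // extra sliders, e.g. ["calm": 0.8]
    var note: String?                 // free text from the extended "entry form"
}

// A quick entry: main slider plus two emotion sliders, no extended note.
let entry = MoodEntry(
    timestamp: Date(),
    overallMood: 0.7,
    emotions: ["calm": 0.8, "energetic": 0.3],
    note: nil
)
```

Keeping the extra sliders in a dictionary is one way to keep quick entries quick: the main slider is always there, everything else stays optional.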

Of course, everything would be saved automatically and “sent to the cloud” on occasion. There could be a way to selectively broadcast some slider values. The app could conceivably send reminders to the user to update their mood at regular intervals. It could even double as a “break reminder.” Though there are limitations on OS X iPhone in terms of interapplication communication, it’d be even neater if the app were able to record other things happening on the touch device at the same time, such as the music playing or the apps recently used.
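As a rough illustration of that reminder feature, here’s how a recurring “update your mood” prompt could be scheduled with Apple’s UserNotifications framework. (That framework postdates this post by several years; the identifier, the wording, and the two-hour interval are all made up for the sketch.)

```swift
import UserNotifications

// Ask permission once, then schedule a repeating "how's your mood?" reminder.
func scheduleMoodReminder() {
    let center = UNUserNotificationCenter.current()
    center.requestAuthorization(options: [.alert, .sound]) { granted, _ in
        guard granted else { return }

        let content = UNMutableNotificationContent()
        content.title = "Happy Meter"
        content.body = "How's your mood right now?"

        // Fires every two hours; repeating interval triggers must be at least 60 seconds.
        let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 2 * 60 * 60,
                                                        repeats: true)
        let request = UNNotificationRequest(identifier: "moodReminder",
                                            content: content,
                                            trigger: trigger)
        center.add(request)
    }
}
```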

Now, very obviously, there are lots of privacy issues involved. But what social networking services have taught us is that users can have pretty sophisticated notions of privacy management, if they’re given the chance. For instance, adept Facebook users may seem to post just about everything about themselves indiscriminately, but they are often very clear about what they want to “let out,” in context. So, clearly, every type of broadcasting should be controlled by the user. No opt-out here.
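To sketch what “every type of broadcasting controlled by the user” might mean in practice, here’s a hypothetical opt-in filter: nothing is shared unless the user has explicitly turned it on. All names are made up, and MoodEntry is the sketch from above.

```swift
// Per-field sharing preferences. Everything defaults to private: opt-in, not opt-out.
struct BroadcastSettings {
    var shareOverallMood = false
    var sharedEmotions: Set<String> = []
}

// Builds the payload actually sent out, filtered by the user's settings.
func broadcastPayload(for entry: MoodEntry, settings: BroadcastSettings) -> [String: Double] {
    var payload: [String: Double] = [:]
    if settings.shareOverallMood {
        payload["overallMood"] = entry.overallMood
    }
    for (emotion, value) in entry.emotions where settings.sharedEmotions.contains(emotion) {
        payload[emotion] = value
    }
    return payload
}
```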

I know this all sounds crazy. And it all might be a very bad idea. But the thing about letting my mind wander is that it helps me remain happy.