
Body and Tech: My Year in Quantified Self

Though I’m a qual, I started quantifying my self a year ago.

Not Even Started Yet

This post is long. You’ve been warned.

This post is about my experience with the Quantified Self (QuantSelf). As such, it may sound quite enthusiastic, as my perspective on my own selfquantification is optimistic. I do have several issues with the Quantified Self notion generally and with the technology associated with selfquantification. Those issues will have to wait until a future blogpost.

While I realize QuantSelf is broader than fitness/wellness/health tracking, my own selfquantification experience focuses on working with my body to improve my health. My future posts on the Quantified Self will probably address the rest more specifically.

You might notice that I frequently link to the DC Rainmaker site, which is an invaluable source of information and insight on fitness and fitness technology. Honestly, I don’t know how this guy does it. He’s a one-man shop for everything related to sports and fitness gadgets.

Though many QuantSelf devices are already available on the market, very few of them are available in Quebec. On occasion, I think about getting one shipped to someone I know in the US and then picking it up in person, having a friend bring it to Montreal, or getting it reshipped. If there were such a thing as the ideal QuantSelf device, for me, I might do so.

(The title of this post refers to the song Body and Soul, and I perceive something of a broader shift in the mind/body dualism, even leading to post- and transhumanism. But this post is more about my own self.)

Quaint Quant

I can be quite skeptical of quantitative data. Not that quants aren’t adept at telling us very convincing things. But numbers tend to hide many issues, when used improperly. People who are well-versed in quantitative analysis can do fascinating things, leading to genuine insight. But many other people use numbers as a way to “prove” diverse things, sometimes remaining oblivious to methodological and epistemological issues with quantification.

Still, I have been accumulating fairly large amounts of quantitative data about my self. Especially about somatic dimensions of my self.

Started with this a while ago, but it’s really in January 2013 that my Quantified Self ways became a prominent part of my life.

Start Counting

It all started with the Wahoo Fitness fisica key and soft heartrate strap. Bought those years ago (April 2011), after thinking about it for months (December 2010).

Had tried different exercise/workout/fitness regimens over the years, but kept getting worried about possible negative effects. For instance, some of the exercises I’d try in a gym would quickly send my heart racing to the top of my healthy range. Though, in the past, I had been in better shape than people might have surmised, I was in bad enough shape at that point that it was better for me to exercise caution while exercising.

At least, that’s the summary of what happened which might make sense to a number of people. Though I was severely overweight for most of my life, I had long periods of time during which I was able to run up long flights of stairs without getting out of breath. This has changed in the past several years, along with other health issues. The other health issues are much more draining and they may not be related to weight, but weight is the part on which people tend to focus, because it’s so visible. For instance, doctors who meet me for a few minutes, only once, will spend more time talking about weight than about a legitimate health concern I have. It’s easy for me to lose weight, but I wanted to do it in the best possible way. Cavalier attitudes are discouraging.

Habits, Old and New

Something I like about my (in this case not-so-sorry) self is that I can effortlessly train myself into new habits. I’m exactly the opposite of someone who’d get hooked on almost anything. I never smoked or took drugs, so I’ve never had to kick one of those trickiest of habits. But I often stop drinking coffee or alcohol with no issue whatsoever. Case in point: I’m fairly well-known as a coffee geek yet I drank less than two full cups of coffee during the last two months.

Getting new habits is as easy for me as kicking old ones. Not that it’s perfect, of course. I occasionally forget to bring down the lid on the toilet seat. But if I put my mind to something, I can usually follow through. Willpower, intrinsic motivation, and selfdiscipline are among my strengths.

My health is a significant part of this. What I started a year ago is an exercise and fitness habit that I’ve been able to maintain and might keep up for a while, if I decide to do so.

Part of it is a Pilates-infused yoga habit that I brought to my life last January and which became a daily routine in February or March. As is the case with other things, I was able to add this routine after getting encouragement from experts. In this case, yoga and Pilates instructors. Though it may be less impressive than other things I’ve done, this routine has clearly had a tremendous impact on my life.

Spoiler alert: I also took on a workout schedule with an exercise bike. Biked 2015 miles between January 16, 2013 and January 15, 2014.

But I’m getting ahead of myself.

So Close, Yet So Far

Flashback to March, 2011. Long before I really got into QuantSelf.

At the time, I had the motivation to get back into shape, but I had to find a way to do it safely. The fact that I didn’t have access to a family physician played a part in that.

So I got the Wahoo key, a dongle which allows an iOS device to connect to ANT+ equipment, such as heartrate straps (including the one I bought at the same time as the key). Which means that I was able to track my heartrate during exercise using my iPod touch and iPad (I later got an iPhone).

Used that setup on occasion. Including at the gym. Worked fairly well as a way to keep track of my workouts, but I had some difficulty fitting gym workouts in my schedule. Not only does it take a lot of time to go to a gym (even one connected to my office by a tunnel), but my other health issues made it very difficult to do any kind of exercise for several hours after any meal. In fact, those other health issues made most exercise very unpleasant. I understand the notion of pushing your limits, getting out of your comfort zone. I’m fine with some types of discomfort and I don’t feel the need to prove to anyone that I can push my limits. But the kind of discomfort I’m talking about was more discouraging than anything else. For one thing, I wasn’t feeling anything pleasant at any point during or after exercising.

So, although I had some equipment to keep track of my workouts, I wasn’t working out on that regular a basis.

I know, typical, right? But that’s before I really started in QuantSelf.

Baby Steps

In the meantime (November, 2011), I got a Jawbone UP wristband. First generation.

That device was my first real foray into “Quantified Self”, as it’s normally understood. It allowed me to track my steps and my sleep. Something about this felt good. Turns out that, under normal circumstances, my stepcount can be fairly decent, which is in itself encouraging. And engaging with this type of data helped me notice some correlations between my activity and my energy levels. There have been times when I’ve felt like I hadn’t walked much and then noticed that I had been fairly active. And vice-versa. I wasn’t getting into such data that intensely, but I had started accumulating some data on my steps.

Gotta start somewhere.

Sleepwalking

My sleep was more interesting, as I was noticing some difficult nights. An encouraging thing, to me, is that it usually doesn’t take me much time to get to sleep (about 10 minutes, according to the UP). Neat stuff, but not earth-shattering.

Obviously, the UP stopped working. Got refunded, and all, but it was still “a bummer”. My experience with the first generation UP had given me a taste of QuantSelf, but the whole thing was inconclusive.

Feeling Pressure

Fastforward to late December, 2012 and early January, 2013. The holiday break was a very difficult time for me, physically. I was getting all sorts of issues, compounding one another. One of them was a series of intense headaches. I had been getting those on occasion since Summer, 2011. By late 2012, my headaches were becoming more frequent and longer-lasting. On occasion, physicians at walk-in clinics had told me that my headaches probably had to do with blood pressure and they had encouraged me to take my pressure at the pharmacy, once in a while. While my pressure had been normal-to-optimal (110/80) for a large part of my life, it was becoming clear that my blood pressure had increased and was occasionally getting into more dangerous territory. So I eventually decided to buy a bloodpressure monitor.

Which became my first selfquantification method. Since my bloodpressure monitor is a basic no-frills model, it doesn’t sync to anything or send data anywhere. But I started manually tracking my bloodpressure by taking pictures and putting the data in a spreadsheet. Because the monitor often gives me different readings (especially depending on which arm I take them from), I would input the lowest and highest readings from each arm in my spreadsheet.
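
(For the curious, here’s a minimal sketch, in Python rather than a spreadsheet, of what each manual entry amounted to. The filename and the readings are hypothetical, not my actual data.)

    # A minimal sketch (not my actual spreadsheet): each entry keeps the
    # lowest and highest readings from one arm, with a timestamp.
    import csv
    from datetime import datetime

    def log_reading(path, arm, lowest, highest):
        """Append one arm's lowest and highest bloodpressure readings."""
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([datetime.now().isoformat(), arm, lowest, highest])

    # Hypothetical values, in the usual systolic/diastolic notation:
    log_reading("bloodpressure.csv", "left", "132/88", "138/92")
    log_reading("bloodpressure.csv", "right", "128/84", "135/90")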

Tensio

My first bloodpressure reading, that first evening (January 3, 2013), was enough of a concern that a nurse at Quebec’s phone health consultation service recommended that I consult with a physician at yet another walk-in clinic. (Can you tell not having a family physician was an issue? I eventually got one, but that’s another post.) Not that it was an emergency, but it was a good idea to take this seriously.

So, on January 4, 2013, I went to meet Dr. Anthony Rizzuto, a general practitioner at a walk-in clinic in my neighbourhood.

Getting Attention

At the clinic, I was diagnosed with hypertension (high bloodpressure). Though that health issue was less troublesome to me than the rest, it got me the attention of that physician who gave me exactly the right kind of support. Thanks to that doctor, a bit of medication, and all sorts of efforts on my part, that issue was soon under control and I’m clearly out of the woods on this one. I’ve documented the whole thing in my previous blogpost. Summary version of that post (it’s in French, after all): more than extrinsic motivation, the right kind of encouragement can make all the difference in the world. (In all honesty, I already had all the intrinsic motivation I needed. No worries there!)

Really, that bloodpressure issue wasn’t that big of a deal. Sure, it got me a bit worried, especially about risks of getting a stroke. But I had been more worried and discouraged by other health issues, so that bloodpressure issue wasn’t the main thing. The fact that hypertension got me medical attention is the best part, though. Some things I was unable to do on my own. I needed encouragement, of course, but I also needed professional advice. More specifically, I felt that I needed a green light. A license to exercise.

Y’know how, in the US especially, “they” keep saying that you should “consult a physician” before doing strenuous exercise? Y’know, the fine print on exercise programs, fitness tools, and the like? Though I don’t live in the US anymore and we don’t have the same litigation culture here, I took that admonition to heart. So I was hesitant to take on a full fitness/training/exercise routine before I could consult with a physician. I didn’t have a family doctor, so it was difficult.

But, a year ago, I got the medical attention I needed. Since we’re not in the US, questions about whether one can safely undertake exercise are met with some surprise. Still, I was able to get “approval” for doing more exercise. In fact, exercise was part of a solution to the hypertension issue which had brought this (minimal level of) medical attention to my case.

So I got exactly what I needed. A nod from a licensed medical practitioner. “Go ahead.”

Weight, Weight! Don’t Tell Me![1]

Something I got soon after visiting the clinic was a scale. More specifically, I got a Conair WW54C Weight Watchers Body Analysis Digital Precision Scale. I would weigh myself every day (more than once a day, in fact) and write down the readings for total weight, body water percentage, and body fat percentage. As with the bloodpressure monitor, I was doing this by hand, since my scale wasn’t connected in any way to another device or to a network.

Weighing My Options

I eventually bought a second scale, a Starfrit iFit. That one is even more basic than the Weight Watchers scale, as it doesn’t do any “body analysis” beyond weight. But having two scales makes me much more confident about the readings I get. For reasons I don’t fully understand, I keep getting significant discrepancies in my readings. On a given scale, I would weigh myself three times and keep the average. The delta between the highest and lowest readings on that same scale would often be 200g or half a pound. The delta between the two scales can be as much as 500g or over one pound. Unfortunately, these discrepancies aren’t regular: it’s not that one scale is offset from the other by a certain amount. One day, the Weight Watchers has the highest readings; the next, it’s the Starfrit. I try to position myself the same way on each scale every time and I think both of them are on as flat a surface as I can get. But I keep getting different readings. I was writing down averages from both scales in my spreadsheet. As I often weighed myself more than once a day and would get a total of six readings every time, that was a significant amount of time spent on getting the most basic of data.
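
(If you’re wondering what those six readings boiled down to, here’s a rough sketch of the arithmetic, with made-up numbers rather than my actual weights.)

    # A rough sketch of one weigh-in: three readings per scale, keeping the
    # average and noting the spread. All numbers are made up for illustration.
    def summarize(readings_g):
        average = sum(readings_g) / len(readings_g)
        spread = max(readings_g) - min(readings_g)
        return average, spread

    weight_watchers = [92300, 92500, 92400]  # grams, hypothetical
    starfrit = [92800, 92700, 92900]         # grams, hypothetical

    for name, readings in (("Weight Watchers", weight_watchers), ("Starfrit", starfrit)):
        average, spread = summarize(readings)
        print(f"{name}: average {average:.0f} g, spread {spread} g")

    # Cross-scale delta between the two averages (often up to ~500 g):
    delta = abs(summarize(weight_watchers)[0] - summarize(starfrit)[0])
    print(f"Between scales: {delta:.0f} g")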

Food for Thought

At the same time, I started tracking my calorie intake. I had done so in the past, including with the USDA National Nutrient Database on PalmOS devices (along with the Eat Watch app from the Hacker’s Diet). Things have improved quite a bit since that time. Not that tracking calories has become effortless, far from it. It’s still a chore, an ordeal, a pain in the neck, and possibly a relatively bad idea. Still, it’s now easier to input food items in a database which provides extensive nutritional data on most items. Because these databases are partly crowdsourced, it’s possible to add values for items which are specific to Canada, for instance. It’s also become easier to get nutritional values for diverse items online, including meals at restaurant chains. Though I don’t tend to eat at chain restaurants, tracking my calories encouraged me to do so, however insidiously.

But I digress.

Nutritional data also became part of my QuantSelf spreadsheet. Along with data from my bloodpressure monitor and body composition scale, I would copy nutritional values (protein, fat, sodium, carbohydrates…) from a database. At one point, I even started calculating my estimated and actual weightloss in that spreadsheet. Before doing so, I needed to know my calorie expenditure.

Zipping

One of the first things I got besides the bloodpressure monitor and scale(s) was a fitbit Zip. Two months earlier (November, 2012), I had bought a fitbit One. But I lost it. The Zip was less expensive and, though it lacks some of the One’s features (tracking elevation, for instance), it was good enough for my needs at the time.

In fact, I prefer the Zip over the One, mostly because it uses a coin battery, so it doesn’t need to be recharged. I’ve been carrying it for a year and my fitbit profile has some useful data about my activity. Sure, it’s just a “glorified pedometer”. But the glorification is welcome, as regular synchronization over Bluetooth is a very useful feature. My Zip isn’t a big deal, for me. It’s as much a part of my life as my glasses, though I wear it more often (including during my sleep, even if it doesn’t track sleep data).

Stepping UP

I also bought a new Jawbone UP. Yep, despite issues I had with the first generation one. Unfortunately, the UP isn’t really that much more reliable now than it was at the time. But they keep replacing it. A couple of weeks ago, my UP stopped working and I got a replacement. I think it’s the fifth one.

Despite its unreliability, I really like the UP for its sleeptracking and “gentle waking” features. If it hadn’t been for the UP, I probably wouldn’t have realized the importance of sleep as deeply as I have. In other words, the encouragement to sleep more is something I didn’t realize I needed. Plus, it’s really neat to wake up to a gentle buzz, at an appropriate point in my sleep cycle. I probably wouldn’t have gotten the UP just for this, but it’s something I miss every time my UP stops working. And there have been several of those times.

My favourite among UP’s features is one they added, through firmware, after a while (though it might have been in the current UP from the start). It’s the ability to take “smart naps”. Meaning that I can set an alarm to wake me up after a certain time or after I’ve slept a certain amount of time. The way I set it up, I can take a 20 minute nap and I’ll be awakened by the UP after a maximum of 35 minutes. Without this alarm, I’d oversleep and likely feel more messed up after the nap than before. The alarm is also reassuring in that it makes the nap fit neatly into my schedule. I don’t nap everyday, but naps are one of these underrated things I feel could be discussed more. Especially when it comes to heavy work sessions such as writing reports or grading papers. My life might shift radically in the near future and it’s quite possible that naps will be erased from my workweek indefinitely. But chances are that my workweek will also become much more manageable once I stop freelancing.

The UP also notifies me when I’ve been inactive for a certain duration (say, 45 minutes). It only does so a few times a month, on average, because I don’t tend to be that inactive. Exceptions are during long stretches of writing, so it’s a useful reminder to take a break. In fact, the UP just buzzed while I was writing this post so I should go and do my routine.

(It’s fun to write on my iPad while working out. Though I tend to remain in the aerobic/endurance or even the fitness/fatburning zone, I should still reach mile 2100 during this workout.)

Unlike the fitbit Zip, the UP does require a charge on a regular basis. In fact, it seems that the battery is a large part of the reliability issue. So, after a while, I got into the habit of plugging my UP into the wall during my daily yoga/Pilates routine. My routine usually takes over half an hour and the UP is usually charged after 20 minutes.

Back UP

It may seem strange to have two activity trackers with complete feature overlap (there’s nothing the fitbit Zip does that the Jawbone UP doesn’t do). I probably wouldn’t have planned it this way, had I been able to get a Jawbone UP right at first. If I were to do it now, I might get a different device from either fitbit or Jawbone (the Nike+ FuelBand is offputting, to me).

I do find it useful to have two activity trackers. For one thing, the UP is unreliable enough that the Zip is useful as a backup. The Zip also stopped working once, so there have been six periods during the past year when I only had one fitness tracker. Having two trackers means that there’s no hiatus in my tracking, which has a significant impact on the routine aspect of selfquantification. Chances are slim that I would have completely given up on QuantSelf during such a hiatus. But I would probably have been less encouraged by selfquantification had I been forced to depend on one device.

Having two devices also helps me get a more accurate picture of my activities. Though the Zip and UP allegedly track the same steps, there’s usually some discrepancy between the two, as is fairly common among activity trackers. For some reason, the discrepancy has actually decreased after a few months (and after I adapted my UP usage to my workout). But it’s useful to have two sources of data points.

Especially when I do an actual workout.

Been Working Out, Haven’t You?

In January, last year, I also bought an exercise bike, for use in my apartment. I know, sounds like a cliché, right? Getting an exercise bike after New Year? Well, it wasn’t a New Year’s resolution but, had it been one, I could be proud to say I kept it (my hypothetical resolution; I know, weird structure; you get what I mean, right?).

Right away, I started doing bike workouts on a very regular basis. From three to five times a week, during most weeks. Unlike going to a gym, exercising at home is easy to fit into my freelancing schedule. I almost always work out before breakfast, so there’s no digestion issue involved. Since I’m by myself, it means I feel no pressure or judgment from others, a very significant factor in my case. Though I’m an extrovert’s extrovert (86th percentile), gyms are really offputting, to me. Because of my bodyshape, age, and overall appearance, I really feel like I don’t “fit”. It does depend on the gym, and I had a fairly good time at UMoncton’s Ceps back in 2003. But ConU’s gym wasn’t a place where I enjoyed working out.

My home workouts have become a fun part of my week. Not that the effort level is low, but I often do different things while working out, including listening to podcasts and music, reading, and even writing. As many people know, music can be very encouraging during a workout. So can a podcast, as it takes your attention elsewhere and you might accomplish more than you thought. Same thing with reading and writing, and I wrote part of this post while working out.

Sure, I could do most of this in a gym. The convenience factor at home is just too high, though. I can have as many things as I want by my side, on a table and on a chair, so I just have to reach out when I need any of them. Apart from headphones, a music playing device, and a towel (all things I’d have at a gym), I typically have the following items with me when I do a home workout:

  • Travel mug full of tea
  • Stainless steel water bottle full of herbal tea (proper tea is theft)
  • Brita bottle full of water (I do drink a lot of fluid while working out)
  • Three mobile devices (iPhone, iPad, Nexus 7)
  • Small weights
  • Reading glasses
  • Squeeze balls

Wouldn’t be so easy to bring all of that to a gym. Not to mention that I can wear whatever I want, listen to whatever I want, and make whatever noise I want (I occasionally yell beats to music, as it’s fun and encouraging). I know some athletic people prefer gym workouts over home ones. I’m not athletic. And I know what I prefer.

On Track

Since this post is nominally about QuantSelf, how do I track my workouts, you ask? Well, it turns out that my Zip and UP do help me track them, though in different ways. To get the UP to track my bike workouts, I have to put it around one of my pedals, a trick which took me a while to figure out.


The Zip tracks my workouts from its usual position but it often counts way fewer “steps” than the UP does. So that’s one level of tracking. My workouts are part of my stepcounts for the days during which I do them.

Putting My Heart into IT

More importantly, though, my bike workouts have made my heartrate strap very useful. By pairing the strap with Digifit’s iBiker app, I get continuous heartrate monitoring, with full heartrate chart, notifications about “zones” (such as “fat burning”, “aerobic”, and “anaerobic”), and a recovery mode which lets me know how quickly my heartrate decreases after the workout. (I could also control the music app, but I tend to listen to Rdio instead.) The main reason I chose iBiker is that it works natively on the iPad. Early on, I decided I’d use my iPad to track my workouts because the battery lasts longer than on an iPhone or iPod touch, and the large display accommodates more information. The charts iBiker produces are quite neat and all the data can be synced to Digifit’s cloud service, which also syncs with my account on the fitbit cloud service (notice how everything has a cloud service?).


Heartrate monitoring is close to essential, for workouts. Sure, it’d be possible to do exercise without it. But the difference it makes is more significant than one might imagine. It’s one of those things that one may only grok after trying it. Since I’m able to monitor my heartrate in realtime, I’m able to pace myself, to manage my exertion. By looking at the chart in realtime, I can tell how long I’ve spent at which intensity level and can decide to remain in a “zone” for as long as I want. Continuous feedback means that I can experiment with adjustments to the workout’s effort level, by pedaling faster or increasing tension. It’s also encouraging to notice that I need increasing intensity levels to reach higher heartrates, since my physical condition has been improving tremendously over the past year. I really value any encouragement I can get.
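
(For readers who haven’t played with heartrate zones, here’s a minimal sketch of the idea. These are generic thresholds based on the common, and imperfect, “220 minus age” estimate of maximum heartrate, not Digifit’s actual zone definitions.)

    # A generic sketch of heartrate zones, NOT Digifit's actual definitions.
    # Assumes the common (and imperfect) "220 minus age" estimate of max heartrate.
    def max_heartrate(age):
        return 220 - age

    ZONES = [  # (upper bound as a fraction of max heartrate, label)
        (0.60, "Zone 0: warmup/cooldown"),
        (0.70, "Zone 1: fitness/fatburning"),
        (0.80, "Zone 2: aerobic/endurance"),
        (0.90, "Zone 3: anaerobic"),
        (1.00, "Zone 4: maximal"),
    ]

    def classify(bpm, age):
        fraction = bpm / max_heartrate(age)
        for upper_bound, label in ZONES:
            if fraction <= upper_bound:
                return label
        return "above estimated maximum"

    # e.g. a hypothetical 40-year-old pedaling at 130 bpm:
    print(classify(130, 40))  # Zone 2: aerobic/endurance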

Now, I know it’s possible to get continuous heartrate monitoring on gym equipment. But I’ve noticed in the past that this monitoring wasn’t that reliable as I would often lose the heartrate signal, probably because of perspiration. On equipment I’ve tried, it wasn’t possible to see a graphical representation of my heartrate through the whole workout so, although I knew my current heartrate, I couldn’t really tell how long I was maintaining it. Not to mention that it wasn’t possible to sync that data to anything. Even though some of that equipment can allegedly be used with a special key to transfer data to a computer, that key wasn’t available.

It’d also be possible to do continuous heartrate monitoring with a “fitness watch”. A big issue with most of these is that data cannot be transferred to another device. Several of the new “wearable devices” do add this functionality. But these devices are quite expensive and, as far as I can see in most in-depth reviews, not necessarily that reliable. Besides, their displays are so small that it’d be impossible to get as complete a heartrate chart as the one I get on iBiker. I got pretty excited about the Neptune Pine, though, and I feel sad I had to cancel my pledge at the very last minute (for financial reasons). Sounds like it can become a rather neat device.

As should be obvious, by now, the bike I got (Marcy Recumbent Mag Cycle ME–709 from Impex) is a no-frills one. It was among the least expensive exercise bikes I’ve seen but it was also one with high ratings on Amazon. It’s as basic as you can get and I’ve been looking into upgrading. But other exercise bikes aren’t that significantly improved over this one. I don’t currently have enough money to buy a highend bike, but money isn’t the only issue. What I’d really like to get is exercise equipment which can be paired with another device, especially a tablet. Have yet to see an exercise bike, rower, treadmill, or elliptical which does. At any price. Sure, I could eventually find ways to hack things together to get more communication between my devices, but that’d be a lot of work for little results. For instance, it might be possible to find a cadence sensor which works on an exercise bike (or tweak one to make it work), thereby giving some indication of pace/speed and distance. However, I doubt that there’s exercise equipment which would allow a tablet to control tension/strength/difficulty. It’d be so neat if it were available. Obviously, it’s far from a requirement. But none of the QuantSelf stuff is required to have a good time while exercising.

Off the Bike

I use iBiker and my heartrate monitor during other activities besides bike workouts. Despite its name, iBiker supports several activity types (including walking and hiking) and has a category for “Other” activities. I occasionally use iBiker on my iPhone when I go on a walk for fitness reasons. Brisk walks do seem to help me in my fitness regime, but I tend to focus on bike workouts instead. I already walk a fair bit and much of that walking is relatively intense, so I feel less of a need to do it as an exercise, these days. And I rarely have my heartrate strap with me when I decide to take a walk. At some point, I had bought a Garmin footpod and kept installing it on shoes I was wearing. I did use it on occasion, including during a trip to Europe (June–July, 2012). It tends to require a bit of time to successfully pair with a mobile app, but it works as advertised. Yet, I haven’t really been quantifying my walks in the same way, so it hasn’t been as useful as I had wished.

More frequently, I use iBiker and my heartrate strap during my yoga/Pilates routine. “Do you really get your heart running fast enough to make it worthwhile?” you ask. No, but that’s kind of the point. Apart from a few peaks, my heartrate charts during such a routine tend to remain in Zone 0, or “Warmup/Cooldown”. The peaks are interesting because they correspond to a few moves and poses which do feel a bit harder (such as pushups or even the plank pose). That, to me, is valuable information and I kind of wish I could see which moves and poses I’ve done for how long using some QuantSelf tool. I even thought about filming myself, but I would then need to label each pose or move by hand, something I’d be very unlikely to do more than once or twice. It sounds like the Atlas might be used in such a way, as it’s supposed to recognize different activities, including custom ones. Not only is it not available yet, but it’s so targeted at the high-performance fitness training niche that I don’t think it could work for me.

One thing I’ve noticed from my iBiker-tracked routine is that my resting heartrate has gone down very significantly. As with my recovery and the amount of effort necessary to increase my heartrate, I interpret this as a positive sign. With other indicators, I could get a fuller picture of my routine’s effectiveness. I mean, I feel its tremendous effectiveness in diverse ways, including sensations I’d have a hard time explaining (such as an “opening of the lungs” and a capacity to kill discomfort in three breaths). The increase in my flexibility is something I could almost measure. But I don’t really have tools to fully quantify my yoga routine. That might be a good thing.

Another situation in which I’ve worn my heartrate strap is… while sleeping. Again, the idea here is clearly not to measure how many calories I burn or to monitor how “strenuous” sleeping can be as an exercise. But it’s interesting to pair the sleep data from my UP with some data from my heart during sleep. Even there, the decrease in my heartrate is quite significant, which signals to me a large improvement in the quality of my sleep. Last summer (July, 2013), I tracked a night during which my average heartrate was actually within Zone 1. More recently (November, 2013), my sleeping heartrate was below my resting heartrate, as it should be.

Using the Wahoo key on those occasions can be quite inconvenient. When I was using it to track my brisk walks, I would frequently lose signal, as the dongle was disconnecting from my iPad or iPhone. For some reason, I would also lose signal while sleeping (though the dongle would remain unmoved).

So I eventually bought a Blue HR, from Wahoo, to replace the key+strap combination. Instead of ANT+, the Blue HR uses Bluetooth LE to connect directly with a phone or tablet, without any need for a dongle. I bought it in part because of the frequent disconnections with the Wahoo key. I rarely had those problems during bike exercises, but I thought having a more reliable signal might encourage me to track my activities. I also thought I might be able to pair the Blue HR with a version of iBiker running on my Nexus 7 (first generation). It doesn’t seem to work and I think the Nexus 7 doesn’t support Bluetooth LE. I was also able to hand down my ANT+ setup (Wahoo key, heartrate strap, and footpod) to someone who might find it useful as a way to track walks. We’ll see how that goes.

‘Figures!

Going back to my QuantSelf spreadsheet. iBiker, Zip, and UP all output counts of burnt calories. Since Digifit iBiker syncs with my fitbit account, I’ve been using the fitbit number.

Inputting that number in the spreadsheet meant that I was able to measure how many more calories I had burnt than I had ingested. That number then allowed me to evaluate how much weight I had lost on a given day. For a while, my average was around 135g, but I had stretches of quicker weightloss (to the point that I was almost scolded by a doctor after losing too much weight in too little time). Something which struck me is that, despite the imprecision of so many things in that spreadsheet, the evaluated weightloss and actual loss of weight were remarkably similar. Not that there was perfect synchronization between the two, as it takes a bit of time to see the results of burning more calories. But I was able to predict my weight with surprising accuracy, and pinpoint patterns in some of the discrepancies. There was a kind of cycle by which the actual number would trail the predicted one, for a few days. My guess is that it had to do with something like water retention and I tried adjusting for both the lowest figure (when I seem to be the least hydrated) and the highest one (when I seem to retain the most water in my body).
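
(As a very rough illustration, not my actual spreadsheet formulas: this kind of estimate leans on the common, and contested, approximation that a deficit of about 7700 kcal corresponds to a kilogram of body fat. The example values are hypothetical.)

    # A rough sketch, NOT my actual spreadsheet formulas. It leans on the common
    # (and contested) approximation of ~7700 kcal per kilogram of body fat.
    KCAL_PER_KG = 7700.0

    def estimated_loss_grams(kcal_burnt, kcal_ingested):
        """Estimate one day's weightloss, in grams, from the caloric deficit."""
        deficit = kcal_burnt - kcal_ingested
        return deficit / KCAL_PER_KG * 1000.0

    # Hypothetical example: burning 3100 kcal while ingesting 2060 kcal gives a
    # deficit of 1040 kcal, or roughly 135 g for that day.
    print(round(estimated_loss_grams(3100, 2060)))  # 135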

Obsessed, Much?


As is clear to almost anyone, this was getting rather obsessive. Which is the reason I’ve used the past tense with many of these statements. I basically don’t use my QuantSelf spreadsheet, anymore. One reason is that (in March, 2013) I was advised by a healthcare professional (a nutrition specialist) to stop counting my calorie intake and focus on eating until I’m satiated while ramping up my exercise a bit (in intensity, while decreasing frequency). It was probably good advice, but it did have a somewhat discouraging effect. I agree that the whole process had become excessive and that it wasn’t really sustainable. But what replaced it was, for a while, not that useful. It’s only in November, 2013 that a nutritionist/dietician was able to give me useful advice to complement what I had been given. My current approach is much better than any other approach I’ve used, in large part because it allows me to control some of my digestive issues.

So stopping the calorie-focused monitoring was a great idea. I eventually stopped updating most columns in my spreadsheet.

What I kept writing down was the set of readings from my two “dumb” scales.

Scaling Up

Abandoning my spreadsheet didn’t imply that I had stopped selfquantifying.

In fact, I stepped up my QuantSelf a bit, about a month ago (December, 2013) by getting a Withings WS–50 Smart Body Analyzer. That WiFi-enabled scale is practically the prototype of QuantSelf and Internet of Things devices. More than I had imagined, it’s “just the thing I needed” in my selfquantified life.

The main advantage it has over my Weight Watchers scale is that it syncs data with my Withings cloud service account. That’s significant because the automated data collection saves me from my obsessive spreadsheet while letting me learn about my weightloss progression. Bingo!

Sure, I could do the same thing by hand, adding my scale readings to any of my other accounts. Not only would it be a chore to do so, but it’d encourage me to dig too deep into those figures. I learnt a lot during my obsessive phase, but I don’t need to go back to that mode. There are many approaches in between that excessive data collection and letting Withings do the work. I don’t even need to explore those intermediate approaches.

There are other things to like about the Withings scale. One is Position Control™, which does seem to increase the accuracy of the measurements. Its weight-tracking graphs (app and Web) are quite reassuring, as they show clear trends between disparate datapoints.

This Withings scale also measures heartrate, something I find less useful given my usage of a continuous heartrate monitor. Finally, it has sensors for air temperature and CO2 levels, which are features I’d expect in a (pre-Google) Nest product.

Though it does measure body fat percentage, the Withings Smart Body Analyzer doesn’t measure water percentage or bone mass, unlike my low-end Weight Watchers body composition scale. Funnily enough, it’s around the time I got the Withings that I finally started gaining enough muscle mass to be able to notice the shift on the Weight Watchers. Prior to that, including during my excessive phase, my body fat and body water percentages added up to a very stable number. I would occasionally notice fluctuations of ~0.1%, but no downward trend. I did notice trends in my overall condition when the body water percentage was a bit higher, but it never went very high. Since late November or early December, those percentages started changing for the first time. My body fat percentage decreased by almost 2%, my body water percentage increased by more than 1%, and the total of the two decreased by 0.6%. Since these percentages are now stable and I have other indicators going in the same direction, I think this improvement in fat vs. water is real and my muscle mass did start to increase a bit (contrary to what a friend said can happen with people our age). It may not sound like much but I’ll take whatever encouragement I get, especially in such a short amount of time.

The Ideal QuantSelf Device

On his podcast, The Talk Show, Gruber has been dismissing the craze for QuantSelf and fitness devices, calling them a solved problem. I know what he means, but I gather his experience differs from mine.

I feel we’re in the “Rio Volt era” of the QuantSelf story.

The Rio Volt was one of the first CD players which could read MP3 files. I got one, at the time, and it was a significant part of my music listening experience. I started ripping many of my CDs and creating fairly large compilations that I could bring with me as I traveled. I had a carrying case for the Volt and about 12 CDs, which means that I could carry about 8GB of music (or about 140 hours at the 128kbps bitrate which was so common at the time). Quite a bit less than my whole CD collection (about 150GB), but a whole lot more than what I was used to. As I was traveling and moving frequently, at the time, the Volt helped me get into rather… excessive music listening habits. Maybe not excessive compared to a contemporary teenager in terms of time, but music listening had become quite important to me, at a time when I wasn’t playing music as frequently as before.

There have been many other music players before, during, and after the Rio Volt. The one which really changed things was probably… the Microsoft Zune? Nah, just kidding. The iRiver players were much cooler (I had an iRiver H–120 which I used as a really neat fieldrecording device). Some people might argue that things really took a turn when Apple released the iPod. Dunno about that, I’m no Apple fanboi.

Regardless of which MP3-playing device was most prominent, it’s probably clear to most people that music players have changed a lot since the days of the Creative Nomad and the Rio Volt. Some of these changes could possibly have been predicted, at the time. But I’m convinced that very few people understood the implications of those changes.

Current QuantSelf devices don’t appear very crude. And they’re certainly quite diverse. CES 2014 was the site of a number of announcements, demos, and releases having to do with QuantSelf, fitness, Internet of Things, and wearable devices (unsurprisingly, DC Rainmaker has a useful two-part roundup). But despite my interest in some of these devices, I really don’t think we’ve reached the real breakthrough yet.

In terms of fitness/wellness/health devices, specifically, I sometimes daydream about features or featuresets. For instance, I really wish a given device would combine the key features of some existing devices, as in the case of body water measurements and the Withings Smart Body Analyzer. A “killer feature”, for me, would be strapless continuous 24/7 heartrate monitoring which could be standalone (keeping the data without transmitting it) yet able to sync data with other devices for display and analysis, and which would work at rest as well as during workouts, underwater as well as in dry contexts.

Some devices (including the Basis B1 and Mio Alpha) seem to come close to this, but they all have little flaws, imperfections, tradeoffs. At an engineering level, it should be an interesting problem so I fully expect that we’ll at least see an incremental evolution from the current state of the market. Some devices measure body temperature and perspiration. These can be useful indicators of activity level and might help one gain insight about other aspects of the physical self. I happen to perspire profusely, so I’d be interested in that kind of data. As is often the case, unexpected usage of such tools could prove very innovative.

How about a device which does some blood analysis, making it easier to gain data on nutrients or cholesterol levels? I often think about the risks of selfdiagnosis and selfmedication. Those issues, related to QuantSelf, will probably come in a future post.

I also daydream about something deeper, though more imprecise. More than a featureset or a “killer feature”, I’m thinking about the potential for QuantSelf as a whole. Yes, I also think about many tricky issues around selfquantification. But I perceive something interesting going on with some of these devices. Some affordances of Quantified Self technology. Including the connections this technology can have with other technologies and domains, including tablets and smartphones, patient-focused medicine, Internet of Things, prosumption, “wearable hubs”, crowdsourced research, 3D printing, postindustrialization, and technological appropriation. These are my issues, in the sense that they’re things about which I care and think. I don’t necessarily mean issues as problems or worries, but things which either give me pause or get me to discuss them with others.

Much of this will come in later posts, I guess. Including a series of issues I have with selfquantification, expanded from some of the things I’ve alluded to here.

Walkthrough

These lines are separated from many of the preceding ones (I don’t write linearly) by a relatively brisk walk from a café to my place. Even without any QuantSelf device, I have quite a bit of data about this walk. For instance, I know it took me 40 minutes because I checked the time before and after. According to Google Maps, it’s between 4.1km and 4.2km from that café to my place, depending on which path one might take (I took an alternative route, but it’s probably close to the Google Maps directions, in terms of distance). It’s also supposed to be a 50-minute walk, so I feel fairly good about my pace (encouraging!). I also know it’s –20°C outside (–28°C with windchill, according to one source). I could probably get some data about elevation, which might be interesting (I’d say about half of that walk was going up).
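
(Even that minimal data supports some back-of-the-envelope arithmetic, sketched here; the 4.15 km figure is simply the midpoint of the Google Maps estimate.)

    # Back-of-the-envelope pace for that walk, from the figures above.
    distance_km = 4.15   # midpoint of the Google Maps estimate (4.1 to 4.2 km)
    duration_min = 40

    speed_kmh = distance_km / (duration_min / 60)  # ~6.2 km/h
    pace_min_per_km = duration_min / distance_km   # ~9.6 min/km

    print(f"{speed_kmh:.1f} km/h, {pace_min_per_km:.1f} min per km")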

With two of my QuantSelf devices (UP and Zip), I get even more data. For instance, I can tell how many steps I took (it looks like it’s close to 5k, but I could get a more precise figure). I can also see the intensity of this activity, as both devices show that I started at a moderate pace followed by an intense pace for most of the duration. These devices also include this walk in measuring calories burnt (2.1Mcal according to UP, 2.7Mcal according to Zip), distance walked (11.2km according to Zip, 12.3km according to UP), active minutes (117’ Zip, 149’ UP), and stepcount (16.4k UP, 15.7k Zip). Not too shabby, considering that it’s still early evening as I write these lines.


Since I didn’t have my heartrate monitor on me and didn’t specifically track this activity, there’s a fair bit of data I don’t have. For instance, I don’t know which part was most strenuous. And I don’t know how quickly I recovered. If I don’t note it down, it’s difficult to compare this activity to other activities. I might remember more or less which streets I took, but I’d need to map it myself. These are all things I could have gotten from a fitness app coupled with my heartrate monitor.

As is the case with cameras, the best QuantSelf device is the one you have with you.

I’m glad I have data about this walk. Chances are I would have taken public transit had it not been for my QuantSelf devices. There weren’t that many people walking across Mont-Royal Park in this weather.

Would I get fitter more efficiently if I had the ideal tool for selfquantification? I doubt it.

Besides, I’m not in that much of a hurry.


  1. Don’t like my puns? Well, it’s my blogpost and I’ll cry if I want to.  ↩

Future of Learning Content

If indeed Apple plans to announce not just more affordable textbook options for students, but also more interactive, immersive ebook experiences…

Forecasting next week’s Apple education event (Dan Moren and Lex Friedman for Macworld)

I’m still in catchup mode (was sick during the break), but it’s hard to let this pass. It’s exactly the kind of thing I like to blog about: wishful thinking and speculation about education. Sometimes, my crazy predictions are fairly accurate. But my pleasure at blogging these things has little to do with the predictions game. I’m no prospectivist. I just like to build wishlists.

In this case, I’ll try to make it short. But I’m having drift-off moments just thinking about the possibilities. I do have a lot to say about this but we’ll see how things go.

Overall, I agree with the three main predictions in that Macworld piece: Apple might come out with eBook creation tools, office software, and desktop reading solutions. I’m interested in all of these and have been thinking about the implications.

That Macworld piece, like most media coverage of textbooks these days, talks about the weight of physical textbooks as a major issue. It’s a common refrain and large bookbags/backpacks have symbolized a key problem with “education”. Moren and Friedman finish up with a zinger about lecturing. Also a common complaint. In fact, I’ve been on the record (for a while) about issues with lecturing. Which is where I think more reflection might help.

For one thing, alternative models to lecturing can imply more than a quip about the entertainment value of teaching. Inside the teaching world, there’s a lot of talk about the notion that teaching is a lot more than providing access to content. There’s a huge difference between reading a book and taking a class. But it sounds like this message isn’t heard and that there’s a lot of misunderstanding about the role of teaching.

It’s quite likely that Apple’s announcement will make things worse.

I don’t like textbooks but I do use them. I’m not the only teacher who dislikes textbooks while still using them. But I feel the need to justify myself. In fact, I’ve been on the record about this. So, in that context, I think improvements in textbooks may distract us from a bigger issue and even lead us in the wrong direction. By focusing even more on content-creation, we’re commodifying education. What’s more, we’re subsuming education under a publishing model. We all know how that’s going. What’s tragic, IMHO, is that textbook publishers themselves are going in the direction of magazines! If, ten years from now, people want to know when we went wrong with textbook publishing, it’ll probably be a good idea for them to trace back from now. In theory, magazine-style textbooks may make a lot of sense to those who perceive learning to be indissociable from content consumption. I personally consider these magazine-style textbooks to be the most egregious of aberrations because, in practice, learning is radically different from content consumption.

So… If, on Thursday, Apple ends up announcing deals with textbook publishers to make it easier for them to, say, create and distribute free ad-supported magazine-style textbooks, I’ll be going through a large range of very negative emotions. Coming out of it, I might perceive a silverlining in the fact that these things can fairly easily be subverted. I like this kind of technological subversion and it makes me quite enthusiastic.

In fact, I’ve had this thought about iAd Producer (Apple’s tool for creating mobile ads). Never tried it but, when I heard about it, it sounded like something which could make it easy to produce interactive content outside of mobile advertising. I don’t think the tool itself is restricted to Apple’s iAd, but I could see how the company might use the same underlying technology to create a content-creation tool.

“But,” you say, “you just said that you think learning isn’t about content.” Quite so. I’m not saying that I think these tools should be the future of learning. But creating interactive content can be part of something wider, which does relate to learning.

The point isn’t that I don’t like content. The point is that I don’t think content should be the exclusive focus of learning. To me, allowing textbook publishers to push more magazine-style content more easily is going in the wrong direction. Allowing diverse people (including learners and teachers) to easily create interactive content might in fact be a step in the right direction. It’s nothing new, but it’s an interesting path.

In fact, despite my dislike of a content emphasis in learning, I’m quite interested in “learning objects”. In fact, I did a presentation about them during the Spirit of Inquiry conference at Concordia, a few years ago (PDF).

A neat (but Flash-based) example of a learning object was introduced to me during that same conference: Mouse Party. The production value is quite high, the learning value seems relatively high, and it’s easily accessible.

But it’s based on Flash.

Which leads me to another part of the issue: formats.

I personally try to avoid Flash as much as possible. While a large number of people have done amazing things with Flash, it’s my sincere (and humble) opinion that Flash’s time has come and gone. I do agree with Steve Jobs on this. Not out of fanboism (I’m no Apple fanboi), not because I have something against Adobe (I don’t), not because I have a vested interest in an alternative technology. I just think that mobile Flash isn’t going anywhere. Even on the desktop, I think Flash-free is the way to go. Never installed Flash on my desktop computer, since I bought it in July. I do run Chrome for the occasional Flash-only video. But Flash isn’t the only video format out there and I almost never come across interesting content which actually relies on something exclusive to Flash. Flash-based standalone apps (like Rdio and Machinarium) are a different issue as Flash was more of a development platform for them and they’re available as Flash-free apps on Apple’s own iOS.

I wouldn’t be surprised if Apple’s announcements had something to do with a platform for interactive content as an alternative to Adobe Flash. In fact, I’d be quite enthusiastic about that. Especially given Apple’s mobile emphasis. We might be getting further in “mobile computing for the rest of us”.

Part of this may be related to HTML5. I was quite enthusiastic when Tumult released its “Hype” HTML5-creation tool. I only used it to create an HTML5 version of my playfulness talk. But I enjoyed it and can see a lot of potential.

Especially in view of interactive content. It’s an old concept and there are many tools out there to create interactive content (from Apple’s own QuickTime to Microsoft PowerPoint). But the shift to interactive content has been slower than many people (including educational technologists) would have predicted. In other words, there’s still a lot to be done with interactive content. Especially if you think about multitouch-based mobile devices.

Which eventually brings me back to learning and teaching.

I don’t “teach naked”, I do use slides in class. In fact, my slides are mostly bullet points, something presentation specialists like to deride. Thing is, though, my slides aren’t really meant for presentation and, while they sure are “content”, I don’t really use them as such. Basically, I use them as a combination of cue cards, whiteboard, and coursenotes. Though I may sound defensive about this, I’m quite comfortable with my use of slides in the classroom.

Yet, I’ve been looking intently for other solutions.

For instance, I used to create outlines in OmniOutliner that I would then send to LaTeX to produce both slides and printable outlines (as PDFs). I’ve thought about using S5, but it doesn’t really fit in my workflow. So I end up creating Keynote files on my Mac, uploading them (as PowerPoint) before class, and using them in the classroom on my iPad. Not ideal, but rather convenient.

(Interestingly enough, the main thing I need to do today is create PowerPoint slides as ancillary material for a textbook.)

In all of these cases, the result isn’t really interactive. Sure, I could add buttons and interactive content to the slides. But the basic model is linear, not interactive. The reason I don’t feel bad about it is that my teaching is very interactive (the largest proportion of classtime is devoted to open discussions, even with 100-plus students). But I still wish I could have something more appropriate.

I have used other tools, especially whiteboarding and mindmapping ones. Basically, I elicit topics and themes from students and we discuss them in a semi-structured way. But flow remains an issue, both in terms of workflow and in terms of conversation flow.

So if Apple were to come up with tools making it easy to create interactive content, I might integrate them in my classroom work. A “killer feature” here would be the ability to record interaction during class and then upload it as an interactive podcast (à la ProfCast).

Of course, content-creation tools might make a lot of sense outside the classroom. Not only could they help distribute the results of classroom interactions but they could help in creating learning material to be used ahead of class. These could include the aforementioned learning objects (like Mouse Party) as well as interactive quizzes (like Hot Potatoes) and even interactive textbooks (like Moglue) and educational apps (plenty of these in the App Store).

Which brings me back to textbooks, the alleged focus of this education event.

One of my main issues with textbooks, including online ones, is usability. I read pretty much everything online, including all the material for my courses (on my iPad) but I find CourseSmart and its ilk to be almost completely unusable. These online textbooks are, in my experience, much worse than scanned and OCRed versions of the same texts (in part because they don’t allow for offline access but also because they make navigation much more difficult than in GoodReader).

What I envision is an improvement over PDFs.

Part of the issue has to do with PDF itself. Despite all its benefits, Adobe’s “Portable Document Format” is a relic of a bygone era. Sure, it’s ubiquitous and can preserve formatting. It’s also easy to integrate into diverse tools. In fact, if I understand things correctly, PDF replaced Display PostScript as the basis for Quartz 2D, a core part of Mac OS X’s graphics rendering. But it doesn’t mean that it can’t be supplemented by something else.

Part of the improvement has to do with flexibility. Because of its emphasis on preserving print layouts, PDF tends to enforce print-based ideas. This is where EPUB is at a significant advantage. In a way, EPUB textbooks might be the first step away from the printed model.

From what I can gather, EPUB files are a bit like Web archives. Unlike PDFs, they can be reformatted at will, just like webpages can. In fact, iBooks and other EPUB readers (including Adobe’s, IIRC) allow for on-the-fly reformatting, which puts the reader in control of a much greater part of the reading experience. This is exactly the kind of thing publishers fail to grasp: readers, consumers, and users want more control over the experience. EPUB textbooks would thus be easier to read than PDFs.
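
(That “Web archive” nature is easy to see for yourself: an EPUB is essentially a ZIP container holding XHTML content and a bit of metadata. A minimal sketch, with a placeholder filename:)

    # Peek inside an EPUB to see its Web-like structure. An EPUB is a ZIP
    # container: a "mimetype" file, META-INF/container.xml pointing to the
    # package document, and (mostly) XHTML and CSS content.
    import zipfile

    with zipfile.ZipFile("some_textbook.epub") as book:  # placeholder filename
        for name in book.namelist():
            print(name)
    # Typical entries: mimetype, META-INF/container.xml, OEBPS/content.opf,
    # OEBPS/toc.ncx, OEBPS/chapter1.xhtml, OEBPS/styles.css, ...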

EPUB is the basis for Apple’s iBooks and iBookstore and people seem to be assuming that Thursday’s announcement will be about iBooks. Makes sense and it’d be nice to see an improvement over iBooks. For one thing, it could support EPUB 3. There are conversion tools but, AFAICT, iBooks is stuck with EPUB 2.0. An advantage there is that EPUB 3 files can include scripts and interactivity. Which could make things quite interesting.

Interactive formats abound. In fact, PDFs can include some interactivity. But, as mentioned earlier, there’s a lot of room for improvement in interactive content. In part, creation tools could be “democratized”.

Which gets me thinking about recent discussions over the fate of HyperCard. While I understand John Gruber’s longstanding position, I find room for HyperCard-like tools. Like some others, I even had some hopes for ATX-based TileStack (an attempt to bring HyperCard stacks back to life, online). And I could see some HyperCard thinking in an alternative to both Flash and PDF.

“Huh?”, you ask?

Well, yes. It may sound strange but there’s something about HyperCard which could make sense in the longer term. Especially if we get away from the print model behind PDFs and the interaction model behind Flash. And learning objects might be the ideal context for this.

Part of this is about hyperlinking. It’s no secret that HyperCard was among HTML’s precursors. As the part of HTML which we just take for granted, hyperlinking is among the most undervalued features of online content. Sure, we understand the value of sharing links on social networking systems. And there’s a lot to be said about bookmarking. In fact, I’ve been thinking about social bookmarking and I have a wishlist about sharing tools, somewhere. But I’m thinking about something much more basic: hyperlinking is one of the major differences between online and offline writing.

Think about the differences between, say, a Wikibook and a printed textbook. My guess is that most people would focus on the writing style, tone, copy-editing, breadth, reviewing process, etc. All of these are relevant. In fact, my sociology classes came up with variations on these as disadvantages of the Wikibook over printed textbooks. Prior to classroom discussion about these differences, however, I mentioned several advantages of the Wikibook:

  • Cover bases
  • Straightforward
  • Open Access
  • Editable
  • Linked

(Strangely enough, embedded content from iWork.com isn’t available and I can’t log into my iWork.com account. Maybe it has to do with Thursday’s announcement?)

That list of advantages is one I’ve been using since I started to use this Wikibook… except for the last one. And that one hit me, recently, as being more important than the others.

So, in class, I talked about the value of links and it’s been on my mind quite a bit. Especially in view of textbooks. And critical thinking.

See, academic (and semi-academic) writing is based on references, citations, quotes. English-speaking academics are likely to be the people in the world of publishing who cite the most profusely. It’s not rare for a single paragraph of academic writing in English to contain ten citations or more, often strung together in parentheses (Smith 1999, 2005a, 2005b; Smith and Wesson 1943, 2010). And I’m not talking about Proust-style paragraphs either. I’m convinced that, with some quick searches, I could come up with a paragraph of academic writing which has less “narrative content” than citations.

Textbooks aren’t the most egregious example of what I’d consider over-citing. But they do rely on citations quite a bit. As I work more specifically on textbook content, I notice even more clearly the importance of citations. In fact, in my head, I started distinguishing some patterns in textbook content. For instance, there are sections which mostly contain direct explanations of key concepts while other sections focus on personal anecdotes from the authors or extended quotes from both sides of a debate. But some of the most obvious sections are summaries of key texts.

For instance (hypothetical example):

As Nora Smith explained in her 1968 study Coming Up with Something to Say, the concept of interpretation has a basis in cognition.

Smith (1968: 23) argued that Peirce’s interpretant had nothing to do with theatre.

These citations are less conspicuous than they’d be in peer-reviewed journals. But they’re a central part of textbook writing. One of their functions should be to allow readers (undergraduate students, mostly) to learn more about a topic. So, when a student wants to know more about Nora Smith’s reading of Peirce, she “just” has to locate Smith’s book, go to the right page, scan the text for the name “Peirce,” and read the relevant paragraph. Nothing to it.

Compare this to, say, a blogpost. I only cite one text, here. But it’s linked instead of being merely cited. So readers can quickly know more about the context for what I’m discussing before going to the library.

Better yet, this other blogpost of mine is typical of what I’ve been calling a linkfest, a post containing a large number of links. Had I put citations instead of links, the “narrative” content of this post would be much less than the citations. Basically, the content was a list of contextualized links. Much textbook content is just like that.

In my experience, online textbooks are citation-heavy and take almost no benefit from linking. Oh, sure, some publisher may replace citations with links. But the result would still not be the same as writing meant for online reading because ex post facto link additions are quite different from link-enhanced writing. I’m not talking about technological determinism, here. I’m talking about appropriate tool use. Online texts can be quite different from printed ones and writing for an online context could benefit greatly from this difference.

In other words, I care less about what tools publishers are likely to use to create online textbooks than about a shift in the practice of online textbooks.

So, if Apple comes out with content-creation tools on Thursday (which sounds likely), here are some of my wishes:

  • Use of open standards like HTML5 and EPUB (possibly a combination of the two).
  • Completely cross-platform (should go without saying, but Apple’s track record isn’t that great, here).
  • Open Access.
  • Link library.
  • Voice support.
  • Mobile creation tools as powerful as desktop ones (more like GarageBand than like iWork).
  • HyperCard-style emphasis on hyperlinked structures (à la “mini-site” instead of web archives).
  • Focus on rich interaction (possibly based on the SproutCore web framework).
  • Replacement for iWeb (which is being killed along with MobileMe).
  • Easy creation of lecturecasts.
  • Deep integration with iTunes U.
  • Combination of document (à la Pages or Word), presentation (à la Keynote or PowerPoint), and standalone apps (à la The Elements or even Myst).
  • Full support for course management systems.
  • Integration of textbook material and ancillary material (including study guides, instructor manuals, testbanks, presentation files, interactive quizzes, glossaries, lesson plans, coursenotes, etc.).
  • Outlining support (more like OmniOutliner or even like OneNote than like Keynote or Pages).
  • Mindmapping support (unlikely, but would be cool).
  • Whiteboard support (both in-class and online).
  • Collaboration features (à la Adobe Connect).
  • Support for iCloud (almost a given, but it opens up interesting possibilities).
  • iWork integration (sounds likely, but still in my wishlist).
  • Embeddable content (à la iWork.com).
  • Stability, ease of use, and low-cost (i.e., not Adobe Flash or Acrobat).
  • Better support than Apple currently provides for podcast production and publishing.
  • More publisher support than for iBooks.
  • Geared toward normal users, including learners and educators.

The last three are probably where the problem lies. It’s likely that Apple has courted textbook publishers and may have convinced them that they should up their game with online textbooks. It’s clear to me that publishers risk falling into oblivion if they don’t wake up to the potential of learning content. But I sure hope the announcement goes beyond an agreement with publishers.

Rumour has it that part of the announcement might have to do with bypassing state certification processes, in the US. That would be a big headline-grabber because the issue of state certification is something of a wedge issue. Could be interesting, especially if it means free textbooks (though I sure hope they won’t be ad-supported). But that’s much less interesting than what could be done with learning content.

“User-generated content” may be one of the core improvements in recent computing history, much of which is relevant for teaching. As fellow anthro Mike Wesch has said:

We’ll need to rethink a few things…

And Wesch sure has been thinking about learning.

Problem is, publishers and “user-generated content” don’t go well together. I’m guessing that it’s part of the reason for Apple’s insufficient support for “user-generated content”. For better or worse, Apple primarily perceives its users as consumers. In some cases, Apple sides with consumers to make publishers change their tune. In other cases, it seems to be conspiring with publishers against consumers. But in most cases, Apple fails to see its core users as content producers. In the “collective mind of Apple”, the “quality content” that people should care about is produced by professionals. What normal users do isn’t really “content”. iTunes U isn’t an exception: those of us who give lectures aren’t Apple’s core users (even though the education market as a whole has traditionally been an important part of Apple’s business). The fact that Apple courts us underlines the notion that we, teachers and publishers (i.e. non-students), are the ones creating the content. In other words, Apple supports the old model of publishing along with the old model of education. Of course, they’re far from alone in this obsolete mindset. But they happen to have several of the tools which could be useful in rethinking education.

Thursday’s event is likely to focus on textbooks. But much more is needed to shift the balance between publishers and learners. Including a major evolution in podcasting.

Podcasting is especially relevant, here. I’ve often thought about what Apple could do to enhance podcasting for learning. Way beyond iTunes U. Into something much more interactive. And I don’t just mean “interactive content” which can be manipulated seamlessly using multitouch gestures. I’m thinking about the back-and-forth of learning and teaching, the conversational model of interactivity which clearly distinguishes courses from mere content.

I Hate Books

In a way, this is a followup to a discussion happening on Facebook after something I posted (available publicly on Twitter): “(Alexandre) wishes physical books a quick and painfree death. / aime la connaissance.”

As I expected, the reactions I received were from friends who are aghast: how dare I dismiss physical books? Have I no shame?

Apparently, no, not in this case.

And while I posted it as a quip, it’s the result of a rather long reflection. It’s not that I’m suddenly anti-books. It’s that I stopped buying several of the “pro-book” arguments a while ago.

Sure, sure. Books are the textbook case of a technology which needs no improvement. eBooks can’t replace the experience of doing this or that with a book. But that’s what folkloristics defines as a functional shift. Like woven baskets which became objects of nostalgia, books are being maintained as the model for a very specific attitude toward knowledge construction based on monolithic authored texts vetted by gatekeepers and sold as access to information.

An important point, here, is that I’m not really thinking about fiction. I used to read two novel-length works a week (collections of short stories, plays…), for a period of about 10 years (ages 13 to 23). So, during that period, I probably read about 1,000 novels, ranging from Proust’s Recherche to Baricco’s Novecento and the five books of Rabelais’s Pantagruel series. This was after having read a good deal of adolescent and young adult fiction. By today’s standards, I might be considered fairly well-read.

My life has changed a lot, since that time. I didn’t exactly stop reading fiction but my move through graduate school eventually shifted my reading time from fiction to academic texts. And I started writing more and more, online and offline.
During the same period, the Web was also making me shift from pointed longform texts to copious amounts of shortform text. Much more polyvocal than what Bakhtin himself would have imagined.

(I’ve also been shifting from French to English, during that time. But that’s almost another story. Or it’s another part of the story which can remain in the backdrop without being addressed directly at this point. Ask, if you’re curious.)
The increase in my writing activity is, itself, a shift in the way I think, act, talk… and get feedback. See, the fact that I talk and write a lot, in a variety of circumstances, also means that I get a lot of people to play along. There’s still a risk of groupthink, in specific contexts, but one couldn’t say I keep getting things from the same perspective. In fact, the very Facebook conversation which sparked this blogpost is an example, as the people responding there come from relatively distant backgrounds (though there are similarities) and were not specifically queried about this. Their reactions have a very specific value, to me. Sure, it comes in the form of writing. But it’s giving me even more of something I used to find in writing: insight. The stuff you can’t get through Google.

So, back to books.

I dislike physical books. I wish I didn’t have to use them to read what I want to read. I do have a much easier time with short reading sessions on a computer screen than with what would turn into rather long stretches of holding a book in my hands.

Physical books just don’t do it for me, anymore. The printing press is, like, soooo 1454!

Yes, books had “a good run.” No, nothing replaces them. That’s not the way it works. Movies didn’t replace theater, television didn’t replace radio, automobiles didn’t replace horses, photographs didn’t replace paintings, books didn’t replace orality. In fact, the technology itself doesn’t do much by itself. But social contexts recontextualize tools. If we take technology to be the set of both tools and the knowledge surrounding them, technology mostly works through social processes, since tool repertoires and the corresponding knowledge shift in social contexts, not through their mere existence. Gutenberg’s Bible was a “game-changer” for social as well as technical reasons.

And I do insist on orality. Journalists and other “communication is transmission of information” followers of Shannon & Weaver tend to portray writing as the annihilation of orality. How long after the invention of writing was the Homeric oral tradition transferred to the written medium? Didn’t Albert Lord show the vitality of the epic well into the 20th Century? Isn’t a lot of our knowledge constructed through oral means? Is Internet writing that far, conceptually, from orality? Is literacy a simple on/off switch?

Not only did I maintain an interest in orality through the most book-focused moments of my life but I probably care more about orality now than I ever did. So I simply cannot accept the idea that books have simply replaced the human voice. It doesn’t add up.

My guess is that books won’t simply disappear either. There should still be a use for “coffee table books” and books as gifts or collectables. Records haven’t disappeared completely and CDs still have a few more days in dedicated stores. But, in general, we’re moving away from the “support medium” for “content” and more toward actual knowledge management in socially significant contexts.

In these contexts, books often make little sense. Reading books is passive, while these contexts are about (hyper- and inter-)activity.

Case in point (and the reason I felt compelled to post that Facebook/Twitter quip)…
I hear about a “just released” French book during a Swiss podcast. Of course, it’s taken a while to write and publish. So, by the time I heard about it, there was no way to participate in the construction of knowledge which led to it. It was already “set in stone” as an “opus.”

Looked for it at diverse bookstores. One bookstore could eventually order it. It’d take weeks and be quite costly (for something I was merely curious about, not something I depended on for anything really important).

I eventually found it in the catalogue at BANQ. I reserved it. It wasn’t on the shelves yet, so I had to wait until it was. That took from November to February. I eventually got a message that I had a couple of days to pick up my reservation, but I wasn’t able to go. So it went back on the “just released” shelves. I had the full call number but books in that section aren’t in call number sequence. I spent several minutes looking back and forth between eight shelves before finding out that there were four more shelves in the “humanities and social sciences” section. The book I was looking for was on one of those shelves.

So, I was able to borrow it.

Phew!

In the metro, I browse through it. Given my academic reflex, I look for the back matter first. No bibliography, no index, a ToC with rather obscure titles (at random: «Taylor toujours à l’œuvre»/”Taylor still at work,” which I’m assuming to be a reference to continuing taylorism). The book is written by two separate dudes but there’s no clear indication of who wrote what. There’s a preface (by somebody else) but no “acknowledgments” section, so it’s hard to see who’s in their network. Footnotes include full URLs to rather broad sites as well as “discussion with <an author’s name>.” The back cover starts off with references to French popular culture (including something about “RER D,” which would be difficult to search). Information about both authors fits in less than 40 words (including a list of publication titles).

The book itself is fairly large print and weighs almost a pound (422g, to be exact) for 327 pages (including front and back matter). Each page seems to hold about 50 characters per line and about 30 lines per page. So, about half a million characters or 3,500 tweets (including spaces). At 5+1 characters per word, about 80,000 words (I have a 7,500-word blogpost, written in an afternoon). At about 250 words per minute, about five hours of reading. This book is listed at 19€ (about 27CAD).
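
For what it’s worth, that estimate is easy to redo. Here’s the same back-of-envelope arithmetic as a few lines of Python (the inputs are my rough guesses, not measurements):

```python
# Back-of-envelope reading math for the book described above.
# All inputs are rough estimates, so outputs are orders of magnitude only.
pages = 327
chars_per_line = 50
lines_per_page = 30
chars_per_word = 5 + 1            # five letters plus a space
words_per_minute = 250
tweet_length = 140

characters = pages * chars_per_line * lines_per_page
words = characters / chars_per_word
reading_hours = words / words_per_minute / 60

print(f"{characters:,} characters")                # ~490,000 ("half a million")
print(f"{characters / tweet_length:,.0f} tweets")  # ~3,500
print(f"{words:,.0f} words")                       # ~80,000
print(f"{reading_hours:.1f} hours of reading")     # ~5.5
```
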
There’s no direct way to do any “postprocessing” with the text: no speech synthesis for the visually impaired, no concordance analysis, no machine translation; even a simple search for occurrences of “Sarkozy” is impossible. Not to mention sharing quotes with students or annotating in an easy-to-retrieve fashion (à la Diigo).

Like any book, it’s impossible to read in the dark, and I actually have a hard time finding a spot where I can read with appropriate lighting.

Flipping through the book, I get the impression that there are some valuable things to spark discussions, but there’s also a whole lot of redundancy with frequent discussions on the topic (the Future of Journalism, or #FoJ, as a matter of fact). My guesstimate is that, out of 5 hours of reading, I’d get at most 20 pieces of insight that I’d have exactly no way to find elsewhere. Comparable books to which I listened as audiobooks, recently, had even fewer. In other words, I’d have at most 20 tweets worth of things to say from the book. Almost a 200:1 compression.
Direct discussion with the authors could produce much more insight. The radio interviews with these authors already contained a few hints of insight, which predisposed me to look for more. But, so many months later, without the streams of thought which animated me at the time, I end up with something much less valuable than what I wanted to get, back in November.

Bottom line: books aren’t necessarily “broken” as a tool. They just don’t fit my life, anymore.

Landing On His Feet: Nicolas Chourot

Listening to Nicolas Chourot‘s début album: First Landing (available on iTunes). Now, here’s someone who found his voice.

A few years ago, Nicolas Chourot played with us as part of Madou Diarra & Dakan, a group playing music created for Mali’s hunters’ associations.

Before Chourot joined us, I had been a member of Dakan for several years and my perspective on the group’s music was rather specific. As an ethnomusicologist working on the original context for hunters’ music, I frequently tried to maintain the connection with what makes Malian hunters so interesting, including a certain sense of continuity through widespread changes.

When Nicolas came up with his rather impressive equipment, I began to wonder how it would all fit. A very open-minded, respectful, and personable musician, Nicolas was able to both transform Dakan’s music from within and adapt his playing to a rather distant performance style. Not an easy task for any musician, and Nicolas sure deserved to be commended for such a success.

After a while, Chourot and Dakan’s Madou Diarra parted ways. Still, Nicolas remained a member of the same informal music network as several people who had been in Dakan, including several of my good friends. And though I haven’t seen Nicolas in quite a while, he remains in my mind as someone whose playing and attitude toward music I enjoy.

Unfortunately, I was unable to attend Nicolas’s launch show, on August 29. What’s strange is that it took me until today to finally buy Nicolas’s album. Not exactly sure why. Guess my mind was elsewhere. For months.

Ah, well… Désolé Nicolas!

But I did finally get the album. And I’m really glad I did!

When I first heard Nicolas’s playing, I couldn’t help but think about Michel Cusson. I guess it was partly because both have been fusing Jazz and “World” versions of the electric guitar. But there was something else in Nicolas’s playing that I readily associated with Cusson. Never analyzed it. Nor am I planning to analyze it at any point. Despite my music school background and ethnomusicological training, I’ve rarely been one for formal analysis. But there’s something intriguing, there, as a connection. It’s not “imitation as sincerest form of flattery”: Chourot wasn’t copying Cusson. But it seemed like both were “drinking from the same spring,” so to speak.

In First Landing, this interpretation comes back to my mind.

See, not only does Chourot’s playing still have some Cussonisms, but I hear other voices connected to Cusson’s. Including that of Cusson’s former bandmate Alain Caron. And even Uzeb itself, the almost mythical band which brought Caron and Cusson together.

For a while, in the 1980s, Uzeb dominated a large part of Quebec’s local Jazz market. At the time, other Jazz players were struggling to get some recognition. As they do now. To an extent, Uzeb was a unique phenomenon in Quebec’s musical history since, despite their diversity and the quality of their work, Quebec’s Jazz musicians haven’t become mainstream again. Which might be a good thing but bears some reflection. What was so special about Uzeb? Why did it disappear? Can’t other Jazz acts fill the space left by Uzeb, after all these years?

I don’t think it’s what Nicolas is trying to do. But if he were, First Landing would be the way to go at it. It doesn’t “have all the ingredients.” That wouldn’t work. But, at the risk of sounding like an old cub scout, it has “the Uzeb spirit.”

Which brings me to other things I hear. Other bands with distinct, if indirect, Uzebian connections.

One is Jazzorange, which was a significant part of Lausanne’s Jazz scene when I was living there. My good friend Vincent Jaton introduced me to Jazzorange in 1994 and Uzeb’s alumni Caron and Cusson were definitely on my mind at the time.

Vincent, musician and producer extraordinaire, introduced me to a number of musicians and I owe him a huge debt for helping me along a path to musical (self-)discovery. Vincent’s own playing also shares a few things with what I hear in First Landing, but the connection with Jazzorange is more obvious, to me.

Another band I hear in connection to Chourot’s playing is Sixun. That French band, now 25 years old, is probably among the longest-lasting acts in this category of Jazz. Some Jazz ensembles are older (including one of my favourites, Oregon). But Sixun is a key example of what some people call “Jazz Fusion.”

Which is a term I avoided, as I mentioned diverse musicians. Not because I personally dislike the term. It’s as imprecise as any other term describing a “musical genre” (and as misleading as some of my pet peeves). But I’m not against its use, especially since there is a significant degree of agreement about several of the musicians I mention being classified (at least originally) as “Fusion.” Problem is, the term has also been associated with an attitude toward music which isn’t that conducive to thoughtful discussion. In some ways, “Fusion” is used for dismissal more than as a way to discuss musical similarities.

Still, there are musical features that I appreciate in a number of Jazz Fusion performances, some of which are found in some combination through the playing of several of the musicians I’m mentioning here.

Some things like the interactions between the bass and other instruments, some lyrical basslines, the fact that melodic lines may be doubled by the bass… Basically, much of it has to do with the bass. And, in Jazz, the bass is often key. As Darcey Leigh said to Dale Turner (Lonette McKee and Dexter Gordon’s characters in ‘Round Midnight):

You’re the one who taught me to listen to the bass instead of the drums

Actually, there might be a key point about the way yours truly listens to bass players. Even though I’m something of a “frustrated bassist” (but happy saxophonist), I probably have a limited understanding of bass playing. To me, there’s a large variety of styles of bass playing, of course, but several players seem to sound a bit like one another. It’s not really a full classification that I have in my mind but I can’t help but hear similarities between bass performers. Like clusters.

Sometimes, these links may go outside of the music domain, strictly speaking.  For instance, three of my favourite bassists are from Cameroon: Guy Langue, Richard Bona, and Étienne Mbappe. Not that I heard these musicians together: I noticed Mbappe as a member of ONJ in 1989, I first heard Bona as part of the Zawinul syndicate in 1997, and I’ve been playing with Langue for a number of years (mostly with Madou Diarra & Dakan). Further, as I’m discovering British/Nigerian bass player Michael Olatuja, I get to extend what I hear as the Cameroonian connection to parts of West African music that I know a bit more about. Of course, I might be imagining things. But my imagination goes in certain directions.

Something similar happens to me with “Fusion” players. Alain Caron is known for his fretless bass sound and virtuosic playing, but it’s not really about that, I don’t think. It’s something about the way the bass is embedded in the rest of the band, with something of a Jazz/Rock element but also more connected to lyricism, complex melodic lines, and relatively “clean” playing. The last one may relate, somehow, to the Fusion stereotype of coldness and machine-like precision. But my broad impression of what I might call “Fusion bass” actually involves quite a bit of warmth. And humanness.

Going back to Chourot and other “Jazz Fusion” acts I’ve been thinking about, it’s quite possible that Gilles Deslauriers (who plays bass on Chourot’s First Landing) is the one who reminds me of other Fusion acts. No idea if Bob Laredo (Jazzorange), Michel Alibo (Sixun), Alain Caron (Uzeb), and Gilles Deslauriers really all have something in common. But my own subjective assessment of bass playing connects them in a special way.

The most important point, to me, is that even if this connection is idiosyncratic, it still helps me enjoy First Landing.

Nicolas Chourot and his friends from that album (including Gilles Deslauriers) are playing at O Patro Výš, next Saturday (January 23, 2010).

Personal Devices

Still thinking about touch devices, such as the iPod touch and the rumoured “Apple Tablet.”

Thinking out loud. Rambling even more crazily than usual.

Something important about those devices is the need for a real “Personal Digital Assistant.” I put PDAs as a keyword for my previous post because I do use the iPod touch like I was using my PalmOS and even NewtonOS devices. But there’s more to it than that, especially if you think about cloud computing and speech technologies.
I mentioned speech recognition in that previous post. SR tends to be a pipedream of the computing world. Despite all the hopes put into realtime dictation, it still hasn’t taken off in a big way. One reason might be that it’s still somewhat cumbersome to use, in current incarnations. Another reason is that it’s relatively expensive as a standalone product which requires some getting used to. But I get the impression that another set of reasons has to do with the fact that it’s a much better fit for a personal device. Partly because it needs to be trained. But also because voice itself is a personal thing.

Cloud computing also takes on a new meaning with a truly personal device. It’s no surprise that there are so many offerings with some sort of cloud computing feature in the App Store. Not only do Apple’s touch devices have limited file storage space but the notion of accessing your files in the cloud goes well with a personal device.
So, what’s the optimal personal device? I’d say that Apple’s touch devices are getting close to it but that there’s room for improvement.

Some perspective…

Originally, the PC was supposed to be a “personal” computer. But the distinction was mostly with mainframes. PCs may be owned by a given person, but they’re not so tied to that person, especially given the fact that they’re often used in a single context (office or home, say). A given desktop PC can be important in someone’s life, but it’s not always present like a personal device should be. What’s funny is that “personal computers” became somewhat more “personal” with the ‘Net and networking in general. Each computer had a name, etc. But those machines remained somewhat impersonal. In many cases, even when there are multiple profiles on the same machine, it’s not so safe to assume who the current user of the machine is at any given point.

On paper, the laptop could have been that “personal device” I’m thinking about. People may share a desktop computer but they usually don’t share their laptop, unless it’s mostly used like a desktop computer. The laptop being relatively easy to carry, it’s common for people to bring one back and forth between different sites: work, home, café, school… Sounds tautological, as this is what laptops are supposed to be. But the point I’m thinking about is that these are still distinct sites where some sort of desk or table is usually available. People may use laptops on their actual laps, but the form factor is still closer to a portable desktop computer than to the kind of personal device I have in mind.

Then, we can go all the way to “wearable computing.” There’s been some hype about wearable computers but it has yet to really be part of our daily lives. Partly for technical reasons but partly because it may not really be what people need.

The original PDAs (especially those on NewtonOS and PalmOS) were getting closer to what people might need, as personal devices. The term “personal digital assistant” seemed to encapsulate what was needed. But, for several reasons, PDAs have been having a hard time. Maybe there wasn’t a killer app for PDAs, outside of “vertical markets.” Maybe the stylus was the problem. Maybe the screen size and bulk of the device weren’t getting to the exact points where people needed them. I was still using a PalmOS device in mid-2008 and it felt like I was among the last PDA users.
One point was that PDAs had been replaced by “smartphones.” After a certain point, most devices running PalmOS were actually phones. RIM’s Blackberry succeeded in a certain niche (let’s use the vague term “professionals”) and is even beginning to expand out of it. And devices using other OSes have had their importance. It may not have been the revolution some readers of Pen Computing might have expected, but the smartphone has been a more successful “personal device” than the original PDAs.

It’s easy to broaden our focus from smartphones and think about cellphones in general. If the 3.3B figure can be trusted, cellphones may already be outnumbering desktop and laptop computers by 3:1. And cellphones really are personal. You bring them everywhere; you don’t need any kind of surface to use them; phone communication actually does seem to be a killer app, even after all this time; there are cellphones in just about any price range; cellphone carriers outside of Canada and the US are offering plans which are relatively reasonable; despite some variation, cellphones are rather similar from one manufacturer to the next… In short, cellphones already were personal devices, even before the smartphone category really emerged.

What did smartphones add? Basically, a few PDA/PIM features and some form of Internet access or, at least, some form of email. “Whoa! Impressive!”

Actually, some PIM features were already available on most cellphones and Internet access from a smartphone is in continuity with SMS and data on regular cellphones.

What did Apple’s touch devices add which was so compelling? Maybe not so much, apart from the multitouch interface, a few games, and integration with desktop/laptop computers. Even then, most of these changes were an evolution over the basic smartphone concept. Still, it seems to have worked as a way to open up personal devices to some new dimensions. People now use the iPhone (or some other multitouch smartphone which came out after the iPhone) as a single device to do all sorts of things. Around the world, multitouch smartphones are still much further from being ubiquitous than cellphones in general. But we could say that these devices have brought the personal device idea to a new phase. At least, one can say that they’re much more exciting than the other personal computing devices.

But what’s next for personal devices?

Take any set of buzzphrases: cloud computing, speech recognition, social media…

These things can all come together, now. The “cloud” is mostly ready and personal devices make cloud computing more interesting because they’re “always-on,” are almost-wearable, have batteries lasting just about long enough, already serve to keep some important personal data, and are usually single-user.

Speech recognition could go well with those voice-enabled personal devices. For one thing, they already have sound input. And, by this time, people are used to seeing others “talk to themselves” as cellphones are so common. Plus, voice recognition is already understood as a kind of security feature. And, despite their popularity, these devices could use a further killer app, especially in terms of text entry and processing. Some of these devices already have voice control and it’s not so much of a stretch to imagine them having what’s needed for continuous speech recognition.

In terms of getting things onto the device, I’m also thinking about such editing features as a universal rich-text editor (à la TinyMCE), predictive text, macros, better access to calendar/contact data, ubiquitous Web history, multiple pasteboards, data detectors, Automator-like processing, etc. All sorts of things which should come from OS-level features.
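
To make just one of those wishes concrete, here’s a toy sketch of the kind of “data detector” I mean (the patterns and sample text are made up; a real OS-level version would hand detected items off to the calendar, the dialer, and other apps):

```python
# A toy "data detector": scan free text for dates and phone numbers.
# Patterns and sample text are invented for illustration only.
import re

text = "Meet me on 2010-01-23 around 7pm, or call 514-555-0199 to reschedule."

detectors = {
    "date": r"\b\d{4}-\d{2}-\d{2}\b",
    "phone": r"\b\d{3}-\d{3}-\d{4}\b",
}

for kind, pattern in detectors.items():
    for match in re.findall(pattern, text):
        print(f"{kind}: {match}")  # a real implementation would offer actions here
```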

“Social media” may seem like too broad a category. In many ways, those devices already take part in social networking, user-generated content, and microblogging, to name a few areas of social media. But what about a unified personal profile based on the device instead of the usual authentication method? Yes, all sorts of security issues. But aren’t people unconcerned about security in the case of social media? Twitter accounts are being hacked left and right yet Twitter doesn’t seem to suffer much. And there could be added security features on a personal device which is meant to really integrate social media. Some current personal devices already work well as a way to keep login credentials to multiple sites. The next step, there, would be to integrate all those social media services into the device itself. We may be waiting for OpenSocial, OpenID, OAuth, Facebook Connect, Google Connect, and all sorts of APIs to bring us to an easier “social media workflow.” But a personal device could simplify the “social media workflow” even further, with just a few OS-based tweaks.

Unlike in my previous post, I’m not holding my breath for some specific event which will bring us the ultimate personal device. After all, this is just a new version of my ultimate handheld device blogpost. But, this time, I was focusing on what it means for a device to be “personal.” It’s even more of a drafty draft than my blogposts usually have been ever since I decided to really RERO.

So be it.

Women’s Presence and Geek Culture (Ada Lovelace Day) #ald09

In 2009, International Women’s Day was cut short by an hour in those places that switched to daylight saving time on March 8. Yet, more than ever, it is women to whom we should be giving more room. This international day in honour of Ada Lovelace and of women in technology fields is an excellent occasion to discuss the importance of women’s presence for social sustainability.

For a male feminist, talking about women’s condition poses certain challenges. Who am I to speak about women? By what right would I appropriate a voice which, in my opinion, should be given to women? Aren’t my words tinged with bias? It is therefore more as an observer of what I tend to call “geek culture” (or even the “geek niche” or the “geek crowd”) that I talk about this female presence.

At the risk of falling into the stereotype trap, I would venture that an increased presence of women in geek circles can have interesting impacts, given certain roles assigned to women in the various societies connected to geek culture. In other words, I would like to celebrate feminine power, which is far more fundamental than masculine “strength.”

In this, I am referring to notions about women and men that were revealed to me during my research on hunters’ associations in Mali. Apparently exclusively male, West African hunters’ associations grant a preponderant place to femininity. As the proverb goes, “we are all in our mothers’ arms” (bèè y’i ba bolo). If the father, our first rival (i fa y’i faden folo de ye), can give us physical strength, it is the mother who gives us potency, true power.

Far be it from me to assign to women a power that would come only from their capacity to give birth. It is not solely as a mother that a woman deserves respect. On the contrary, women’s diverse roles all deserve to be celebrated. What gives motherhood such importance, from a male point of view, is its universality: a man may have no sister, no wife, and no daughter, he may not even know the precise identity of his father, but he has at the very least had contact with his mother, from conception to birth.

It is often by reference to motherhood that men conceive of the most unconditional respect for women. And the maternal image should not be neglected, even if it is often stereotyped. Even though the verb “to mother” has pejorative connotations, it evokes a form of care that is attuned and given without any specific motive. Does geek culture need mothering?

A recent study looked at the hormonal dimension of Wall Street traders’ activities, especially with respect to risk-taking. According to this study (described in a popular-science podcast), there may be a link between certain hormone levels and behaviour based on short-term profit. These hormones are most present in young men, who make up the majority of that professional group. If the study’s results hold, a group of traders that is more diverse in terms of sex and age is likely to be more prudent than a group dominated by young men.

Despite enormous differences in the details, geek culture bears some resemblance to the makeup of Wall Street, at least from a hormonal point of view. Even if the lure of profit is less salient there than on the trading floor, geek culture gives a very large place to the meritocratic cult of competition and to the image of the brilliant, all-powerful individual. Risk-taking isn’t a very visible characteristic of geek culture, but the “troubleshooting” approach evokes hasty decisions rather than deep reflection. The role of fair and respectful dialogue, without being absent, is only rarely highlighted. Geek culture is “international,” in the sense that it finds its place in various parts of the globe (generally defined with some precision around nerve centres such as Silicon Valley). Yet it is far from representative of human diversity. The far-too-low proportion of women connected to geek culture is an important mark of this lack of diversity. A less homogeneous group would make the notion of cooperation more salient and, with it, a greater concern for human dignity. After all, true humanism is as philogynous as it is philanthropic.

A similar principle has been stated in the context of medical care. Without being assigned to specific tasks associated with their sex, the presence of women physicians seems to improve certain aspects of medical work. There may be an implicit stereotype in all of this, and women in the medical sector are probably not treated much better than women in other fields. Yet, beyond the stereotype, the association between femininity and caregiving seems to persist in the minds of members of certain societies and can be used to make medicine more “humane,” both through diversity and through that notion of reasoned empathy evoked by humanism.

I can’t help thinking of the remarkable experience, already a few years ago, of taking part in an academic conference with a strong female presence. Beyond a high proportion of women, that conference on food and culture gave pride of place to the image of the nurturing mother, to the fundamental influence of the domestic sphere on social life. Although male, I felt at ease there, and I keep from those few days the idea that an even slightly feminized world could have interesting effects, socially speaking. A group that grants real respect to women’s condition can be associated with an atmosphere marked by care, a “nurturing” atmosphere.

The geek world can be very pleasant, on various levels, but the notion of care, empathy, or even humanism are not among its most obvious characteristics. A geek world that gave more importance to women’s presence might be more humane than an overall portrait of geek culture would suggest.

And isn’t that what has happened? The ’Net has partly feminized over the past ten years, and the emergence of social media is intimately linked to this “demographic” transformation.

Some speak of the “democratization” of the Internet, using vocabulary associated with journalism and with the notion of the nation-state. Even if the point is to talk about more uniform access to technological means, the source of this discourse lies in a specific vision of social structure. A leftover from the Industrial Revolution, perhaps? Since the ’Net is built beyond political borders, that worldview seems ill-suited to globalized communication. Besides, what do we really mean by “democratization” of the Internet? The active participation of diverse people in the decision-making processes that continually create the ’Net? The mere juxtaposition of people from distinct socio-economic backgrounds? The possibility for the majority of the planet to use certain tools in order to obtain the benefits to which it is entitled, by statistical prerogative? If that is the case, it would be up to women, who are the majority on the globe, to decide the fate of the ’Net. Yet it is mostly men who dominate the ’Net. The control exercised by men seems indirect, but it is no less real.

This state of affairs is starting to change. Though they are still not dominant, women are more and more present online. Some statistical research even seems to give them the majority in certain spheres of online activity. But my approach is holistic and qualitative rather than statistical and deterministic. It is the roles played by women that I am thinking about. If some of these roles seem to descend in a straight line from the mid-twentieth-century stereotype of sexual inequality, it is also by acknowledging the hold of the past that we can understand certain dimensions of our present. Things have changed, granted. Awareness of this change informs some of our actions. Few of us have completely set aside the notion that our “shared past” was patriarchal and misogynistic. And that notion retains its significance in our daily gestures, since we compare ourselves to a specific model, one tied to domination and class struggle.

At the risk, once again, of appealing to stereotypes, I would like to talk about a tendency I find fascinating in the behaviour of some women within social media. Women bloggers, for instance, have often succeeded in building communities of loyal readers, small groups of friends who share their lives in public. Instead of chasing the largest number of visits, several women have based their blogging activity on relatively small but very active groups. In fact, some women’s blogs host long, ongoing discussions, linking posts to one another and even extending beyond the blog itself.

On this subject, I base some of my ideas on a few studies of the blogging phenomenon published several years ago (which I would have a hard time locating right now) and on observations within certain “geek scenes” such as Yulblog. At events bringing together many women bloggers, some of them seemed to prefer staying in a small group for much of the event rather than multiplying new contacts. This is not a limitation; some women are better at triggering the “social butterfly effect” than most men. But there is a quiet strength in these small gatherings of women, who base their participation in the blogosphere on strong, direct contacts rather than on “fishing with a net.” It is often through very small, very tight-knit groups that social change happens and, from quilting bees to women’s group blogs, there is an unrecognized power there.

It would probably be an overstatement to say that women’s presence is what triggered the blossoming of social media over the past ten years. But women’s presence is tied to the fact that the ’Net has been able to move beyond the “geek niche.” The domain of what some call “Web 2.0” (or the sixth culture of the Internet) may be no more democratic than the ’Net of the early 1990s. But it is clearly less exclusive and more welcoming.

As my better half read on the front of a tavern: “Ladies welcome!”

Posts published in honour of Ada Lovelace Day were apparently supposed to focus on specific women working in technology fields. I preferred to “think out loud in writing” about a few things that had been on my mind. Still, it would be fitting for me to mention names and not confine this post to a purely macroscopic and impersonal observation. Not being much of an individualist, I prefer to cite several women rather than focus on a single one. All the more so since the woman I think of most intensely says she wishes to keep a certain discretion and, even though she has been blogging for much longer than I have and knows her way around the tools in question very well, she claims not to be associated with technology.

So I decided to proceed with a simple enumeration (alphabetical, I don’t like rankings) of a few women whose work I appreciate and who have an easily identifiable Internet presence. Some of them are very close to me. Others hover over circles I am connected to. Still others are discreet or strong presences in some domain I associate with geek culture and/or social media. Obviously, I’m forgetting tons of them. But it’s a start. The struggle continues! 😉

Influence and Butterflies

Seems like “influence” is a key theme in social media, these days. An example among several others:

Influenceur, autorité, passeur de culture ou l’un de ces singes exubérants | Mario tout de go.

In that post, Mario Asselin brings together a number of notions which are at the centre of current discussions about social media. The core notion seems to be that “influence” replaces “authority” as a quality or skill some people have, more than others. Some people are “influencers” and, as such, they have a specific power over others. Such a notion seems to be widely held in social media and numerous services exist which are based on the notion that “influence” can be measured.
I don’t disagree. There’s something important, online, which can be called “influence” and which can be measured. To a large extent, it’s related to a large number of other concepts such as fame and readership, popularity and network centrality. There are significant differences between all of those concepts but they’re still related. They still depict “social power” which isn’t coercive but is the basis of an obvious stratification.

In some contexts, this is what people mean by “social capital.” I originally thought people meant something closer to Bourdieu but a fellow social scientist made me realise that people are probably using Putnam’s concept instead. I recently learnt that George W. Bush himself used “political capital” in a sense which is fairly similar to what most people seem to mean by “social capital.” Even in that context, “capital” is more specific than “influence.” But the core notion is the same.

To put it bluntly:

Some people are more “important” than others.

Social marketers are especially interested in such a notion. Marketing as a whole is about influence. Social marketing, because it allows for social groups to be relatively amorphous, opposes influence to authority. But influence maintains a connection with “top-down” approaches to marketing.

My own point would be that there’s another kind of influence which is difficult to pinpoint but which is highly significant in social networks: the social butterfly effect.

Yep, I’m still at it after more than three years. It’s even more relevant now than it was then. And I’m now able to describe it more clearly and define it more precisely.

The social butterfly effect is a social network analogue to Edward Lorenz’s well-known “butterfly effect.” As with any analogy, this connection is partial but telling. Like Lorenz’s phrase, “social butterfly effect” is more meaningful than precise. One thing which makes the phrase more important for me is the connection with the notion of a “social butterfly,” which is both a characteristic I have been said to have and a concept I deem important in social science.

I define social butterflies as people who connect to diverse network clusters. Community enthusiast Christine Prefontaine defined social butterflies within (clustered) networks, but I think it’s useful to separate out network clusters. A social butterfly’s network is rather sparse as, on the whole, a small number of people in it have direct connections with one another. But given the topography of most social groups, there likely are clusters within that network. The social butterfly connects these clusters. When the social butterfly is the only node which can connect these clusters directly, her/his “influence” can be as strong as that of a central node in one of these clusters since s/he may be able to bring some new element from one cluster to another.
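
To make that cluster-bridging point a bit more concrete, here’s a minimal sketch (in Python, using the networkx library, with an entirely made-up toy network) showing that a single “butterfly” node linking two dense clusters ends up with the highest betweenness centrality even though it has only two direct ties. It illustrates the structural idea, not how “influence” should actually be measured:

```python
# A toy illustration of the "social butterfly" as the sole bridge between
# two otherwise dense clusters. Names and ties are invented for illustration.
import networkx as nx
from itertools import combinations

G = nx.Graph()

cluster_a = ["ana", "bea", "carl", "dee"]    # one dense cluster
cluster_b = ["emil", "fay", "gus", "hana"]   # another dense cluster

# Everyone within a cluster knows everyone else.
G.add_edges_from(combinations(cluster_a, 2))
G.add_edges_from(combinations(cluster_b, 2))

# The "butterfly" is the only node connecting the two clusters.
G.add_edges_from([("butterfly", "ana"), ("butterfly", "emil")])

centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:10s} {score:.3f}")
# The butterfly tops the list despite having only two ties,
# because every shortest path between the clusters runs through it.
```
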
I like the notion of “repercussion” because it has an auditory sense and it resonates with all sorts of notions I think important without being too buzzwordy. For instance, as expressions like “ripple effect” and “domino effect” are frequently used, they sound like clichés. Obviously, so does “butterfly effect” but I like puns too much to abandon it. From a social perspective, the behaviour of a social butterfly has important “repercussions” in diverse social groups.

Since I define myself as a social butterfly, this all sounds self-serving. And I do pride myself in being a “connector.” Not only in generational terms (I dislike some generational metaphors). But in social terms. I’m rarely, if ever, central to any group. But I’m also especially good at serving as a contact between people from different groups.

Yay, me! 🙂

My thinking about the social butterfly effect isn’t an attempt to put myself on some kind of pedestal. Social butterflies typically don’t have much “power” or “prestige.” Our status is fluid/precarious. I enjoy being a social butterfly but I don’t think we’re better or even more important than anybody else. But I do think that social marketers and other people concerned with “influence” should take us into account.

I say all of this as a social scientist. Some parts of my description are personalized but I’m thinking about a broad stance “from society’s perspective.” In diverse contexts, including this blog, I have been using “sociocentric” in at least three distinct senses: class-based ethnocentrism, a special form of “altrocentrism,” and this “society-centred perspective.” These meanings are distinct enough that they imply homonyms. Social network analysis is typically “egocentric” (“ego-centred”) in that each individual is the centre of her/his own network. This “egocentricity” is both a characteristic of social networks in opposition to other social groups and a methodological issue. It specifically doesn’t imply egotism but it does imply a move away from pre-established social categories. In this sense, social network analysis isn’t “society-centred” and it’s one reason I put so much emphasis on social networks.

In the context of discussions of influence, however, there is a “society-centredness” which needs to be taken into account. The type of “influence” social marketers and others are so interested in relies on defined “spaces.” In some ways, if “so-and-so is influential,” s/he has influence within a specific space, sphere, or context, the boundaries of which may be difficult to define. For marketers, this can bring about the notion of a “market,” including in its regional and demographic senses. This seems to be the main reason for the importance of clusters but it also sounds like a way to recuperate older marketing concepts which seem outdated online.

A related point is the “vertical” dimension of this notion of “influence.” Whether or not it can be measured accurately, it implies some sort of scale. Some people are at the top of the scale, they’re influencers. Those at the bottom are the masses, since we take for granted that pyramids are the main models for social structure. To those of us who favour egalitarianism, there’s something unpalatable about this.

And I would say that online contacts tend toward some form of egalitarianism. To go back to one of my favourite buzzphrases, the notion of attention relates to reciprocity:

It’s an attention economy: you need to pay attention to get attention.

This is one thing journalism tends to “forget.” Relationships between journalists and “people” are asymmetrical. Before writing this post, I read Brian Storm’s commencement speech for the Mizzou J-School. While it does contain some interesting tidbits about the future of journalism, it positions journalists (in this case, recent graduates from an allegedly prestigious school of journalism) away from the masses. To oversimplify, journalists are constructed as those who capture people’s attention by the quality of their work, not by any two-way relationship. Though they rarely discuss this, journalists, especially those in mainstream media, typically perceive themselves as influencers.

Attention often has a temporal dimension which relates to journalism’s obsession with time. Journalists work in time-sensitive contexts, news is timely, audiences spend time with journalistic content, and journalists fight for this audience time as a scarce resource, especially in connection with radio and television. Much of this likely has to do with the fact that journalism is intimately tied to advertising.

As I write this post, I hear on a radio talk show a short discussion about media coverage of Africa. The topic wakes up the africanist in me. The time devoted to Africa in almost any media outside of Africa is not only very limited but spent on very specific issues having to do with Africa. In mainstream media, Africa only “matters” when major problems occur. Even though most parts of Africa are peaceful and there are many fabulously interesting things occurring throughout the continent, Africa is the “forgotten” continent.

A connection I perceive is that, regardless of any other factor, Africans are taken to not be “influential.” What makes this notion especially strange to an africanist is that influence tends to be a very important matter throughout the continent. Most Africans I know or have heard about have displayed a very nuanced and acute sense of “influence” to the extent that “power” often seems less relevant when working in Africa than different elements of influence. I know full well that, to outsiders to African studies, these claims may sound far-fetched. But there’s a lot to be said about the importance of social networks in Africa and this could help refine a number of notions that I have tagged in this post.

Blogging and Literary Standards

I wrote the following comment in response to a conversation between novelist Rick Moody and podcasting pioneer Chris Lydon:

Open Source » Blog Archive » In the Obama Moment: Rick Moody.

In keeping with the RERO principle I describe in that comment, the version on the Open Source site is quite raw. As is my habit, these days, I pushed the “submit” button without rereading what I had written. This version is edited, partly because I noticed some glaring mistakes and partly because I wanted to add some links. (Blog comments are often tagged for moderation if they contain too many links.) As I started editing that comment, I changed a few things, some of which have consequences for the meaning of my comment. There’s this process, in both writing and editing, which “generates new thoughts.” Yet another argument for the RERO principle.

I can already think of an addendum to this post, revolving around my personal position on writing styles (informed by my own blogwriting experience) along with my relative lack of sensitivity to Anglo writing. But I’m still blogging this comment on a standalone basis.

Read on, please… Continue reading Blogging and Literary Standards

Apologies and Social Media: A Follow-Up on PRI's WTP

I did it! I did exactly what I’m usually trying to avoid. And I feel rather good about the outcome despite some potentially “ruffled feathers” («égos froissés»?).

While writing a post about PRI’s The World: Technology Podcast (WTP), I threw caution to the wind.

Why Is PRI’s The World Having Social Media Issues? « Disparate.

I rarely do that. In fact, while writing my post, I was getting an awkward feeling. Almost as if I were writing from a character’s perspective. Playing someone I’m not, with a voice which isn’t my own but that I can appropriate temporarily.

The early effects of my lack of caution took a little bit of time to set in and they were rather negative. What’s funny is that I naïvely took the earliest reaction as being rather positive but it was meant to be very negative. That in itself indicates a very beneficial development in my personal life. And I’m grateful to the person who helped me make this realization.

The person in question is Clark Boyd, someone I knew nothing about a few days ago and someone I’m now getting to know through both his own words and those of people who know about his work.

The power of social media.

And social media’s power is the main target of this, here, follow-up of mine.

 

As I clumsily tried to say in my previous post on WTP, I don’t really have a vested interest in the success or failure of that podcast. I discovered it (as a tech podcast) a few days ago and I do enjoy it. As I (also clumsily) said, I think WTP would rate fairly high on a scale of cultural awareness. To this ethnographer, cultural awareness is too rare a feature in any form of media.

During the latest WTP episode, Boyd discussed what he apparently describes as the mixed success of his podcast’s embedding in social media and online social networking services. Primarily at stake was the status of the show’s Facebook group, which apparently takes too much time to manage and hasn’t increased in membership. But Boyd also made some intriguing comments about other dimensions of the show’s online presence. (If the show were using a Creative Commons license, I’d reproduce these comments here.)

Though it wasn’t that explicit, I interpreted Boyd’s comments to imply that the show’s participants would probably welcome feedback. As giving feedback is an essential part of social media, I thought it appropriate to publish my own raw notes about what I perceived to be the main reasons behind the show’s alleged lack of success in social media spheres.

Let it be noted that, prior to hearing Boyd’s comments, I had no idea what WTP’s status was in terms of social media and social networks. After subscribing to the podcast, the only thing I knew about the show was from the content of those few podcast episodes. Because the show doesn’t go the “meta” route very often (“the show about the show”), my understanding of that podcast was, really, very limited.

My raw notes were set in a tone which is quite unusual for me. In a way, I was “trying it out.” The same tone is used by a lot of friends and acquaintances and, though I have little problem with the individuals who take this tone, I do react a bit negatively when I hear/see it used. For lack of a better term, I’d call it a “scoffing tone.” Not unrelated to the “curmudgeon phase” I described on the same day. But still a bit different. More personalized, in fact. This tone often sounds incredibly dismissive. Yet, when you discuss its target with people who used it, it seems to be “nothing more than a tone.” When people (or cats) use “EPIC FAIL!” as a response to someone’s troubles, they’re not really being mean. They merely use the conventions of a speech community.

Ok, I might be giving these people too much credit. But this tone is so prevalent online that I can’t assume these people have extremely bad intentions. Besides, I can understand the humour in schadenfreude. And I’d hate to use flat-out insults to describe such a large group of people. Even though I do kind of like the self-deprecation made possible by the fact that I adopted the same behaviour.

Whee!

 

So, the power of social media… The tone I’m referring to is common in social media, especially in replies, reactions, responses, comments, feedback. Though I react negatively to that tone, I’m getting to understand its power. At the very least, it makes people react. And it seems to be very straightforward (though I think it’s easily misconstrued). And this tone’s power is but one dimension of the power of social media.

 

Now, going back to the WTP situation.

After posting my raw notes about WTP’s social media issues, I went my merry way. At the back of my mind was this nagging suspicion that my tone would be misconstrued. But instead of taking measures to ensure that my post would have no negative impact (by changing the phrasing or by prefacing it with more tactful comments), I decided to leave it as is.

Is «Rien ne va plus, les jeux sont faits» (“no more bets; the chips are down”) a corollary to the RERO mantra?

While I was writing my post, I added all the WTP-related items I could find to my lists: I joined WTP’s apparently-doomed Facebook group, I started following @worldstechpod on Twitter, I added two separate WTP-related blogs to my blogroll… Once I found out what WTP’s online presence was like, I did these few things that any social media fan usually does. “Giving the podcast some love” is the way some social media people might put it.

One interesting effect of my move is that somebody at WTP (probably Clark Boyd) apparently saw my Twitter add and (a few hours after the fact) reciprocated by following me on Twitter. Because I thought feedback about WTP’s social media presence had been requested, I took the opportunity to send a link to my blogpost about WTP with an extra comment about my tone.

To which the @worldstechpod twittername replied with:

@enkerli right, well you took your best shot at me, I’ll give you that. thanks a million. and no, your tone wasn’t “miscontrued” at all.

Call me “naïve” but I interpreted this positively and I even expressed relief.

Turns out, my interpretation was wrong as this is what WTP replied:

@enkerli well, it’s a perfect tone for trashing someone else’s work. thanks.

I may be naïve but I did understand that the last “thanks” was meant as sarcasm. Took me a while but I got it. And I reinterpreted WTP’s previous tweet as sarcastic as well.

Now, if I had read more of WTP’s tweets, I would have understood the “WTP online persona.”  For instance, here’s the tweet announcing the latest WTP episode:

WTP 209 — yet another exercise in utter futility! hurrah! — http://ping.fm/QjkDX

Not to mention this puzzling and decontextualized tweet:

and you make me look like an idiot. thanks!

Had I paid attention to the @worldstechpod archive, I would even have been able to predict how my blogpost would be interpreted. Especially given this tweet:

OK. Somebody school me. Why can I get no love for the WTP on Facebook?

Had I noticed that request, I would have realized that my blogpost would most likely be interpreted as an attempt at “schooling” somebody at WTP. I would have also realized that tweets on the WTP account on Twitter were written by a single individual. Knowing myself, despite my attempt at throwing caution to the wind, I probably would have refrained from posting my WTP comments or, at the very least, I would have rephrased the whole thing.

I’m still glad I didn’t.

Yes, I (unwittingly) “touched a nerve.” Yes, I apparently angered someone I’ve never met (and there’s literally nothing I hate more than angering someone). But I still think the whole situation is leading to something beneficial.

Here’s why…

After that sarcastic tweet about my blogpost, Clark Boyd (because it’s now clear he’s the one tweeting @worldstechpod) sent the following request through Twitter:

rebuttal, anyone? i can’t do it without getting fired. — http://ping.fm/o71wL

The first effect of this request was soon felt right here on my blog. That reaction was, IMHO, based on a misinterpretation of my words. In terms of social media, this kind of reaction is “fair game.” Or, to use a social media phrase: “it’s alll good.”

I hadn’t noticed Boyd’s request for rebuttal. I was assuming that there was a connection between somebody at the show and the fact that this first comment appeared on my blog, but I thought it was less direct than this. Now, it’s possible that there wasn’t any connection between that first “rebuttal” and Clark Boyd’s request through Twitter. But the simplest explanation seems to me to be that the blog comment was a direct result of Clark Boyd’s tweet.

After that initial blog rebuttal, I received two other blog comments which I consider more thoughtful and useful than the earliest one (thanks to the time delay?). The second comment on my post was from a podcaster (Brad P. from N.J.), but it was flagged for moderation because of the links it contained. It’s a bit unfortunate that I didn’t see this comment in time because it probably would have made me understand the situation a lot more quickly.

In his comment, Brad P. gives some context for Clark Boyd’s podcast. What I thought was the work of a small but efficient team of producers and journalists hired by a major media corporation to collaborate with a wider public (à la Search Engine Season I) now sounds more like the labour of love of an individual journalist with limited support from a Cerberus-like major media institution. I may still be off, but my original impression was “wronger” than this second one.

The other blog comment, from Dutch blogger and Twitter user @Niels, was chronologically the one which first made me realize what was wrong with my post. Niels’s comment is a very effective mix of thoughtful support for some of my points and thoughtful criticism of my post’s tone. Nice job! It actually worked in showing me the error of my ways.

All this to say that I apologise to Mr. Clark Boyd for the harshness of my comments about his show? Not really. I already apologised publicly. And I’ve praised Boyd for both his use of Facebook and of Twitter.

What is it, then?

Well, this post is a way for me to reflect on the power of social media. Boyd talked about social media and online social networks. I’ve used social media (my main blog) to comment on the presence of Boyd’s show in social media and social networking services. Boyd then used social media (Twitter) to not only respond to me but to launch a “rebuttal campaign” about my post. He also made changes to his show’s online presence on a social network (Facebook) and used social media (Twitter) to advertise this change. And I’ve been using social media (Twitter and this blog) to reflect on social media (the “meta” aspect is quite common), find out more about a tricky situation (Twitter), and “spread the word” about PRI’s The World: Technology Podcast (Facebook, blogroll, Twitter).

Sure, I got some egg on my face, some feathers have been ruffled, and Clark Boyd might consider me a jerk.

But, perhaps unfortunately, this is often the way social media works.

 

Heartfelt thanks to Clark Boyd for his help.

Handhelds for the Rest of Us?

Ok, it probably shouldn’t become part of my habits but this is another repost of a blog comment motivated by the OLPC XO.

This time, it’s a reply to Niti Bhan’s enthusiastic blogpost about the eeePC: Perspective 2.0: The little eeePC that could has become the real “iPod” of personal computing

This time, I’m heavily editing my comments. So it’s less of a repost than a new blogpost. In some ways, it’s partly a follow-up to my “Ultimate Handheld Device” post (which ended up focusing on spatial positioning).

Given the OLPC context, the angle here is, hopefully, a culturally aware version of “a handheld device for the rest of us.”

Here goes…

I think there’s room in the World for a device category more similar to handhelds than to subnotebooks. Let’s call it “handhelds for the rest of us” (HftRoU). Something between a cellphone, a portable gaming console, a portable media player, and a personal digital assistant. Handheld devices exist which cover most of these features/applications, but I’m mostly using this categorization to think about the future of handhelds in a globalised World.

The “new” device category could serve as the inspiration for a follow-up to the OLPC project. One thing about which I keep thinking, in relation to the “OLPC” project, is that the ‘L’ part was too restrictive. Sure, laptops can be great tools for students, especially if these students are used to (or need to be trained in) working with and typing long-form text. But I don’t think that laptops represent the most “disruptive technology” around. If we think about their global penetration and widespread impact, cellphones are much closer to the leapfrog effect about which we all have been writing.

So, why not just talk about a cellphone or smartphone? Well, I’m trying to think both more broadly and more specifically. Cellphones are already helping people empower themselves. The next step might be to add selected features which bring them closer to the OLPC dream. Also, since cellphones are widely distributed already, I think it’s important to think about devices which may complement cellphones. I have some ideas about non-handheld tools which could make cellphones even more relevant in people’s lives. But they will have to wait for another blogpost.

So, to put it simply, “handhelds for the rest of us” (HftRoU) are somewhere between the OLPC XO-1 and Apple’s original iPhone, in terms of features. In terms of prices, I dream that it could be closer to that of basic cellphones which are in the hands of so many people across the globe. I don’t know what that price may be but I heard things which sounded like a third of the price the OLPC originally had in mind (so, a sixth of the current price). Sure, it may take a while before such a low cost can be reached. But I actually don’t think we’re in a hurry.

I guess I’m just thinking of the electronics (and global) version of the Ford Model T. With more solidarity in mind. And cultural awareness.

Google’s Open Handset Alliance (OHA) may produce something more appropriate to “global contexts” than Apple’s iPhone. In comparison with Apple’s iPhone, devices developed by the OHA could be better adapted to the cultural, climatic, and economic conditions of those people who don’t have easy access to the kind of computers “we” take for granted. At the very least, the OHA has good representation on at least three continents and, like the old OLPC project, the OHA is officially dedicated to openness.

I actually care fairly little about which teams will develop devices in this category. In fact, I hope that new manufacturers will spring up in some local communities and that major manufacturers will pay attention.

I don’t care who does it; I’m mostly interested in what the devices will make possible. Learning, broadly speaking. Communicating, in different ways. Empowering people, generally.

One thing I have in mind, and which deviates from the OLPC mission, is that there should be appropriate handheld devices for all age-ranges. I do understand the focus on 6-12 year-olds the old OLPC had. But I don’t think it’s very productive to only sell devices to that age-range. Especially not in those parts of the world (i.e., almost anywhere) where generation gaps don’t imply that children are isolated from adults. In fact, as an anthropologist, I react rather strongly to the thought that children should be the exclusive target of a project meant to empower people. But I digress, as always.

I don’t tend to be a feature-freak but I have been thinking about the main features the prototypical device in this category should have. It’s not a rigid set of guidelines. It’s just a way to think out loud about technology’s integration in human life.

The OS and GUI, which seem like major advantages of the eeePC, could certainly be of the mobile/handheld type instead of the desktop/laptop type. The usual suspects: Symbian, Newton OS, Android, Zune, Palm OS, Cocoa Touch, embedded Linux, PlayStation Portable, Windows CE, and Nintendo DS. At a certain level of abstraction, there are so many commonalities between all of these that it doesn’t seem very efficient to invent a completely new GUI/OS “paradigm,” like OLPC’s Sugar was apparently trying to do.

The HftRoU require some form of networking or wireless connectivity feature. WiFi (802.11*), GSM, UMTS, WiMAX, Bluetooth… Doesn’t need to be extremely fast, but it should be flexible and it absolutely cannot be cost-prohibitive. IP might make much more sense than, say, SMS/MMS, but a lot can be done with any kind of data transmission between devices. XO-style mesh networking could be a very interesting option. As VoIP has proven, voice can efficiently be transmitted as data so “voice networks” aren’t necessary.

My sense is that a multitouch interface with an accelerometer would be extremely effective. Yes, I’m thinking of Apple’s Touch devices and MacBooks. As well as Microsoft’s Surface and Jeff Han’s Perceptive Pixel. One thing all of these have shown is how “intuitive” it can be to interact with a machine using gestures. Haptic feedback could also be useful but I’m not convinced it’s “there yet.”

I’m really not sure a keyboard is very important. In fact, I think that keyboard-focused laptops and tablets are the wrong basis for thinking about “handhelds for the rest of us.” Bear in mind that I’m not thinking about devices for would-be office workers or even programmers. I’m thinking about the broadest user base you can imagine. “The Rest of Us” in the sense of, those not already using computers very directly. And that user base isn’t that invested in (or committed to) touch-typing. Even people who are very literate don’t tend to be extremely efficient typists. If we think about global literacy rates, typing might be one thing which needs to be leapfrogged. After all, a cellphone keypad can be quite effective in some hands and there are several other ways to input text, especially if typing isn’t too ingrained in you. Furthermore, keyboards aren’t that convenient in multilingual contexts (i.e., in most parts of the world). I say: avoid the keyboard altogether, make it available as an option, or use a virtual one. People will complain. But it’s a necessary step.

If the device is to be used for voice communication, some audio support is absolutely required. Even if voice communication isn’t part of it (and I’m not completely convinced it’s the one required feature), audio is very useful, IMHO (I’m an aural guy). In some parts of the world, speakers are much favoured over headphones or headsets. But I personally wish that at least some HftRoU could have external audio inputs/outputs. Maybe through USB or an iPod-style connector.

A voice interface would be fabulous, but there still seem to be technical issues with both speech recognition and speech synthesis. I used to work in that field and I keep dreaming, like Bill Gates and others do, that speech will finally take the world by storm. But maybe the time still hasn’t come.

It’s hard to tell what size the screen should be. There probably needs to be a range of devices with varying screen sizes. Apple’s Touch devices prove that you don’t need a very large screen to have an immersive experience. Maybe some HftRoU screens should in fact be larger than those of an iPhone or iPod touch. Especially if people are to read or write long-form text on them. Maybe the eeePC had it right. Especially if the devices’ form factor is more like a big handheld than like a small subnotebook (i.e., slimmer than an eeePC). One reason form factor matters, in my mind, is that it could make the devices “disappear.” That, and the difference between having a device on you (in your pocket) and carrying a bag with a device in it. Form factor was a big issue with my Newton MessagePad 130. As the OLPC XO showed, cost and power consumption are also important issues regarding screen size. I’d vote for a range of screens between 3.5 inches (iPhone) and 8.9 inches (eeePC 900) with a rather high resolution. A multitouch version of the XO’s screen could be a major contribution.

In terms of both audio and screen features, some consideration should be given to adaptive technologies. Most of us take for granted that “almost anyone” can hear and see. We usually don’t perceive major issues in the fact that “personal computing” typically focuses on visual and auditory stimuli. But if these devices truly are “for the rest of us,” they could help empower visually- or hearing-impaired individuals, who are often marginalized. This is especially relevant in the logic of humanitarianism.

HftRoU need as much autonomy from a power source as possible. Both in terms of the number of hours devices can be operated without needing to be connected to a power source and in terms of flexibility in power sources. Power management is a major technological issue with portable, handheld, and mobile devices. Engineers are hard at work, trying to find as many solutions to this issue as they can. This was, obviously, a major area of research for the OLPC. But I’m not even sure the solutions they have found are the only relevant ones for what I imagine HftRoU to be.

GPS could have interesting uses, but doesn’t seem very cost-effective. Other “wireless positioning systems” (à la Skyhook) might represent a more rational option. Still, I think positioning systems are one of the next big things. Not only for navigation or for location-based targeting. But for a set of “unintended uses” which are the hallmark of truly disruptive technology. I still remember an article (probably in the venerable Wired magazine) about the use of GPS/GIS for research into climate change. Such “unintended uses” are, in my mind, much closer to the constructionist ideal than the OLPC XO’s unified design can ever get.

Though a camera seems to be a given in any portable or mobile device (even the OLPC XO has one), I’m not yet that clear on how important it really is. Sure, people like taking pictures or filming things. Yes, pictures taken through cellphones have had a lasting impact on social and cultural events. But I still get the feeling that the main reason cameras are included on so many devices is for impulse buying, not as a feature to be used so frequently by all users. Also, standalone cameras probably have a rather high level of penetration already and it might be best not to duplicate this type of feature. But, of course, a camera could easily be a differentiating factor between two devices in the same category. I don’t think that cameras should be absent from HftRoU. I just think it’s possible to have “killer apps” without cameras. Again, I’m biased.

Apart from networking/connectivity uses, Bluetooth seems like a luxury. Sure, it can be neat. But I don’t feel it adds that much functionality to HftRoU. Yet again, I could be proven wrong. Especially if networking and other inter-device communication are combined. At some abstract level, there isn’t that much difference between exchanging data across a network and controlling a device with another device.

Yes, I do realize I pretty much described an iPod touch (or an iPhone without camera, Bluetooth, or cellphone fees). I’ve been lusting over an iPod touch since September and it does colour my approach. I sincerely think the iPod touch could serve as an inspiration for a new device type. But, again, I care very little about which company makes that device. I don’t even care about how open the operating system is.

As long as our minds are open.