Category Archives: journalism

Twenty Years Online

This month marks the 20th anniversary of my first Internet account. I don’t remember the exact date but I know it was in late summer 1993, right before what became known as “Eternal September”. The Internet wasn’t new, but it still wasn’t on most people’s proverbial radar.

I had heard one of my professors, Kevin Tuite, talk about the Internet as a system through which people from all over the world were communicating. Among the examples Tuite gave of the possibilities offered by the ‘Net were conversations among people from former Soviet republics, during that period of broad transitions. As a specialist on Svaneti, in present-day Georgia, Kevin was particularly interested in those conversations.

During that fateful summer of ‘93, I was getting ready to begin the last year of my B.Sc. in anthropology, specializing in linguistic anthropology and ethnomusicology. As I had done during previous summers, I was working back of house at a French restaurant. But, in my free time, I was exploring a brand new world.

In retrospect, it might not be a complete coincidence that my then-girlfriend of four years left me during that Fall 1993 semester.

It started with a local BBS, WAJU (“We Are Joining You”). I’m not exactly sure when I got started, but I remember being on WAJU in July. I had first been lent a 300 baud modem but I quickly switched to a 2400 baud one. My current ISP plan is 15Mbps, literally 50,000 times faster than that original 300 baud connection.

By August 1993, thanks to the aforementioned Kevin Tuite, I was able to get an account on UdeM’s ERE network, meant for teaching and research (it stood for «Environnement de recherche et d’enseignement»). That network was running on SGI machines which weren’t really meant to handle large numbers of external connections. But it worked for my purposes of processing email (through Pine), Usenet newsgroups, FTP downloads (sometimes through Archie), IRC sessions, individual chats (through Talk), Gopher sites, and other things via Telnet. As much as possible, I did all of these things from campus, through one of the computer rooms, which offered amazingly fast connections (especially compared to my 2.4kbps modem). I spent enough time in those computer rooms that I still remember their distinct smell.

However, at some point during that period, I was able to hack a PPP connection through my ERE account. In fact, I ended up helping some other people (including a few professors) do the same. That meant we could use native applications to access the ’Net from home and, eventually, browse the Web graphically.

But I’m getting ahead of myself.

By the time I got online, NCSA Mosaic hadn’t been released. In fact, it took a little while before I even heard of the “World Wide Web”. I seem to remember that I only started browsing the Web in 1994. At the same time, I’m pretty sure one of my most online-savvy friends (likely Alex Burton or Martin Dupras) had told me about the Web as soon as version 1.0 of Mosaic was out, or even before.

The Web was a huge improvement, to be sure. But it was neither the beginning nor the end of the ‘Net, for those of us who had been there a little while. Yes, even a few months. Keep in mind that, at the time, there weren’t that many sites on the Web. Sure, most universities had a Web presence and many people with accounts on university networks had opportunities to create homepages. But there’s a reason Web directories (strongly associated with Yahoo! now, but quite common at the time) could exist. Pages were “static” and little of the Web was “social” at the time.

But the ’Net as a whole was very social. At least, for the budding ethnographer that I was, the rest of the ‘Net was a much more interesting context for observation than the Web. Especially newsgroups and mailing lists.

All the more so since the ‘Net was going through one of its first demographic explosions, with AOLers flooding in. Perhaps more importantly, newbie bashing was peaking and comments against AOL or other inexperienced “Netizens” were frequently heard. I personally heard a lot more from people complaining about AOL than from anyone actually accessing the ’Net through AOL.

One thing about the influx that was clear, though, is that the “democratization” was being accompanied by commercialization. A culture of open sharing was being replaced by corporate culture. Free culture was being preempted by a culture of advertising. The first .com domains were almost a novelty, in a ‘Net full of country-specific domains along with lots of .edu, .net, .org, .gov, and even .mil servers.

The ‘Net wasn’t yet about “paying for content”. That would come a few years later, when media properties pushed “user-generated content” into its own category (instead of representing most of what was available online). The ‘Net of the mid-1990s was about gaining as much attention as possible. We’re still in that mode, of course. But the contrast was striking. Casual conversations were in danger of getting drowned by megaphones. The billboard overtook the café. With the shift, a strong sense of antagonism emerged. The sense of belonging to a community of early adopters increased with the sense of being attacked by old “media types”. People less interested in sharing knowledge and more interested in conveying their own corporate messages. Not that individuals had been agenda-free until that point. But there was a big difference between geeks arguing about strongly-held opinions and “brands” being pushed onto the scene.

Early on, the thing I thought the Internet would most likely disrupt was journalism. I had a problem with journalism so, when I saw how the ‘Net could provide increased access to information, I was sure it would imply a reappropriation of news by people themselves, with everything this means for the spread of critical thinking skills. Some of this has happened, to an extent. But media consolidation probably played a more critical role in journalism’s current crisis than online communication did. Although I like to think of these things as complex systems of interrelated trends and tendencies rather than straightforward causal scenarios.

In such a situation, the ‘Net becoming more like a set of conventional mass media channels was bad news. More specifically, the logic of “getting your corporate message across” was quite off-putting to a crowd used to more casual (though often heated and loud) conversations. What comes to mind is a large agora, with thousands of people holding thousands of separate conversations, being taken over by a massive PA system. Regardless of the content of the message being broadcast over that PA system, the effect is beyond annoying.

Through all of this, I distinctly remember mid-April 1994. That’s when the Internet changed. One might say it never recovered.

At that time, two unscrupulous lawyers sent the first commercial spam to Usenet newsgroups. They apparently made a rather large sum of money from their action but, more importantly, they ended the “Netiquette” era. From that point on, a conflict has existed between those who use the ‘Net and those who abuse it. Yes, strong words. But I sincerely think they’re fitting. Spammers are like the Internet’s cancer. They may “serve a function” and may inspire awe. Mostly, though, they’re “cells gone rogue”. Not that I’m saying the ‘Net was free of disease before this “Green Card lottery” moment. For one thing, it’s possible (though unlikely) that flamewars were somewhat more virulent then than they are now. It’s just that the list of known online woes expanded quickly with the addition of cancer-like diseases. From annoying Usenet spam, we went rather rapidly to all sorts of malevolent large-scale actions. Whatever we end up doing online, we carry the shadow of such actions.

Despite how it may sound, my stance isn’t primarily moral. It’s really about a shift from a “conversational” mode to a “mass media” one. Spammers exploited Usenet by using it as a “mass media” channel, at a time when most people online were using it as a large set of “many-to-many” channels.

The distinction between Usenet spam and legitimate advertising may be extremely important to a very large number of people. But the gates spammers opened are the same ones advertisers have been using ever since.

My nostalgia for the early Internet has a lot to do with this shift. I know we gained a lot in the meantime. I enjoy many benefits from the “democratization” of the ‘Net. I wouldn’t trade the current online services and tools for those I was using in August 1993. But I do long for a cancer-free Internet.

Wearable Hub: Getting the Ball Rolling

Statement

After years of hype, wearable devices are happening. What wearable computing lacks is a way to integrate devices into a broader system.

Disclaimer/Disclosure/Warning

  • For the past two months or so, I’ve been taking notes about this “wearable hub” idea (it started around CES, when wearable devices like the Pebble and Google Glass were being discussed with more intensity). At this point, I have over 3000 words in notes, which probably means that I’d have enough material for a long essay. This post is just a way to release a few ideas and to “think aloud” about what wearables may mean.
  • Some of these notes have to do with the fact that I started using a few wearable devices to monitor my activities, after a health issue pushed me to start doing some exercise.
  • I’m not a technologist, nor do I play one on this blog. I’m primarily an ethnographer, with diverse interests in technology and its implications for human beings. I do research on technological appropriation and some of the courses I teach relate to the social dimensions of technology. Some of the approaches to technology that I discuss in those courses relate to constructionism and Actor-Network Theory.
  • I consider myself a “geek ethnographer” in the sense that I take part in geek culture (and have come out as a geek) but I’m also an outsider to geekdom.
  • Contrary to the likes of McLuhan, Carr, and Morozov, my perspective on technology and society is non-deterministic. The way I use them, “implication” and “affordance” aren’t about causal effects or, even, about direct connections. I’m not saying that society is causing technology to appear nor am I proposing a line from tools to social impacts. Technology and society are in a complex system.
  • Further, my approach isn’t predictive. I’m not saying what will happen based on technological advances nor am I saying what technology will appear. I’m thinking about the meaning of technology in an intersubjective way.
  • My personal attitude on tools and gadgets is rather ambivalent. This becomes clear as I go back and forth between techno-enthusiastic contexts (where I can almost appear like a Luddite) and techno-skeptical contexts (where some might label me as a gadget freak). I integrate a number of tools in my life but I can be quite wary about them.
  • I’m not wedded to the ideas I’m putting forth, here. They’re just broad musings of what might be. More than anything, I hope to generate thoughtful discussion. That’s why I start this post with a broad statement (not my usual style).
  • Of course, I know that other people have had similar ideas and I know that a concept of “wearable hub” already exists. It’s obvious enough that it’s one of these things which can be invented independently.

From Wearables to Hubs

Back in the 1990s, “wearable computing” became something of a futuristic buzzword, often having to do with articles of clothing. There have been many experiments and prototypes converging on an idea that we would, one day, be able to wear something resembling a full computer. Meanwhile, “personal digital assistants” became something of a niche product and embedded systems became an important dimension of car manufacturing.

Fast-forward to 2007, when a significant shift in the use of smartphones occurred. Smartphones existed before that time, but their usages, meanings, and positions in the public discourse changed quite radically around the time of the iPhone’s release. Not that the iPhone itself “caused a smartphone revolution” or that smartphone adoption suddenly reached a “tipping point”. I conceive of this shift as a complex interplay between society and tools. Not only more Kuhn than Popper, but more Latour than Kurzweil.

Smartphones, it may be argued, “happened”.

Without being described as “wearable devices”, smartphones started fulfilling some of the functions people might have assigned to wearables. The move was subtle enough that Limor Fried recently described it as a realization she’s been having. Some tech enthusiasts may be designing location-aware purses and heads-up displays in the form of glasses, but smartphones are already doing a lot of the things wearables were supposed to do. Many people “wear” smartphones at most times during their waking lives and these Internet-connected devices are full of sensors. With the proliferation of cases, one might even perceive some of them as fashion accessories, like watches and sunglasses.

Where smartphones become more interesting, in terms of wearable computing, is as de facto wearable hubs.

My Wearable Devices

Which brings me to mention the four sensors I’ve been using more extensively during the past two months:

Yes, these all have to do with fitness (and there’s quite a bit of overlap between them). And, yes, I started using them a few days after the New Year. But it’s not about holiday gifts or New Year’s resolutions. I’ve had some of these devices for a while and decided to use them after consulting with a physician about hypertension. Not only have they helped me quite a bit in solving some health issues, but these devices got me to think.

(I carry several other things with me at most times. Some of my favourites include Tenqa REMXD Bluetooth headphones and the LiveScribe echo smartpen.)

One aspect is that they’re all about the so-called “quantified self”. As a qualitative researcher, I tend to be skeptical of quants. In this case, though, the stats I’m collecting about myself fit with my qualitative approach. Along with quantitative data from these devices, I’ve started collecting qualitative data about my life. The next step is to integrate all those data points automatically.
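As a rough illustration of what that integration could look like (a minimal sketch with made-up device names and data points, not the format of any existing app), the core of it is just merging two timestamped streams into a single timeline:

    from datetime import datetime

    # Hypothetical readings from a fitness sensor (quantitative).
    readings = [
        {"time": datetime(2013, 2, 1, 8, 30), "steps": 420},
        {"time": datetime(2013, 2, 1, 12, 0), "steps": 2310},
    ]

    # Hypothetical field notes about daily life (qualitative).
    notes = [
        {"time": datetime(2013, 2, 1, 12, 5),
         "note": "Brisk walk to the market; felt energized."},
    ]

    # Merge both streams into one timeline, tagging each entry with its
    # source so quantitative and qualitative data stay distinguishable.
    timeline = sorted(
        [dict(r, source="sensor") for r in readings]
        + [dict(n, source="journal") for n in notes],
        key=lambda entry: entry["time"],
    )

    for entry in timeline:
        print(entry["time"], entry["source"])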

These sensors are also connected to “gamification”, a tendency I find worrisome, preferring playfulness. Though game mechanics are applied to the use of these sensors, I choose to rely on my intrinsic motivation, not paying much attention to scores and badges.

But the part which pushed me to start taking the most notes was that all these sensors connect with my iOS (and Android) devices. And this is where the “wearable hub” comes into play. None of these devices is autonomous. They’re all part of my personal “arsenal”, the equipment I have on me on most occasions. Though there are many similarities between them, they still serve different purposes, which are much more limited than those “wearable computers” might have been expected to serve. Without a central device serving as a type of “hub”, these sensors wouldn’t be very useful. This “hub” need not be a smartphone, despite the fact that, by default, smartphones are taken to be the key piece in this kind of setup.

In my personal scenario, I do use a smartphone as a hub. But I also use tablets. And I could easily use an existing device of another type (say, an iPod touch), or even a new type of device meant to serve as a wearable hub. Smartphones’ “hub” affordances aren’t exclusive.
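To make the “hub” role concrete, here’s a minimal sketch (in Python, with hypothetical sensor names and canned values; no real Bluetooth pairing involved): several limited-purpose sensors, none of them autonomous, pooling their readings through one central device.

    # Toy model of a wearable hub: each sensor does one narrow job; the
    # hub (smartphone, tablet, iPod touch...) collects and integrates.
    class Sensor:
        def __init__(self, name, value):
            self.name = name
            self.value = value

        def read(self):
            # A real device would report a measurement over Bluetooth;
            # this placeholder just returns a canned value.
            return {"sensor": self.name, "value": self.value}

    class WearableHub:
        def __init__(self, sensors):
            self.sensors = sensors

        def poll(self):
            # Gather one reading from every paired sensor.
            return [s.read() for s in self.sensors]

    hub = WearableHub([
        Sensor("pedometer", 5200),
        Sensor("heart rate monitor", 72),
        Sensor("sleep tracker", 7.5),
    ])
    print(hub.poll())

The point of the sketch is that the hub is just a role: anything which can pair with the sensors and pool their data can play it.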

From Digital Hub to Wearable Hub

Most of the devices which would likely serve as hubs for wearable sensors can be described as “Post-PC”. They’re clearly “personal” and they’re arguably “computers”. Yet they’re significantly different from the “Personal Computers” which were so important at the end of the last century (desktop and laptop computers not used as servers, regardless of the OS they run).

Wearability is a key point, here. But it’s not just a matter of weight or form factor. A wearable hub needs to be wireless in at least two important ways: independent from a power source and connected to other devices through radio waves. The fact that they’re worn at all times also implies a certain degree of integration with other things carried throughout the day (wallets, purses, backpacks, pockets…). These devices may also be more “personal” than PCs because they may be more apparent and more amenable to customization than PCs.

Smartphones fit the bill as wearable hubs. Their form factors and battery life make them wearable enough. Bluetooth (or ANT+, Nike+, etc.) has been used to pair them wirelessly with sensors. Their connectivity to GPS and cellular networking as well as their audio and visual i/o can have interesting uses (mapping a walk, data updates during a commute, voice feedback…). And though they’re far from ubiquitous, smartphones have become quite common in key markets.

Part of the reason I keep thinking about “hubs” has to do with comments made in 2001 by then Apple CEO Steve Jobs about the “digital lifestyle” age in “PC evolution” (video of Jobs’s presentation; as an anthropologist, I’ll refrain from commenting on the evolutionary analogies):

We believe the PC, or more… importantly, the Mac can become the “digital hub” of our emerging digital lifestyle, with the ability to add tremendous value to … other digital devices.

… like camcorders, portable media players, cellphones, digital cameras, handheld organizers, etc. (Though they weren’t mentioned, other peripherals like printers and webcams also connect to PCs.)

The PC was thus going to serve as a hub, “not only adding value to these devices but interconnecting them, as well”.

At the time, several key affordances distinguished PCs from those other digital devices:

  • Big screen affording more complex user interfaces
  • Large, inexpensive hard disk storage
  • Burning DVDs and CDs
  • Internet connectivity, especially broadband
  • Running complex applications (including media processing software like the iLife suite)

Though Jobs pinpointed iLife applications as the basis for this “digital hub” vision, it sounds like FireWire was meant to be an even more important part of it. Of course, USB has supplanted FireWire in most use cases. It’s interesting, then, to notice that Apple only recently started shipping Macs with USB 3. In fact, DVD burning is absent from recent Macs. In 2001, the Mac might have been at the forefront of this “digital lifestyle” age. In 2013, the Mac has moved away from its role as “digital hub”.

In the meantime, the iPhone has become one of the best known examples of what I’m calling “wearable hubs”. It has a small screen and small, expensive storage (by today’s standards). It also can’t burn DVDs. But it does have nearly-ubiquitous Internet connectivity and can run fairly complex applications, some of which are adapted from the iLife suite. And though it does have wired connectivity (through Lightning or the “dock connector”), its main hub affordances have to do with Bluetooth.

It’s interesting to note that the same Steve Jobs, who used the “digital hub” concept to explain that the PC wasn’t dead in 2001, is partly responsible for popularizing the concept of “post-PC devices” six years later. One might perceive hypocrisy in this much delayed apparent flip-flop. On the other hand, Steve Jobs’s 2007 comments (video) were somewhat nuanced, as to the role of post-PC devices. What’s more interesting, though, is to think about the implications of the shift between two views of digital devices, regardless of Apple’s position through that shift.

Some post-PC devices (including the iPhone, until quite recently) do require a connection to a PC. In this sense, a smartphone might maintain its position with regards to the PC as digital hub. Yet, some of those devices are used independently of PCs, including by some people who never owned PCs.

Post-Smartphone Hubs

It’s possible to imagine a wearable hub outside of the smartphone (and tablet) paradigm. While smartphones are a convenient way to interconnect wearables, their hub-related affordances still sound limited: they lack large displays and their storage space is quite expensive. Their battery life may also be something to consider in terms of serving as hubs. Their form factors make some sense, when functioning as phones. Yet they have little to do with their use as hubs.

Part of the realization, for me, came from the fact that I’ve been using a tablet as something of an untethered hub. Since I use Bluetooth headphones, I can listen to podcasts and music while my tablet is in my backpack without being entangled in a cable. Sounds trivial but it’s one of these affordances I find quite significant. Delegating music playing functions to my tablet relates in part to battery life and use of storage. The tablet’s display has no importance in this scenario. In fact, given some communication between devices, my smartphone could serve as a display for my tablet. So could a “smartwatch” or “smartglasses”.

The Body Hub

Which led me to think about other devices which would work as wearable hubs. I originally thought about backpackable and pocketable devices.

But a friend had a more striking idea:

Under Armour’s Recharge Energy Suit may be an extreme version of this, one which would fit nicely among things Cathi Bond likes to discuss with Nora Young on The Sniffer. Nora herself has been discussing wearables on her blog as well as on her radio show. Sure, part of this concept is quite futuristic. But a sensor mesh undershirt is a neat idea for several reasons.

  • It’s easy to think of various sensors it may contain.
  • Given its surface area, it could hold enough battery power to supplement other devices.
  • It can be quite comfortable in cold weather and might even help diffuse heat in warmer climates.
  • Though wearable, it need not be visible.
  • Thieves would probably have a hard time stealing it.
  • Vibration and haptic feedback on the body can open interesting possibilities.

Not that it’s the perfect digital hub, and I’m sure there are multiple objections to a connected undershirt (including issues with radio signals). But I find the idea rather fun to think about, partly because it’s so far away from the use of phones, glasses, and watches as smart devices.

Another thing I find neat, and it may partly be a coincidence, is the very notion of a “mesh”.

The Wearable Mesh

Mesh networking is a neat concept, one which generates more hype than practical uses. As an alternative to WiFi access points and cellular connectivity, it’s unclear whether it will ever “take the world by storm”. But as a way to connect personal devices, it might have some potential. After all, as Bernard Benhamou recently pointed out on France Culture’s Place de la toile, the Internet of Things may not require always-on, full-bandwidth connectivity. Typically, wearable sensors use fairly little bandwidth or only use it for limited amounts of time. A wearable mesh could connect wearable devices to one another while also exchanging data through the Internet itself.

Or with local devices. Smart cities, near field communication, and digital appliances occupy interesting positions among widely-discussed tendencies in the tech world. They may all have something to do with wearable devices. For instance, data exchanged between transit systems and their users could go through wearable devices. And while mobile payment systems can work through smartphones and other cellphones, wallet functions can also be fulfilled by other wearable devices.
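As a toy sketch of that relaying idea (in Python; this isn’t any real mesh protocol, just hop-by-hop forwarding among hypothetical devices):

    # Toy wearable mesh: nodes relay small readings hop by hop until a
    # node with Internet access (the gateway) can upload them.
    class Node:
        def __init__(self, name, has_internet=False):
            self.name = name
            self.has_internet = has_internet
            self.next_hop = None  # next node in this simplified chain

        def relay(self, reading):
            if self.has_internet:
                print(f"{self.name}: uploading {reading}")
            elif self.next_hop is not None:
                print(f"{self.name}: forwarding to {self.next_hop.name}")
                self.next_hop.relay(reading)

    undershirt = Node("sensor undershirt")
    watch = Node("smartwatch")
    phone = Node("smartphone", has_internet=True)
    undershirt.next_hop = watch
    watch.next_hop = phone

    undershirt.relay({"heart_rate": 72})

A real mesh would discover routes dynamically instead of having them hand-wired, but the division of labour is the same: low-bandwidth nodes only need a path to one gateway.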

Alternative Futures

Which might provide an appropriate segue into the ambivalence I feel toward the “wearable hub” concept I’m describing. Though I propose these ideas as if I were enthusiastic about them, they all give me pause. As a big fan of critical thinking, I like to think about “what might be” to generate questions and discussions exposing a diversity of viewpoints about the future.

Mass media discussions about these issues tend to focus on such things as privacy, availability, norms, and usefulness. Google Glass has generated quite a bit of buzz about all four. Other wearables may mainly raise issues for one or two of these broad dimensions. But the broad domain of wearable computing raises a lot more issues.

Technology enthusiasts enjoy discussing issues through the dualism between dystopia and utopia. An obvious issue with this dualism is that humans disagree about the two categories. Simply put, one person’s dystopia can be another person’s utopia, not to mention the nuanced views of people who see complex relationships between values and social change.

In such a context, a sociologist’s reflex may be to ask about the implications of these diverse values and opinions. For instance:

  • How do people construct these values?
  • Who decides which values are more important?
  • How might social groups cope with changes in values?

Discussing these issues and more, in a broad frame, might be quite useful. Some of the trickiest issues are raised after some changes in technology have already happened. From writing to cars, any technological context has unexpected implications. An ecological view of these implications could broaden the discussion.

I tend to like the concept of the “drift-off moment”, during which listeners (or readers) start thinking about the possibilities afforded by a new tool (or concept). In the context of a sales pitch, the idea is that these possibilities are positive: a potential buyer is thinking about the ways she might use a newfangled device. But I also like the deeper process of thinking about all sorts of implications, regardless of their value.

So…

What might be the implications of a wearable hub?

Bean Counters and Ecologists

[So many things in my drafts, but this one should be quick.]

I recently met someone who started describing their restaurant after calling it a “café”. The “pitch” revolved around ethical practices, using local products, etc. As both a coffee geek and an ethnographer, my simple question was: “Which coffee do you use?” Turns out, they’re importing coffee from a multinational corporation. “Oh, but, they’re lending us an expensive espresso machine for free! And they have fair-trade coffee!”

Luckily, we didn’t start talking about “fair trade”. And this person was willing to reflect upon the practices involved, including the analogy with Anheuser-Busch or Coca-Cola. We didn’t get further into the deeper consequences of the resto’s actions, but the “seed” has been planted.

Sure, it’s important to focus on your financials and there’s nothing preventing a business from being both socially responsible and profitable. It just requires a shift in mindset. Small, lean, nimble businesses are more likely to do it than big, multinational corporate empires…

…which leads me to Google.

Over the years since its IPO, Google has attracted its share of praise and criticism. Like any big, multinational corporate empire. In any sector.

Within the tech sector, the Goog‘ is often compared with Microsoft, Facebook, Twitter, and Apple. All of these corporate entities have been associated in some people’s minds with some specific issue, from child labour and failure to protect users’ privacy to anticompetitive practices (the tech equivalent of free fridges and espresso machines). The issues are distinct and tech enthusiasts spend a large amount of time discussing which one is worse. Meanwhile, we’re forgetting a number of larger issues.

Twitter is an interesting example, here. The service drew its value from being at the centre of an ecosystem. As with any ecosystem, numerous interactions among many different members produce unexpected and often remarkable results. As the story goes, elements like hashtags and “@-replies” were invented by users and became an important part of the system. Third-party developers were instrumental in extending Twitter’s reach outside of its original confines. Though most of the original actors have since left the company, the ecosystem has maintained itself over the years.

When Twitter started changing the rules concerning its API, it shook the ecosystem. Sure, the ecosystem will maintain itself, in the end. But it’s nearly impossible to predict how it will change. For people at Twitter, it must have been obvious that the first changes were a warning shot to scare away those they didn’t want in their ecosystem. But, to this day, there are people who depend on Twitter, one way or another.

Google Reader offers an interesting case. The decision to kill it might have been myopic and its death might have a domino effect.

The warning shot was ambiguous, but the “writing was on the wall”. Among potential consequences of the move, the death of RSS readers was to be expected. One might also expect users of feedreaders to be displeased. In the end, the ecosystem will maintain itself.

Chances are, feedreading will be even more marginalized than it’s been and something else might replace it. Already, many people have been switching from feedreading to using Twitter as a way to gather news items.

What’s not so well-understood is the set of indirect consequences, further down the line. Again, domino effect. Some dominoes are falling in the direction of news outlets which have been slow to adapt to the ways people create and “consume” news items. Though their ad-driven models may sound similar to Google’s, and though feedreading might not be a significant source of direct revenue, the death of feedreaders may give way to the birth of new models for news production and “consumption” which might destabilize them even further. Among the things I tag as #FoJ (“Future of Journalism”) are several pieces of a big puzzle which seems misunderstood by news organizations.

There are other big dominoes which might fall from the death of Google Reader. Partly because RSS itself is part of a whole ecosystem. Dave Winer and Aaron Swartz have been major actors in the technical specifications of RSS. But Chris Lydon and people building on calendar syndication are also part of the ecosystem. In business-speak, you might call them “stakeholders”. But thinking about the ecosystem itself leads to a deeper set of thoughts, beyond the individuals involved. In the aftermath of Aaron Swartz’s premature death, it may be appropriate to point out that the ecosystem is more than the sum of its parts.

As I said on a service owned by another widely-criticized corporate empire:

Many of us keep saying that Google needs to listen to its social scientists. It also needs to understand ecology.

The Magazine and Social Media

Megaphone red by Adamantios (via Wikimedia Commons; GFDL, CC-BY-SA)

The following is my App Store review of The Magazine, a Newsstand offering by Instapaper developer Marco Arment.

Though I like Marco Arment’s work and there’s nothing specifically wrong about this implementation of the magazine model, I don’t find the magazine model particularly useful, at this point. And, make no mistake. The Magazine is indeed a magazine.

Oh, sure, this format overcomes several of the limitations set by advertising-based models and hierarchical boards. But it maintains something of the magazine logic: a tight bundle of a few articles authored by people connected through the same “editorial intent”. It’s not a conversation with the public. In this first issue, it’s not even a conversation among co-authors.

The “linked list” aspect of the “Fireball Format” (from John Gruber’s Daring Fireball media property) is described in one of the pieces in this first issue. Other distinguishing factors of the “Fireball Format” aren’t discussed in that same piece. They include a “no comment” policy which has become rather common among high-profile blogs. Unlike most blogs of the pioneer era in social media, these blogs don’t allow readers to comment directly.

A justification for this policy is that comments can be posted elsewhere. And since most of these bloggers are active on microblogging platforms like App.net and Twitter, there’s a chance that a comment might be noticed by those authors. What’s missing, though, is the sense of belonging which bloggers created among themselves before MySpace.

In other words, now that there are large social networking services online, the social aspects of blogging have been deemphasized and authorial dimensions have come to prominence. Though Arment dislikes the word, blog authors have become “brands”. It still works when these authors are in conversation with one another, when there’s a likelihood of a “followup” (FU in 5by5 parlance), when authors are responsive.

None of that interaction potential seems to be part of the core model for The Magazine. You can scream at your iOS device all you want, Jason Snell will probably not respond to you in a future edition of The Magazine. You can attempt dialogue on Twitter, but any conversation you may succeed in starting there is unlikely to have any impact on The Magazine. You’re talking with authors, now, not with members of a community.

With The Magazine, the transition from social to authorial is almost complete. Not only are posts set apart from the conversation but the editorial act of bundling posts together brings back all the problems media scholars have been pointing out for the past several decades. The issue at stake isn’t merely the move to online delivery. It’s the structure of authority and the one-to-many broadcast-style transmission. We’ve taken a step back.

So, while The Magazine has certain technical advantages over old school magazines like The Daily and Wired, it represents a step away from social media and towards mass media. Less critical thinking, more pedestals.

A new model could emerge using the infrastructure and business model that Arment built. But it’d require significant work outside of the application. The Feature might contribute something to this new model, especially if the way posts are bundled together became more flexible.

So, all in all, I consider The Magazine to be a step in the wrong direction by someone whose work I respect.

Good thing we still have podcasts.

Activism and Journalism

In yesterday’s “Introduction to Society” class, we discussed a number of things related to activism, journalism, labour issues, and even Apple and Foxconn (along with slacktivism, Kony 2012, mass media, moral entrepreneurs, and Wal-Mart).

This discussion was sparked, in part, from a student’s question:

What good are the findings the sociologists obtain if the sociologists themselves are passive to the issues observed?

Very good question, and I feel that the discussion we had in class merely scratched the surface of the issue.

My response could have related to my current work, which I have mentioned in class on several occasions. These days, an important part of my work outside of the Ivory Tower has to do with community organizations. More specifically, I do fieldwork for Communautique, whose mission is to:

Support civic participation by promoting information literacy, appropriation of information and communications technologies and contribution to their development.

Though I’m no activist, I see a clear role for activism and my work directly supports a form of activism. The goal here is social change, toward increased participation by diverse citizens. Thankfully, this is no “us/them” campaign. There’s no demonization, here. Many of us may disagree on a course of action, but inclusion, not confrontation, is among this work’s main goals.

I sincerely think that my work, however modest, may have a positive impact. Not that I delude myself into thinking that there’s a “quick fix” to problems associated with social exclusion. But I see a fairly clear bifurcation between paths and I choose one which might lead to increased inclusiveness.

I didn’t talk about my work during our classroom discussion. Though I love to talk about it, I try to make these discussions as interactive as possible. Even when I end up talking more than anybody else, I do what I can not to lead the discussion in too specific a direction. So, instead of talking about Communautique, we talked about Foxconn. I’m pretty sure I brought it up, but it was meant as a way to discuss a situation with which students can relate.

Turns out, there was an ideal case to discuss many of these themes. Here’s a message about this case that I just sent to the class’s forum:

Some of you might have heard of this but I hadn’t, before going to class. Sounds to me like it brings together several points we’ve discussed yesterday (activism, journalism, message dissemination, labour conditions, Foxconn, Apple…). It also has a lot to do with approaches to truth, which do tend to differ.


So… An episode of This American Life about Foxconn factories making Apple products contained a number of inaccurate things, coming from Mike Daisey, a guy who does monologues as stage plays. These things were presented as facts (and had gone through an elaborate “fact-checking” process), and Daisey defends them as theatre, meant to make people react.


Here’s a piece about it, from someone who was able to pinpoint some inaccuracies: “An acclaimed Apple critic made up the details”.


The retraction from the team at This American Life took a whole show, along with an apparently difficult blogpost.

Interesting stuff, if you ask me. Especially since people might argue that the whole event may negatively impact the cause. After all, the problems of factory workers in China may appeal to more than people’s quickest emotional responses. Though I’m a big fan of emotions, I also think there’s an opportunity to discuss these issues thoughtfully and critically. The issue goes further than Apple or even Foxconn. And it has a lot to do with Wallerstein’s “World Systems Theory”.


Anyhoo… Just thought some of you may be interested.

“Booth Babe” Controversy

I posted the following to the class forums for my two sections of SOCI203 “Introduction to Society”.

This might be a useful context to discuss journalism, gender issues, feminism as equality between genders, and feminist sociology.

Some context…

As Wikipedia says, Violet Blue (her real name) is an author and sex educator.

(Blue’s main site is somewhat NSFW (“Not Safe For Work”, meaning it contains some potentially-offensive material), so I won’t link to it in this context, since the point isn’t about risqué blogging.)

Blue has a column about technology and, as far as I can tell from mentions of her name in the “geek scene”, her reputation is quite positive overall.

Like many others, Blue has issues with what she has called “booth babes”. As stated in HollyHen’s aforelinked blog comment, Blue’s description of said “booth babes” specifically paints them as women whose sexuality, sexiness, or sexual attributes are exploited for marketing purposes during trade shows.

The controversy erupted (!) over a picture labeled “The Saddest Booth Babe In The World”, which Blue posted in relation to a blogpost she wrote about a Mac-centric trade show. Reactions to that picture came quickly, especially from people who were questioning Blue’s labeling of someone in that picture as a “Booth Babe”. As, again, HollyHen said, it’s hard to interpret anyone in that picture as a “Booth Babe” and there’s even something strange about using such a label in this context.

Where it gets perhaps more interesting (or, at least, sadder) is that the woman labeled as a “Booth Babe” in the picture is likely to be a software developer, and Blue has refrained from apologizing for calling her a “sad Booth Babe”. Maybe the label isn’t slanderous or even insulting, in Blue’s mind. But the overall feeling from many readers is that there’s a missed opportunity here, especially since Blue didn’t dare talk to the subject of her picture.

Instead, Blue has taken a very defensive stance.

I eventually became aware of the controversy through Mac-centric blogs, first via John Gruber, then via Shawn King. Both King and Gruber have posted followup comments about the controversy. (King’s followup is clearly sarcastic and includes some comments people may easily find offensive.) In my experience, Mac-centric bloggers and several of their readers tend to go through a fairly unique dynamic by which key figures in that scene are frequently defended vigorously in something of a counterattack. In many contexts, it can indeed feel like a “pile on” effect. But I haven’t noticed any occasion where claiming to be the victim of a Mac-centric pile-on has had an overall positive effect on the conversation or on the person’s overall reputation.

(By the way, what I call “Mac-centric” blogging includes some work by people who have been labeled “Apple fanboys”, but my labeling isn’t meant to carry any specific connotation, whether positive or negative. I just mean people who write about diverse issues using the Mac and other Apple products as a basis for a number of their comments. In journalistic terms, you could say that these are people who have Apple as their “beat”.)

So… Where does that leave us? I already gave something of my opinion about this. I do think the “sad Booth Babe” label was negative, that it could easily be taken as an insult, and that it seems ill-suited as a description of a software developer who holds a booth at a trade show in order to show off her work. Even if it turns out that the woman in the picture isn’t the Hungarian developer people surmise she might be, I do find it strange that Violet Blue would use her image as a representation of a “sad Booth Babe”. While the label isn’t as negative as, say, “bimbo” or “ditzy blonde”, I have to agree with HollyHen and others that using it in the caption of that picture has little positive impact on discussion of the issues at hand (the exploitation of women to sell computer-related products and services).

But you may disagree.

So, let me know.

Intimacy, Network Effect, Hype

Is “intimacy” a mere correlate of the network effect?

Can we use the network effect to explain what has been happening with Quora?

Is the Quora hype related to network effect?

I really don’t feel a need to justify my dislike of Quora. Oh, sure, I can explain it. At length. Even on Quora itself. And elsewhere. But I tend to sense some defensiveness on the part of Quora fans.

[Speaking of fans, I have blogposts on fanboism lying in my head, waiting to be hatched. Maybe this will be part of it.]

But the important point, to me, isn’t about whether or not I like Quora. It’s about what makes Quora so divisive. There are people who dislike it and there are some who defend it.

Originally, I was only hearing from contacts and friends who just looooved Quora. So I was having a “Ionesco moment”: why is it that seemingly “everyone” who uses it loves Quora when, to me, it represents such a move in the wrong direction? Is there something huge I’m missing? Or has that world gone crazy?

It was a surreal experience.

And while I’m all for surrealism, I get this strange feeling when I’m so unable to understand a situation. It’s partly a motivation for delving into the issue (I’m surely not the only ethnographer to get this). But it’s also unsettling.

And, for Quora at least, this phase seems to be over. I now think I have a good idea as to what makes for such a difference in people’s experiences with Quora.

It has to do with the network effect.

I’m sure some Quora fanbois will disagree, but it’s now such a clear picture in my mind that it gets me into the next phase. Which has little to do with Quora itself.

The “network effect” is the kind of notion which is so commonplace that few people bother explaining it outside of introductory courses (same thing with “group forming” in social psychology and sociology, or preferential marriage patterns in cultural anthropology). What someone might call (perhaps dismissively): “textbook stuff.”

I’m completely convinced that there’s a huge amount of research on the network effect, but I’m also guessing that few people look it up. And I include myself in that accusation. Ever since I first heard of it (in 1993, or so), I’ve rarely looked at explanations of it and I actually don’t care about the textbook version of the concept. And I won’t “look it up.” I’m more interested in diverse usage patterns related to the concept (I’m a linguistic anthropologist).

So, the version I first heard (at a time when the Internet was off most people’s radar) was something like: “in networked technology, you need critical mass for the tools to become truly useful. For instance, the telephone has no use if you’re the only one with one and it has only very limited use if you can only call a single person.” Simple to the point of being simplistic, but a useful reminder.
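For what it’s worth, that simple version has a common quantitative gloss, often labeled “Metcalfe’s law”: among n users, the number of possible pairwise connections is n(n-1)/2, so potential value grows much faster than the userbase itself. A tiny illustration:

    # Possible pairwise connections among n users of a networked technology.
    def pairwise_links(n):
        return n * (n - 1) // 2

    print([pairwise_links(n) for n in (1, 2, 10, 100)])  # [0, 1, 45, 4950]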

Over the years, I’ve heard and read diverse versions of that same concept, usually in more sophisticated form, but usually revolving around the same basic idea that there’s a positive effect associated with broader usage of some networked technology.

I’m sure specialists have explored every single implication of this core idea, but I’m not situating myself as a specialist of technological networks. I’m into social networks, which may or may not be associated with technology (however defined). There are social equivalents of the “network effect” and I know some people are passionate about those. But I find that it’s quite limiting to focus so exclusively on quantitative aspects of social networks. What’s so special about networks, in a social science perspective, isn’t scale. Social scientists are used to working with social groups at any scale and we’re quite aware of what might happen at different scales. But networks are fascinating because of different features they may have. We may gain a lot when we think of social networks as acephalous, boundless, fluid, nameless, indexical, and impactful. [I was actually lecturing about some of this in my “Intro to soci” course, yesterday…]

So, from my perspective, “network effect” is an interesting concept when talking about networked technology, in part because it relates to the social part of those networks (innovation happens mainly through technological adoption, not through mere “invention”). But it’s not really the kind of notion I’d visit regularly.

This case is somewhat different. I’m perceiving something rather obvious (and which is probably discussed extensively in research fields which have to do with networked technology) but which strikes me as missing from some discussions of social networking systems online. In a way, it’s so obvious that it’s kind of difficult to explain.

But what’s coming up in my mind has to do with a specific notion of “intimacy.” It’s actually something which has been on my mind for a while and it might still need to “bake” a bit longer before it can be shared properly. But, like other University of the Streets participants, I perceive the importance of sharing “half-baked thoughts.”

And, right now, I’m thinking of an anecdotal context which may get the point across.

Given my attendance policy, there are class meetings during which a rather large proportion of the class is missing. I tend to call this an “intimate setting,” though I’m aware that it may have different connotations to different people. From what I can observe, people in class get the point. The classroom setting is indeed changing significantly and it has to do with being more “intimate.”

Not that we’re necessarily closer to one another physically or intellectually. It need not be a “bonding experience” for the situation to be interesting. And it doesn’t have much to do with “absolute numbers” (a classroom with 60 people is relatively intimate when the usual attendance is close to 100; a classroom with 30 people feels almost overwhelming when only 10 people were showing up previously). But there’s an interesting phenomenon going on when there are fewer people than usual, in a classroom.

Part of this phenomenon may relate to motivation. In some ways, one might expect that those who are attending at that point are the “most dedicated students” in the class. This might be a fairly reasonable assumption in the context of a snowstorm but it might not work so well in other contexts (say, when the incentive to “come to class” relates to extrinsic motivation). So, what’s interesting about the “intimate setting” isn’t necessarily that it brings together “better people.” It’s that something special goes on.

What’s going on, with the “intimate classroom,” can vary quite a bit. But there’s still “something special” about it. Even when it’s not a bonding experience, it’s still a shared experience. While “communities of practice” are fascinating, this is where I tend to care more about “communities of experience.” And, again, it doesn’t have much to do with scale and it may have relatively little to do with proximity (physical or intellectual). But it does have to do with cognition and communication. What is special with the “intimate classroom” has to do with shared assumptions.

Going back to Quora…

While an online service with any kind of network effect is still relatively new, there’s something related to the “intimate setting” going on. In other words, it seems like the initial phase of the network effect is the “intimacy” phase: the service has a “large enough userbase” to be useful (so, it’s achieved a first type of critical mass) but it’s still not so “large” as to be overwhelming.

During that phase, the service may feel to people like a very welcoming place. Everyone can be on a “first-name basis.” High-status users mingle with others as if there weren’t any hierarchy. In this sense, it’s a bit like the liminal phase of a rite of passage, during which communitas is achieved.

This phase is a bit like the Golden Age for an online service with a significant “social dimension.” It’s the kind of time which may make people “wax nostalgic about the good ole days,” once it’s over. It’s the time before the BYT comes around.

Sure, there’s a network effect at stake.  You don’t achieve much of a “sense of belonging” by yourself. But, yet again, it’s not really a question of scale. You can feel a strong bond in a dyad and a team of three people can perform quite well. On the other hand, the cases about which I’m thinking are orders of magnitude beyond the so-called “Dunbar number” which seems to obsess so many people (outside of anthro, at least).

Here’s where it might get somewhat controversial (though similar things have been said about Quora): I’d argue that part of this “intimacy effect” has to do with a sense of “exclusivity.” I don’t mean this as the way people talk about “elitism” (though, again, there does seem to be explicit elitism involved in Quora’s case). It’s more about being part of a “select group of people.” About “being there at the time.” It can get very elitist, snobbish, and self-serving very fast. But it’s still about shared experiences and, more specifically, about the perceived boundedness of communities of experience.

We all know about early adopters, of course. And, as part of my interest in geek culture, I keep advocating for more social awareness in any approach to the adoption part of social media tools. But what I mean here isn’t about a “personality type” or about the “attributes of individual actors.” In fact, this is exactly a point at which the study of social networks starts deviating from traditional approaches to sociology. It’s about the special type of social group the “initial userbase” of such a service may represent.

From a broad perspective (as outsiders, say, or using the comparativist’s “etic perspective”), that userbase is likely to be rather homogeneous. Depending on the enrollment procedure for the service, the structure of the group may be a skewed version of an existing network structure. In other words, it’s quite likely that, during that phase, most of the people involved were already connected through other means. In Quora’s case, given the service’s pushy overeagerness about using Twitter and Facebook for recruitment, it sounds quite likely that many of the people who joined Quora were already tied through either Twitter or Facebook.

Anecdotally, it’s certainly been my experience that the overwhelming majority of people who “follow me on Quora” have been part of my first degree on some social media tool in the recent past. In fact, one of my main reactions as I’ve been getting those notifications of Quora followers was: “here are people with whom I’ve been connected but with whom I haven’t had significant relationships.” In some cases, I was actually surprised that these people would “follow” me when it appeared that they weren’t interested in having any kind of meaningful interaction. To put it bluntly, it sometimes appeared as if people who had been “snubbing” me were suddenly interested in something about me. But that was just the case with a few people whom I had unsuccessfully tried to engage in meaningful interactions, and about whom I had given up, thinking we might not be that compatible as interlocutors. Overall, I was mostly surprised at seeing the quick uptick in my follower list, which doesn’t tend to correlate with meaningful interaction, in my experience.
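That impression would be simple enough to check, at least roughly. Here’s a hypothetical sketch (made-up names, no real data pulled from either service) of measuring how much of a new service’s follower list was already in someone’s first-degree network elsewhere:

    # What fraction of my (hypothetical) Quora followers were already in
    # my first-degree network on an older service?
    twitter_contacts = {"alex", "martin", "kevin", "nora", "cathi"}
    quora_followers = {"alex", "martin", "nora", "someone_new"}

    already_connected = quora_followers & twitter_contacts
    print(len(already_connected) / len(quora_followers))  # 0.75 in this toy case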

Now that I understand more about the unthinking way new Quora users are adding people to their networks, my surprise has transformed into an additional annoyance with the service. In a way, it’s a repeat of the time (what was it? 2007?) when Facebook applications got their big push and we kept receiving those “app invites” because some “social media mar-ke-tors” had thought it wise to force people to “invite five friends to use the service.” To Facebook’s credit (more on this later, I hope), these pushy and thoughtless “invitations” are a thing of the past…on those services where people learnt a few lessons about social networks.

Perhaps interestingly, I’ve had a very similar experience with Scribd, at about the same time. I was receiving what seemed like a steady flow of notifications about people from my first-degree online network connecting with me on Scribd, whether or not they had ever engaged in a meaningful interaction with me. As with Quora, my initial surprise quickly morphed into annoyance. I wasn’t using either service much and these meaningless connections made it much less likely that I would ever use them to get in touch with new and interesting people. If most of the people who are connecting with me on Quora and Scribd are already in my first degree and if they tend to be people with whom I have limited interactions, why would I use these services to expand the range of people with whom I want to have meaningful interactions? They’re already within range and they haven’t been very communicative (for whatever reason; I don’t actually assume they were consciously snubbing me). Investing in Quora for “networking purposes” seemed like a futile effort, for me.

Perhaps because I have a specific approach to “networking.”

In my networking activities, I don’t focus on either “quantity” or “quality” of the people involved. I seriously, genuinely, honestly find something worthwhile in anyone with whom I can eventually connect, so the “quality of the individuals” argument doesn’t work with me. And I’m seriously, genuinely, honestly not trying to sell myself on a large market, so the “quantity” issue is one which has almost no effect on me. Besides, I already have what I consider to be an amazing social network online, in terms of quality of interactions. Sure, the people with whom I interact are simply amazing. Sure, the size of my first-degree network on some services is “well above average.” But these things wouldn’t matter at all if I weren’t able to have meaningful interactions in these contexts. And, as it turns out, I’m lucky enough to be able to have very meaningful interactions in a large range of contexts, both offline and on. Part of it has to do with the fact that I’m a teaching addict. Part of it has to do with the fact that I’m a papillon social (a social butterfly). It may even have to do with a stage in my life, at which I still care about meeting new people but I don’t really need new people in my circle. Part of it makes me much less selective than most other people (I like to have new acquaintances) and part of it makes me more selective (I don’t need new “friends”). If it didn’t sound condescending, I’d say it has to do with maturity. But it’s not about my own maturity as a human being. It’s about the maturity of my first-degree network.

There are other people who are in an expansionist phase. For whatever reason (marketing and job searches are the best-known ones, but they’re really not the only ones), some people need to get more contacts and/or contacts with people who have some specific characteristics. For instance, there are social activists out there who need to connect to key decision-makers because they have a strong message to carry. And there are people who were isolated from most other people around them because of stigmatization who just need to meet non-judgmental people. These, to me, are fine goals for someone to expand her or his first-degree network.

Some of it may have to do with introversion. While extraversion is a “dominant trait” of mine, I care deeply about people who consider themselves introverts, even when they start using it as a divisive label. In fact, that’s part of the reason I think it’d be neat to hold a ShyCamp. There’s a whole lot of room for human connection without having to rely on devices of outgoingness.

So, there are people who may benefit from expansion of their first-degree network. In this context, the “network effect” matters in a specific way. And if I think about “network maturity” in this case, there’s no evaluation involved, contrary to what it may seem like.

As you may have noticed, I keep insisting on the fact that we’re talking about “first-degree networks.” Part of the reason is that I was lecturing about a few key network concepts just yesterday, so getting people to understand the difference between “the network as a whole” (especially on an online service) and “a given person’s first-degree network” is important to me. But another part relates back to what I’m coming to realize about Quora and Scribd: the process of connecting through an online service may have as much to do with collapsing some degrees of separation as with “being part of the same network.” To use Granovetter’s well-known terms, it’s about transforming “weak ties” into “strong” ones.
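(For the graph-minded, here’s a minimal sketch of that distinction, in Python with the networkx library. The toy graph and its names are entirely hypothetical, just to make the vocabulary concrete.)

    import networkx as nx

    # Hypothetical toy graph: who is directly connected to whom.
    G = nx.Graph()
    G.add_edges_from([
        ("me", "colleague"),   # a first-degree tie of mine
        ("colleague", "ceo"),  # the CEO is in "the network as a whole"...
        ("ceo", "investor"),   # ...but not in my first-degree network
    ])

    print(sorted(G.neighbors("me")))
    # ['colleague']  <- my first-degree network
    print(nx.shortest_path_length(G, "me", "investor"))
    # 3  <- degrees of separation; "connecting" on Quora or Scribd
    # collapses that 3 to 1 without making the tie any stronger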

And I specifically don’t mean it as a “quality of interaction.” What is at stake, on Quora and Scribd, seems to have little to do with creating stronger bonds. But they may want to create closer links, in terms of network topology. In a way, it’s a bit like getting introduced on LinkedIn (and it corresponds to what biz-minded people mean by “networking”): you care about having “access” to that person, but you don’t necessarily care about her or him, personally.

There’s some sense in using such an approach on “utilitarian networks” like professional or Q&A ones (LinkedIn does both). But there are diverse ways to implement this approach and, to me, Quora and Scribd do it in a way which is very precisely counterproductive. The way LinkedIn does it is context-appropriate. So is the way Academia.edu does it. In both of these cases, the “transaction cost” of connecting with someone is commensurate with the degree of interaction which is possible. On Scribd and Quora, they almost force you to connect with “people you already know” and the “degree of interaction” which is imposed on users is disproportionately high (especially in Quora’s case, where a contact of yours can annoy you by asking you personally to answer a specific question). In this sense, joining Quora is a bit closer to being conscripted into a war while registering on Academia.edu is just a tiny bit more like getting into a country club. The analogies are tenuous but they probably get the point across. Especially since I get the strong impression that the “intimacy phase” has a lot to do with the “country club mentality.”

See, the social context in which these services gain much traction (relatively tech-savvy Anglophones in North America and Europe) assigns very negative connotations to social exclusion, but people keep being fascinated by the affordances of “select clubs” in terms of social capital. In other words, people may be very vocal as to how nasty it would be if some people had exclusive access to some influential people, yet there’s what I perceive as an obsession with influence among these same people. As a caricature: “The ‘human rights’ movement leveled the playing field and we should never ever go back to those dark days of Old Boys’ Clubs and Secret Societies. As soon as I become the most influential person on the planet, I’ll make sure that people who think like me get the benefits they deserve.”

This is where the notion of elitism, as applied specifically to Quora but possibly expanding to other services, makes the most sense. “Oh, no, Quora is meant for everyone. It’s Democratic! See? I can connect with very influential people. But, isn’t it sad that these plebeians are coming to Quora without a proper knowledge of the only right way to ask questions and without proper introduction by people I can trust? I hate these n00bz! Even worse, there are people now on the service who are trying to get social capital by promoting themselves. The nerve of these people, to invade my own dedicated private sphere where I was able to connect with the ‘movers and shakers’ of the industry.” No wonder Quora is so journalistic.

But I’d argue that there’s a part of this which is a confusion between first-degree networks and connection. Before Quora, the same people were indeed connected to these “influential people,” who allegedly make Quora such a unique system. After all, they were already online and I’m quite sure that most of them weren’t more than three or four degrees of separation from Quora’s initial userbase. But access to these people was difficult because connections were indirect. “Mr. Y Z, the CEO of Company X, was already in my network, since there were employees of Company X who were connected through Twitter to people who follow me. But I couldn’t just cold-call CEO Z to ask him a question, since CEOs are out of reach, in their caves. Quora changed everything because Y responded to a question by someone ‘totally unconnected to him’ so it’s clear, now, that I have direct access to my good ol’ friend Y’s inner thoughts and doubts.”

As RMS might say, this type of connection is a “seductive mirage.” Because, I would argue, not much has changed in terms of access and whatever did change was already happening all over this social context.

At the risk of sounding dismissive, again, I’d say that part of what people find so alluring in Quora is “simply” an epiphany about the Small World phenomenon. With all sorts of fallacies caught in there. Another caricature: “What? It takes only three contacts for me to send something from rural Idaho to the head honcho at some Silicon Valley firm? This is the first time something like this happens, in the History of the Whole Wide World!”
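(Incidentally, the Small World intuition is easy to play with. A quick sketch using networkx’s Watts-Strogatz generator; the parameters are arbitrary, for illustration only.)

    import networkx as nx

    # 1,000 people, each tied to 10 neighbours, with 10% of ties
    # randomly rewired: the classic Watts-Strogatz small-world setup.
    G = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=42)

    # A handful of random "shortcuts" brings everyone within a few
    # degrees of everyone else; this typically prints a value around 3.
    print(nx.average_shortest_path_length(G))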

Actually, I do feel quite bad about these caricatures. Some of those who are so passionate about Quora, among my contacts, have been very aware of many things happening online since the early 1990s. But I have to be honest in how I receive some comments about Quora and much of it sounds like a sudden realization of something which I thought was a given.

The fact that I feel so bad about these characterizations relates to the fact that, contrary to what I had planned to do, I’m not linking to specific comments about Quora. Not that I don’t want people to read about this but I don’t want anyone to feel targeted. I respect everyone and my characterizations aren’t judgmental. They’re impressionistic and, again, caricatures.

Speaking of what I had planned, beginning this post… I actually wanted to talk less about Quora specifically and more about other issues. Sounds like I’m currently getting sidetracked, and it’s kind of sad. But it’s ok. The show must go on.

So, other services…

While I’ve had similar experiences with Scribd and Quora, getting notifications of new connections from people with whom I haven’t had meaningful interactions, I’ve had a very different experience on many (probably most) other services.

An example I like is Foursquare. “Friendship requests” I get on Foursquare are mostly from: people with whom I’ve had relatively significant interactions in the past, people who were already significant parts of my second-degree network, or people I had never heard of. Sure, there are some people with whom I had tried to establish connections, including some who seem to reluctantly follow me on Quora. But the proportion of these is rather minimal and, for me, the stakes in accepting a friend request on Foursquare are quite low since it’s mostly about sharing data I already share publicly. Instead of being able to solicit my response to a specific question, the main thing my Foursquare “friends” can do that others can’t is give me recommendations, tips, and “notifications of their presence.” These are all things I might actually enjoy, so there’s nothing annoying about it. Sure, like any online service with a network component, these days, there are some “friend requests” which are more about self-promotion. But those are usually easy to avoid and, even if I get fooled by a “social media mar-ke-tor,” the most this person may do to me is give me a recommendation about “some random place.” Again, easy to avoid. So, the “social network” dimension of Foursquare seems appropriate, to me. Not ideal, but pretty decent.

I never really liked the “game” aspect and, while I did play around with getting badges and mayorships in my first few weeks, it never felt like the point of Foursquare, to me. By the time Foursquare became mainstream in Montreal and a journalist asked me about my approach to the service, I was exactly in the phase when I was least interested in the game aspect and wished we could talk a whole lot more about the other dimensions of the phenomenon.

And I realize that, as I say this, I may sound to some exactly like those who bemoan the shift away from the initial userbase of some cherished service. But there are significant differences. Note that I’m not complaining about the transition in the userbase. In the Foursquare context, “the more the merrier.” I was actually glad that Foursquare was becoming mainstream as it was easier to explain to people, it became more connected with things business owners might do, and it generally had more impact. What gave me pause, at the time, is the journalistic hype surrounding Foursquare, which seemed to be missing some key points about social networks online. Besides, I was never annoyed by this hype or by Foursquare itself. I simply thought that it was sad that the focus would be on a dimension of the service which was already present not only on Dodgeball and other location-based services but, pretty much, all over the place. I was critical of the seemingly unthinking way people approached Foursquare but the service itself was never that big a deal for me, either way.

And I pretty much have the same attitude toward any tool. I happen to have my favourites, which either tend to fit neatly in my “workflow” or otherwise have some neat feature I enjoy. But I’m very wary of hype and backlash. Especially now. It gets old very fast and it’s been going for quite a while.

Maybe I should just move away from the “tech world.” It’s the context for so much hype and buzz that it almost makes me angry. [I very rarely get angry.] Why do I care so much? Call it accumulation, over the years. I still care about social media and I really do want to know what people are saying about social media tools. I just wish discussion of these tools weren’t soooo “superlative”…

Obviously, I digress. But this is what I like to do on my blog and it has a cathartic effect. I actually do feel better now, thank you.

And I can talk about some other things I wanted to mention. I won’t spend much time on them because this is long enough (both as a blogpost and as a blogging session). But I want to set a few placeholders, for further discussion.

One such placeholder is about some pet theories I have about what worked well with certain services. Which is exactly the kind of thing “social media entrepreneurs” and journalists claim to be so interested in, even though they end up talking about the same few dimensions.

Let’s take Twitter, for instance. Sure, sure, there’s been a lot of talk about what made Twitter a success and probably everybody knows that it got started as a side-project at Odeo, and blah, blah, blah. Many people also realize that there were other microblogging services around as Twitter got traction. And I’m sure some people use Twitter as a “textbook case” of “network effect” (however they define that effect). I even mention the celebrity dimensions of the “Twitter phenomenon” in class (my students aren’t easily starstruck by Bieber and Gaga) and I understand why journalists are so taken by Twitter’s “broadcast” mission. But something which has been discussed relatively rarely is the level of responsiveness by Twitter developers, over the years, to people’s actual use of the service. Again, we all know that “@-replies,” “hashtags,” and “retweets” were all emerging usage patterns that Twitter eventually integrated. And some discussion has taken place when Twitter changed its core prompt to reflect the fact that the way people were using it had changed. But there’s relatively little discussion as to what this process implies in terms of “developing philosophy.” As people are still talking about being “proactive” (ugh!) with users, and crude measurements of popularity keep being sold and bandied about, a large part of the tremendous potential for responsiveness (through social media or otherwise) is left untapped. People prefer to hype a new service which is “likely to have Twitter-like success because it has the features users have said they wanted in the survey we sell,” instead of talking about the “get satisfaction” effect of responsiveness. Not that “consumers” now have “more power than ever before.” But responsive developers who refrain from imposing their views (Quora, again) tend to have a more positive impact, socially, than those who merely try to expand their userbase.

Which leads me to talk about Facebook. I could talk for hours on end about Facebook, but I almost feel afraid to do so. At this point, Facebook is conceived in what I perceive to be such a narrow way that it seems like anything I might say would sound exceedingly strange. Given the fact that it was part of one of the first waves of Web tools with explicit social components to reach mainstream adoption, it almost sounds “historical” in timeframe. But, as so many people keep saying, it’s just not that old. IMHO, part of the implication of Facebook’s relatively young age should be that we are able to discuss it as a dynamic process, instead of assigning it to a bygone era. But, whatever…

Actually, I think part of the reason there’s such a lack of depth in discussing Facebook is also part of the reason it was so special: it was originally a very select service. Since, for a significant period of time, the service was only available to people with email addresses ending in “.edu,” it’s not really surprising that many of the people who keep discussing it were actually not on the service “in its formative years.” But, I would argue, the fact that it was so exclusive at first (something which is often repeated but which seems to be understood in a very theoretical sense) contributed quite significantly to its success. Of course, similar claims have been made, but I’d say that my own goes deeper.

[Bang! I really don’t tend to make claims so, much of this blogpost sounds to me as if it were coming from somebody else…]

Ok, I don’t mean it so strongly. But there’s something I think neat about the Facebook of 2005, the one I joined. So I’d like to discuss it. Hence the placeholder.

And, in this placeholder, I’d fit: the ideas about responsiveness mentioned with Twitter, the stepwise approach adopted by Facebook (which, to me, was the real key to its eventual success), the notion of intimacy which is the true core of this blogpost, the notion of hype/counterhype linked to journalistic approaches, a key distinction between privacy and intimacy, some non-ranting (but still rambling) discussion as to what Google is missing in its “social” projects, anecdotes about “sequential network effects” on Facebook as the service reached new “populations,” some personal comments about what I get out of Facebook even though I almost never spent any significant amount of time on it, some musings as to the possibility that there are online services which have reached maturity and may remain stable in the foreseeable future, a few digressions about fanboism or about the lack of sophistication in the social network models used in online services, and maybe a bit of fun at the expense of “social media expert marketors”…

But that’ll be for another time.

Cheers!

Jazz and Identity: Comment on Lydon's Iyer Interview

Radio Open Source » Blog Archive » Vijay Iyer’s Life in Music: “Striving is the Back Story…”.

Sounds like it will be a while before the United States becomes a truly post-racial society.

Iyer can define himself as American and he can even one-up other US citizens in Americanness, but he’s still defined by his having “a Brahmin Indian name and heritage, and a Yale degree in physics.”

Something by which I was taken aback, at IU Bloomington ten years ago, is the fact that those who were considered to be “of color” (as if colour were the factor!) were expected to mostly talk about their “race” whereas those who were considered “white” were expected to remain silent when notions of “race” and ethnicity came up for discussion. Granted, ethnicity and “race” were frequently discussed, so it was possible to hear the voices of those “of color” on a semi-regular basis. Still, part of my culture shock while living in the Midwest was the conspicuous silence of students with brilliant ideas who happened to be considered African-American.

Something similar happened with gender, on occasion, in that women were strongly encouraged to speak out…when a gender angle was needed. Thankfully, some of these women (at least, among those whose “racial” identity was perceived as neutral) did speak up, regardless of topic. But there was still an expectation that when they did, their perspective was intimately gendered.

Of course, some gender lines were blurred: the gender ratio among faculty members was relatively balanced (probably more women than men), the chair of the department was a woman for a time, and one department secretary was a man. But women’s behaviours were frequently interpreted in a gender-specific way, while men were often treated as almost genderless. Male privilege manifested itself in the fact that it was apparently difficult for women not to be gender-conscious.

Those of us who were “international students” had the possibility to decide when our identities were germane to the discussion. At least, I was able to push my «différence» when I so pleased, often by becoming the token Francophone in discussions about Francophone scholars, yet being able not to play the “Frenchie card” when I didn’t find it necessary. At the same time, my behaviour may have been deemed brash and a fellow student teased me by calling me “Mr. Snottyhead.” As an instructor later told me, “it’s just that, since you’re Canadian, we didn’t expect you to be so different.” (My response: “I know some Canadians who would despise that comment. But since I’m Québécois, it doesn’t matter.”) This was in reference to a seminar with twenty students, including seven “internationals”: one Zimbabwean, one Swiss-German, two Koreans, one Japanese, one Kenyan, and one “Québécois of Swiss heritage.” In this same graduate seminar, the instructor expected everyone to know of Johnny Appleseed and of John Denver.

Again, a culture shock. Especially for someone coming from a context in which the ethnic identity of the majority is frequently discussed and in which cultural identity is often “achieved” instead of being ascribed. This isn’t to say that Quebec society is devoid of similar issues. Everybody knows, Quebec has more than its fair share of identity-based problems. The fact of the matter is, Quebec society is entangled in all sorts of complex identity issues, and for many of those, Quebec may appear underprepared. The point is precisely that, in Quebec, identity politics is a matter for everyone. Nobody has the luxury to treat their identity as “neutral.”

Going back to Iyer… It’s remarkable that his thoughtful comments on Jazz end up associated more with his background than with his overall approach. As if what he had to say were of a different kind than comments from Roy Haynes or Robin Kelley. As if Iyer had more in common with Koo Nimo than with, say, Sonny Rollins. Given Lydon’s journalistic background, it’s probably significant that the Iyer conversation carried the “Life in Music” name of the show’s music biography series yet got “filed under” the show’s “Year of India” series. I kid you not.

And this is what we hear at the end of each episode’s intro:

This is Open Source, from the Watson Institute at Brown University. An American conversation with Global attitude, we call it.

Guess the “American” part was taken by Jazz itself, so Iyer was assigned the “Global” one. Kind of wishing the roles were reversed, though Iyer had rehearsed his part.

But enough symbolic interactionism. For now.

During Lydon’s interview with Iyer, I kept being reminded of a conversation (in Brookline) with fellow Canadian-ethnomusicologist-and-Jazz-musician Tanya Kalmanovitch. Kalmanovitch had fantastic insight to share on identity politics at play through the international (yet not post-national) Jazz scene. In fact, methinks she’d make a great Open Source guest. She lives in Brooklyn but works as assistant chair of contemporary improv at NEC, in B-Town, so Lydon could probably meet her locally.

Anyhoo…

In some ways, Jazz is more racialized and ethnicized now than it was when Howie Becker published Outsiders (hey, I did hint that symbolic interactionism would be back!). It’s also very national, gendered, compartmentalized… In a word: modern. Of course, Jazz (or something like it) shall play a role in postmodernity. But only if it sheds itself of its modernist trappings. We should hear out Kevin Mahogany’s (swung) comments about a popular misconception:

Some cats work from nine to five
Change their life for line of jive
Never had foresight to see
Where the changes had to be
Thought that they had heard the word
Thought it all died after Bird
But we’re still swingin’

The following anecdote seems à propos.

The Branford Marsalis quartet was on stage outside at the Indy Jazz Fest 1999. Some dude in the audience started heckling the band: “Play something we know!” Marsalis, not losing his cool, engaged the heckler in a conversation on Jazz history, pushing the envelope, playing the way you want to play, and expected behaviour during shows. Though the audience sounded divided when Marsalis advised the heckler to go to Chaka Khan‘s show on the next stage over, if that was more to the heckler’s liking, there wasn’t a major shift in the crowd and, hopefully, most people understood how respectful Marsalis’s comments really were. What was especially precious is when Marsalis asked the heckler: “We’re cool, man?”

It’s nothing personal.

In Phase

Lissajous curve

Something which happens to me on a rather regular basis (and about which I blogged before) is that I’ll hear about something right after thinking about it. For instance, if I think that a given tool should exist, it may be announced right at that moment.

Hey, I was just thinking about this!

The effect is a bit strange but it’s quite easy to explain. It feels like a “premonition,” but it probably has more to do with “being in phase.” In some cases, it may also be that I heard about that something but hadn’t registered the information. I know it happens a lot and it might not be too hard to trace back. But I prefer thinking about phase.

And, yes, I am thinking about phase difference in waves. Not in a very precise sense, but the image still works, for me. Especially with the Lissajous representation, as above.
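(For the curious, the figure is easy enough to reproduce. A rough sketch, assuming Python with numpy and matplotlib: each curve plots a sine wave against a phase-shifted copy of itself. In phase, the curve collapses into a diagonal line; as the phase difference grows, it opens up into an ellipse, then a circle.)

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 2 * np.pi, 1000)
    for phi in (0, np.pi / 4, np.pi / 2):  # increasing phase difference
        plt.plot(np.sin(t), np.sin(t + phi), label=f"{phi:.2f} rad")
    plt.gca().set_aspect("equal")  # keep the figure from being squashed
    plt.legend()
    plt.show()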

See, I don’t particularly want to be “ahead of the curve” and I don’t particularly mind being “behind the curve.” But when I’m right “in the curve,” something interesting happens. I’m “in the now.”

I originally thought about being “in tune” and it could also be about “in sync” or even “matching impedances.” But I still like the waves analogy. Especially since, when two waves are in phase, they reinforce one another. As analogies go, it’s not only a beautiful one, but a powerful one. And, yes, I do think about my sweetheart.
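(For those who want the reinforcement spelled out, it’s a textbook trigonometric identity:)

    \[
    \sin(\omega t) + \sin(\omega t + \varphi)
      = 2\cos\left(\frac{\varphi}{2}\right)\sin\left(\omega t + \frac{\varphi}{2}\right)
    \]

With a phase difference of φ = 0, the amplitude doubles (full reinforcement); with φ = π, the two waves cancel each other out.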

One reason I like the concept of phase difference is that I think through sound. My first exposure to the concept comes from courses in musical acoustics, almost twenty years ago. It wasn’t the main thing I’d remember from the course and it’s not something I investigated at any point since. Like I keep telling students, some things hit you long after you’ve heard about them in a course. Lifelong learning and “landminds” are based on such elements, even tiny unimportant ones. Phase difference is one such thing.

And it’s no big deal, of course. It’s not like I spent days thinking about these concepts. But I’ve been feeling like writing, lately, and this is as good an opportunity as any.

The trigger for this particular thing is rather silly and is probably explained more accurately, come to think of it, by “unconsciously registering” something before consciously registering it.

Was having breakfast and started thinking about the importance of being environmentally responsible, the paradox of “consumption as freedom,” the consequences of some lifestyle choices including carfree living, etc. This stream of thought led me, not unexpectedly, to the perspectives on climate change, people’s perception of scientific evidence, and the so-called ClimateGate. I care a lot about critical thinking, regardless of whether or not I agree with a certain idea, so I think the email controversy shows the importance of transparency. So far, nothing unexpected. Within a couple of minutes, I had covered a few of the subjects du jour. And that’s what struck me, because right then, I (over)heard a radio host introduce a guest whose talk is titled:

What is the role of climate scientists in the climate change debate?

Obviously, Tremblay addressed ClimateGate quite directly. So my thoughts were “in phase” with Tremblay’s.

A few minutes prior to (over)hearing this introduction, I (over)heard a comment about topics of social conversations at different points in recent history. According to screenwriter Fabienne Larouche, issues covered in the first seasons of her “flagship” tv series are still at the forefront in Quebec society today, fourteen years later. So I was probably even more “in tune” with the notion of being “in phase.” Especially with my society.

I said “(over)heard” because I wasn’t really listening to that radio show. It was just playing in the background and I wasn’t paying much attention. I don’t tend to listen to live radio but I do listen to some radio recordings as podcasts. One reason I like doing so is that I can pay much closer attention to what I hear. Another is that I can listen to what I want when I feel like listening to it, which means that I can prepare myself for a heady topic or choose some tech-fluff to wind down after a course. There’s also the serendipity of listening to very disparate programmes in the same listening session, as if I were “turning the dial” after each show on a worldwide radio (I often switch between French and English and/or between European and North American sources). For a while now, I’ve been listening to podcasts at double-speed, which helps me focus on what’s most significant.

(In Jazz, we talk about “top notes,” meaning the ones which are more prominent. It’s easier to focus on them at double-speed than at normal speed so “double-times” have an interesting cognitive effect.)

So, I felt “in phase.” As mentioned, it probably has much more to do with having passively heard things without paying attention yet letting it “seep into my brain” to create connections between a few subjects which get me to the same point as what comes later. A large part of this is well-known in psychology, especially in terms of cognition. We start noticing things when they enter into a schema we have in our mind. These things we start noticing were there all along so the “discovery” is only in our mind (in the sense that it wouldn’t be a discovery for others). When we learn a new word, for instance, we start hearing it everywhere.

But there are also words which start being used by everyone because they have been diffused largely at a given point in time. An actual neologism can travel quickly and a word in our passive vocabulary can also come to prominence, especially in mainstream media. Clearly, this is an issue of interest to psychologists, folklorists, and media analysts alike. I’m enough of a folklorist and media observer to think about the social processes behind the diffusion of terms regardless of what psychologists think.

A few months back, I got the impression that the word “nimble” had suddenly increased in currency after it was used in a speech by the current PotUS. Since I’m a non-native speaker of English, I’m likely to be accused of noticing the word because it’s part of my own passive vocabulary. I have examples in French, though some are with words which were new to me, at the time («peoplisation», «battante»…). I probably won’t be able to defend myself from those who say that it’s just a matter of my own exposure to those terms. Though there are ways to analyze the currency of a given term, I’m not sure I trust this type of analysis a lot more than my gut feeling, at least in terms of realtime trends.
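(By “ways to analyze,” I mean something as simple as counting a term’s occurrences over time in a dated corpus. A rough Python sketch, with obviously made-up data:)

    from collections import Counter

    # Hypothetical corpus: (month, text) pairs, say from transcripts.
    corpus = [
        ("2009-01", "a more nimble response to the crisis"),
        ("2009-02", "nimble government means nimble policy"),
        ("2009-02", "nothing of note this month"),
    ]

    counts = Counter()
    for month, text in corpus:
        # A serious analysis would normalize punctuation and inflections.
        counts[month] += text.lower().split().count("nimble")

    print(sorted(counts.items()))
    # [('2009-01', 1), ('2009-02', 2)]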

Which makes me think of “memetics.” Not in the strict sense that Dawkins would like us to use. But in the way popular culture cares about the propagation of “units of thought.” I recently read a fascinating blogpost (in French) about memetics from this perspective, playing Dawkins against himself. As coincidences keep happening (or, more accurately, as I’m acutely tuned to find coincidences everywhere), I’ve been having a discussion about Mahir‘s personal homepage (aka “I kiss you”), which made him an “Internet celebrity” through this process which is now called memetic. The reason his page was noticed isn’t that it was so unique. But it had this je ne sais quoi which captured the imagination, at the time (the latter part of the “Dot-Com Bubble”). As some literary critics and many other humanists teach us, it’s not the item itself which counts, it’s how we receive it (yes, I tend to be on the “reception” and “eye of the beholder” side of things). Mahir was striking because he was, indeed, “out of phase” with the times.

As I think about phase, I keep hearing the other acoustic analogy: the tuning of sine waves. When a sine wave is very slightly “out of tune” with another, we hear a very slow oscillation (interference beats) which slows down further as the two waves approach unison. There’s a direct relationship between beat tones and phase, but I think “in tune” and “in phase” remain separate analogies.
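(Again in textbook form: the sum of two slightly mistuned sine waves is a fast wave inside a slowly oscillating envelope, and that envelope is the beat:)

    \[
    \sin(\omega_1 t) + \sin(\omega_2 t)
      = 2\cos\left(\frac{(\omega_1 - \omega_2)\,t}{2}\right)
        \sin\left(\frac{(\omega_1 + \omega_2)\,t}{2}\right)
    \]

As the two frequencies get closer, the cosine envelope slows down and the beating fades away.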

One reason I like to think about waves for these analogies is that I tend to perceive temporal change through these concepts. If we think of historical change through cycles, being “in phase” is a matter of matching two change processes until they’re aligned, but the cycles may also be in harmonic relationships. One can move twice as fast as society and still be “in phase” with it.
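(In wave terms: a signal at twice the frequency completes exactly two cycles per period of the slower one, so the two realign at the end of every slow cycle:)

    \[
    \sin\left(2\omega\left(t + \frac{2\pi}{\omega}\right)\right)
      = \sin(2\omega t + 4\pi)
      = \sin(2\omega t)
    \]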

Sure, I’m overextending the analogies, and there’s something far-fetched about this. But that’s pretty much what I like about analogical thinking. As I’m under the weather, this kind of rambling is almost therapeutic.

Scriptocentrism and the Freedom to Think

As a comment on my previous blogpost on books, a friend sent me (through Facebook) a link to a blogpost about a petition to Amazon with the following statement:

The freedom to read is tantamount to the freedom to think.

As this friend and I are both anthros+africanists, I’m reacting (perhaps a bit strongly) to that statement.

Given my perspective, I would dare say that I find this statement (brought about by DbD)… ethnocentric.

There, I said it.

And I’ll try to back it up in this blogpost in order to spark even more discussion.

We won’t exhaust this topic any time soon, but I feel there’s a lot we can do about it which has rarely been done.

I won’t use the textbook case of “Language in the Inner City,” but it could help us talk about who decides, in a given social context, what is important. We both come from a literacy-focused background, so we may have to take a step back. Not sure if Bourdieu has commented on Labov, especially in terms of what all this means for “education,” but I’d even want to bring in Ivan Illich, at some point.

Hunters with whom I’ve been working, in Mali, vary greatly in terms of literacy. Some of them have a strong university background and one can even write French legalese (he’s a judge). Others (or some of the same) have gone to Koranic school long enough that they can read classical Arabic. Some have the minimal knowledge of Arabic which suffices, for them, to do divination. Many of them have a very low level of functional literacy. There’s always someone around them who can read and write, so they’re usually not out of the loop and it’s not like the social hierarchy stereotypical of the Catholic Church during the Middle Ages in Europe. It’s a very different social context which can hardly be superimposed with the history of writing and the printing press in Europe.

In terms of “freedom to think,” I really wouldn’t say that they’re lacking. Of course, “free thinker” has a specific meaning in liberal societies with a European background. But even this meaning can be applied to many people I’ve met in Mali.

And I go back to the social context. Those with the highest degree of functional literacy aren’t necessarily those with the highest social status. And unlike the Harlem described by Labov, it’s a context relatively independent from the one in which literacy is a sine qua non. Sure, it’s a neocolonial context and Euro-Americans keep insisting that literacy in the Latin script is “the most important thing ever” if Malians are to build a true liberal democracy. Yet, internally, it’s perfectly possible for someone to think freely, get recognition, and help other people to think without going through the written medium.

Many of those I know who have almost nonexistent skills in the written medium also have enough power (in a Weberian sense) that they get others to do the reading and writing for them. And because there are many social means to ensure that communication has worked appropriately, these “scribes” aren’t very likely to use this to take anything away from those for whom they read and write.

In Switzerland, one of my recent ancestors was functionally illiterate. Because of this, she “signed away” most of her wealth. Down the line, I’m one of her very few heirs. So, in a way, I lost part of my inheritance due to illiteracy.

Unless the switch to a European model for notarial services becomes complete, a case like this is unlikely to occur among people I know in Mali. If it does happen, it’s clearly not a failure of the oral system but a problem with this kind of transition. It’s somewhat similar to the situation with women in diverse parts of the continent during the period of direct colonialism: the fact that women have lost what powers they had (say, in a matrilineal/matrilocal society) has to do with the switch to a hierarchical system which put the emphasis on new factors which excluded the type of influence women had.

In other words, I fully understand the connections between liberalism and literacy and I’ve heard enough about the importance of the printing press and journalism in these liberal societies to understand what role reading has played in those contexts. I simply dispute the notion that these connections should be universal.

Yes, I wish the “Universal Declaration of Human Rights” (including the (in)famous Article 26, which caused so many issues) were more culturally aware.

I started reading Deschooling Society a few weeks ago. In terms of “insight density,” it’s much higher than the book which prompted this discussion. While reading the first chapter, I constructed a number of ideas which I personally find useful.

I haven’t finished reading the book. Yet. I might eventually finish it. But much of what I wanted to get from that book, I was able to get from diverse sources. Including that part of the book I did read, sequentially. But, also, everything which has been written about Illich since 1971. And I’ll be interested in reading comments by the reading group at Wikiversity.

Given my background, I have as many “things to say” about the issues surrounding schooling as I have things to read about them. If I had the time, I could write as much about that book as I’ve read of it, and it’d probably bring me a lot of benefits.

I’ve heard enough strong reactions against this attitude I’m displaying that I can hear it, already: “how can you talk about a book you haven’t read.” And I sincerely think these people miss an important point. I wouldn’t go so far as to say that their reading habits are off (that’d be mean), especially since those are well-adapted to certain contexts, including what I call scriptocentrism. Not that these people are scriptocentric. But their attitude “goes well with” scriptocentrism.

Academia, despite being the context for an enormous amount of writing and reading, isn’t displaying that kind of scriptocentrism. Sure, a lot of what we do needs to be written (although it’s often surprising how much insight goes unwritten in the work of many an academic). And we do get evaluated through our writing. Not to mention that we need to write in a very specific mode, which almost causes a diglossia.

But we simply don’t feel forced to “read the whole text.”

A colleague has described this as the “dirty little secret” of academia. And one which changes many things for students, to the point that it almost sounds as if it remains a secret so as to separate students into categories of “those who get it” and “the mass.”

It doesn’t take a semester to read a textbook so there are students who get the impression that they can simply read the book in a weekend and take the exams. These students may succeed, depending on the course. In fact, they may get really good grades. But they run into a wall if they want to go on with a career making any use of knowledge construction skills.

Bill Reimer has interesting documents about “better reading”: a PowerPoint presentation accompanied by exercises in PDF format. (No, I won’t discuss format here.)

I keep pointing students to those documents for a simple reason: Reimer isn’t advocating reading every word in sequence. His “skim then focus” advice might be the one piece which is harder to get through to people but it’s tremendously effective in academic contexts. It’s also one which is well-adapted to the kind of online reading I’m thinking about. And not necessarily that good for physical books. Sure, you can efficiently flip pages in a book. But skimming a text on paper is more likely to be about what stands out visually than about the structure of the text. Especially with book-length texts. The same advice holds with physical books, of course. After all, this kind of advice originally comes from that historical period which I might describe as the “heyday of books”: the late 20th Century. But I’d say that the kind of “better reading” Reimer describes is enhanced in the context of online textuality. Not just the “Read/Write Web” but Instant Messaging, email, forums, ICQ, wikis, hypertext, Gopher, even PowerPoint…

Much of this has to do with different models of human communication. The Shannon/Weaver crowd have a linear/directional model, based on information processing. Codec and modem. Something which, after Irvine’s Shadow Conversations, I tend to call “the football theory of communication.” This model might be the best-known one, especially among those who study in departments of communication along with other would-be journalists. Works well for a “broadcast” medium with mostly indirect interaction (books, television, radio, cinema, press conferences, etc.). Doesn’t work so well for the backchannel-heavy “smalltalk” stuff of most human communication actually going on in this world.

Some cognitivists (including Chomsky) have a schema-based model. Constructivists (from Piaget on) have an elaborate model based on knowledge. Several linguistic anthropologists (including yours truly but also Judith Irvine, Richard Bauman, and Dell Hymes) have a model which gives more than lipservice to the notion of performance. And there’s a functional model of any human communication in Jakobson’s classic text on verbal communication. It’s a model which can sound as if it were linear/bidirectional but it’s much broader than this. His six “functions of verbal communication” do come from six elements of the communication process (channel, code, form, context, speaker, listener). But each of these elements embeds a complex reality and Jakobson’s model seems completely compatible with a holistic approach to human communication. In fact, Jakobson has had a tremendous impact on a large variety of people, including many key figures in linguistic anthropology along with Lévi-Strauss and, yes, even Chomsky.

(Sometimes, I wish more people knew about Jakobson. Oh, wait! Since Jakobson was living in the US, I need to americanize this statement: “Jakobson is the most underrated scholar ever.”)

All these models do (or, in my mind, should) integrate written communication. Yet scriptocentrism has often led us far away from “texts as communication” and into “text as an object.” Scriptocentrism works well with modernity. Going away from scriptocentrism is a way to accept our postmodern reality.