Category Archives: mass media

Twenty Years Online

This month marks the 20th anniversary of my first Internet account. I don’t remember the exact date but I know it was in late summer 1993, right before what became known as “Eternal September”. The Internet wasn’t new, but it still wasn’t on most people’s proverbial “radars”.

I had heard one of my professors, Kevin Tuite, talk about the Internet as a system through which people from all over the world were communicating. Among the examples Tuite gave of possibilities offered by the ’Net were conversations among people from former Soviet republics during that period of broad transitions. As a specialist in Svaneti, in present-day Georgia, Kevin was particularly interested in these conversations.

During that fated Summer of ‘93, I was getting ready to begin the last year of my B.Sc. in anthropology, specializing in linguistic anthropology and ethnomusicology. As I had done during previous summers, I was working back of house at a French restaurant. But, in my free time, I was exploring a brand new world.

In retrospect, it might not be a complete coincidence that my then-girlfriend of four years left me during that Fall 1993 semester.

It started with a local BBS, WAJU (“We Are Joining You”). I’m not exactly sure when I got started, but I remember being on WAJU in July. I had first been lent a 300 baud modem but I quickly switched to a 2400 baud one. My current ISP plan is 15 Mbps, literally 50,000 times faster than my original connection.
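The arithmetic checks out; here is a quick back-of-the-envelope sketch, treating “baud” loosely as bits per second, as the modem marketing of the time did:

```python
# Back-of-the-envelope comparison of connection speeds,
# treating "baud" loosely as bits per second.
original = 300          # first borrowed modem, bit/s
upgraded = 2_400        # replacement modem, bit/s
current = 15_000_000    # 15 Mbps ISP plan, bit/s

print(f"vs 300 baud:  {current // original:,}x")   # 50,000x
print(f"vs 2400 baud: {current // upgraded:,}x")   # 6,250x
```

So “50,000 times faster” holds against the original 300 baud connection; against the 2400 baud upgrade, the factor is a mere 6,250.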

By August 1993, thanks to the aforementioned Kevin Tuite, I was able to get an account on UdeM’s ERE network, meant for teaching and research (it stood for «Environnement de recherche et d’enseignement»). That network was running on SGI machines which weren’t really meant to handle large numbers of external connections. But it worked for my purpose of processing email (through Pine), Usenet newsgroups, FTP downloads (sometimes through Archie), IRC sessions, individual chats (through Talk), Gopher sites, and other things via Telnet. As much as possible, I did all of these things from campus, through one of the computer rooms, which offered amazingly fast connections (especially compared to my 2.4kbps modem). I spent enough time in those computer rooms that I still remember a distinct smell from them.

However, at some point during that period, I was able to hack a PPP connection going through my ERE account. In fact, I ended up helping some other people (including a few professors) do the same. It then meant we could use native applications to access the ’Net from home and, eventually, browse the Web graphically.

But I’m getting ahead of myself.

By the time I got online, NCSA Mosaic hadn’t been released. In fact, it took a little while before I even heard of the “World Wide Web”. I seem to remember that I only started browsing the Web in 1994. At the same time, I’m pretty sure one of my most online-savvy friends (likely Alex Burton or Martin Dupras) had told me about the Web as soon as version 1.0 of Mosaic was out, or even before.

The Web was a huge improvement, to be sure. But it was neither the beginning nor the end of the ‘Net, for those of us who had been there a little while. Yes, even a few months. Keep in mind that, at the time, there weren’t that many sites on the Web. Sure, most universities had a Web presence and many people with accounts on university networks had opportunities to create homepages. But there’s a reason there could be Web directories (strongly associated with Yahoo!, now, but quite common at the time). Pages were “static” and there wasn’t much which was “social” on the Web at the time.

But the ’Net as a whole was very social. At least, for the budding ethnographer that I was, the rest of the ‘Net was a much more interesting context for observation than the Web. Especially newsgroups and mailing lists.

The ‘Net was also going through one of its first demographic explosions, with AOLers flooding in. Perhaps more importantly, newbie bashing was peaking, and comments against AOL or other inexperienced “Netizens” were frequently heard. I personally heard a lot more from people complaining about AOL than from anyone accessing the ’Net through AOL.

Something about the influx which was clear, though, is that the “democratization” was being accompanied by commercialization. A culture of open sharing was being replaced by corporate culture. Free culture was being preempted by a culture of advertising. The first .com domains were almost a novelty, in a ‘Net full of country-specific domains along with lots of .edu, .net, .org, .gov, and even .mil servers.

The ‘Net wasn’t yet about “paying for content”. That would come a few years later, when media properties pushed “user-generated content” into its own category (instead of representing most of what was available online). The ‘Net of the mid-1990s was about gaining as much attention as possible. We’re still in that mode, of course. But the contrast was striking. Casual conversations were in danger of getting drowned by megaphones. The billboard overtook the café. With the shift, a strong sense of antagonism emerged. The sense of belonging to a community of early adopters increased with the sense of being attacked by old “media types”. People less interested in sharing knowledge and more interested in conveying their own corporate messages. Not that individuals had been agenda-free until that point. But there was a big difference between geeks arguing about strongly-held opinions and “brands” being pushed onto the scene.

Early on, the thing I thought the Internet would most likely disrupt was journalism. I had a problem with journalism so, when I saw how the ‘Net could provide increased access to information, I was sure it’d imply a reappropriation of news by people themselves, with everything this means in the spread of critical thinking skills. Some of this has happened, to an extent. But media consolidation probably had a more critical role to play in journalism’s current crisis than online communication. That said, I like to think of these things as complex systems of interrelated trends and tendencies instead of straightforward causal scenarios.

In such a situation, the ‘Net becoming more like a set of conventional mass media channels was bad news. More specifically, the logic of “getting your corporate message across” was quite off-putting to a crowd used to more casual (though often heated and loud) conversations. What comes to mind is a large agora with thousands of people having thousands of separate conversations being taken over by a massive PA system. Regardless of the content of the message being broadcast by this PA system, the effect is beyond annoying.

Through all of this, I distinctly remember mid-April 1994. At that time, the Internet changed. One might say it never recovered.

At that time, two unscrupulous lawyers sent the first commercial spam on Usenet newsgroups. They apparently made a rather large sum of money from their action but, more importantly, they ended the “Netiquette” era. From this point on, a conflict has emerged between those who use and those who abuse the ‘Net. Yes, strong words. But I sincerely think they’re fitting. Spammers are like the Internet’s cancer. They may “serve a function” and may inspire awe. Mostly, though, they’re “cells gone rogue”. Not that I’m saying the ‘Net was free of disease before this “Green Card lottery” moment. For one thing, it’s possible (though unlikely) that flamewars were somewhat more virulent then than they are now. It’s just that the list of known online woes expanded quickly with the addition of cancer-like diseases. From annoying Usenet spam, we went rather rapidly to all sorts of malevolent large-scale actions. Whatever we end up doing online, we carry the shadow of such actions.

Despite how it may sound, my stance isn’t primarily moral. It’s really about a shift from a “conversational” mode to a “mass media” one. Spammers exploited Usenet by using it as a “mass media” channel, at a time when most people online were using it as a large set of “many-to-many” channels.

The distinction between Usenet spam and legitimate advertising may be extremely important, to a very large number of people. But the gates spammers opened were the same ones advertisers have been using ever since.

My nostalgia for the early Internet has a lot to do with this shift. I know we gained a lot, in the meantime. I enjoy many benefits from the “democratization” of the ‘Net. I wouldn’t trade the current online services and tools for those I was using in August 1993. But I do long for a cancer-free Internet.

Wearable Hub: Getting the Ball Rolling

Statement

After years of hype, wearable devices are happening. What wearable computing lacks is a way to integrate devices into a broader system.

Disclaimer/Disclosure/Warning

  • For the past two months or so, I’ve been taking notes about this “wearable hub” idea (started around CES’s time, as wearable devices like the Pebble and Google Glass were discussed with more intensity). At this point, I have over 3000 words in notes, which probably means that I’d have enough material for a long essay. This post is just a way to release a few ideas and to “think aloud” about what wearables may mean.
  • Some of these notes have to do with the fact that I started using a few wearable devices to monitor my activities, after a health issue pushed me to start doing some exercise.
  • I’m not a technologist nor do I play one on this blog. I’m primarily an ethnographer, with diverse interests in technology and its implications for human beings. I do research on technological appropriation and some of the courses I teach relate to the social dimensions of technology. Some of the approaches to technology that I discuss in those courses relate to constructionism and Actor-Network Theory.
  • I consider myself a “geek ethnographer” in the sense that I take part in geek culture (and have come out as a geek) but I’m also an outsider to geekdom.
  • Contrary to the likes of McLuhan, Carr, and Morozov, my perspective on technology and society is non-deterministic. The way I use them, “implication” and “affordance” aren’t about causal effects or, even, about direct connections. I’m not saying that society is causing technology to appear nor am I proposing a line from tools to social impacts. Technology and society are in a complex system.
  • Further, my approach isn’t predictive. I’m not saying what will happen based on technological advances nor am I saying what technology will appear. I’m thinking about the meaning of technology in an intersubjective way.
  • My personal attitude on tools and gadgets is rather ambivalent. This becomes clear as I go back and forth between techno-enthusiastic contexts (where I can almost appear like a Luddite) and techno-skeptical contexts (where some might label me as a gadget freak). I integrate a number of tools in my life but I can be quite wary about them.
  • I’m not wedded to the ideas I’m putting forth, here. They’re just broad musings of what might be. More than anything, I hope to generate thoughtful discussion. That’s why I start this post with a broad statement (not my usual style).
  • Of course, I know that other people have had similar ideas and I know that a concept of “wearable hub” already exists. It’s obvious enough that it’s one of these things which can be invented independently.

From Wearables to Hubs

Back in the 1990s, “wearable computing” became something of a futuristic buzzword, often having to do with articles of clothing. There have been many experiments and prototypes converging on an idea that we would, one day, be able to wear something resembling a full computer. Meanwhile, “personal digital assistants” became something of a niche product and embedded systems became an important dimension of car manufacturing.

Fast-forward to 2007, when a significant shift in the use of smartphones occurred. Smartphones existed before that time, but their usages, meanings, and positions in the public discourse changed quite radically around the time of the iPhone’s release. Not that the iPhone itself “caused a smartphone revolution” or that smartphone adoption suddenly reached a “tipping point”. I conceive of this shift as a complex interplay between society and tools. Not only more Kuhn than Popper, but more Latour than Kurzweil.

Smartphones, it may be argued, “happened”.

Without being described as “wearable devices”, smartphones started playing some of the functions people might have assigned to wearable devices. The move was subtle enough that Limor Fried recently described it as a realization she’s been having. Some tech enthusiasts may be designing location-aware purses and heads-up displays in the form of glasses. Smartphones are already doing a lot of the things wearables were supposed to do. Many people “wear” smartphones at most times during their waking lives and these Internet-connected devices are full of sensors. With the proliferation of cases, one might even perceive some of them as fashion accessories, like watches and sunglasses.

Where smartphones become more interesting, in terms of wearable computing, is as de facto wearable hubs.

My Wearable Devices

Which brings me to mention the four sensors I’ve been using more extensively during the past two months:

Yes, these all have to do with fitness (and there’s quite a bit of overlap between them). And, yes, I started using them a few days after the New Year. But it’s not about holiday gifts or New Year’s resolutions. I’ve had some of these devices for a while and decided to use them after consulting with a physician about hypertension. Not only have they helped me quite a bit in solving some health issues, but these devices got me to think.

(I carry several other things with me at most times. Some of my favourites include Tenqa REMXD Bluetooth headphones and the LiveScribe echo smartpen.)

One aspect is that they’re all about the so-called “quantified self”. As a qualitative researcher, I tend to be skeptical of quants. In this case, though, the stats I’m collecting about myself fit with my qualitative approach. Along with quantitative data from these devices, I’ve started collecting qualitative data about my life. The next step is to integrate all those data points automatically.

These sensors are also connected to “gamification”, a tendency I find worrisome, preferring playfulness. Though game mechanics are applied to the use of these sensors, I choose to rely on my intrinsic motivation, not paying much attention to scores and badges.

But the part which pushed me to start taking the most notes was that all these sensors connect with my iOS (and Android) devices. And this is where the “wearable hub” comes into play. None of these devices is autonomous. They’re all part of my personal “arsenal”, the equipment I have on me on most occasions. Though there are many similarities between them, they still serve different purposes, which are much more limited than those “wearable computers” might have been expected to serve. Without a central device serving as a type of “hub”, these sensors wouldn’t be very useful. This “hub” need not be a smartphone, despite the fact that, by default, smartphones are taken to be the key piece in this kind of setup.
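The division of labour I have in mind can be sketched in a few lines of code. This is purely illustrative (all names are invented, not any real device’s API): sensors only produce readings, while the hub decides when to collect them and merges them into one timestamped log.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Sensor:
    """A wearable sensor: reports one value, knows nothing else."""
    name: str
    read: Callable[[], float]

@dataclass
class Hub:
    """The hub interconnects sensors and keeps the merged record."""
    sensors: List[Sensor] = field(default_factory=list)
    log: List[Tuple[int, str, float]] = field(default_factory=list)

    def poll(self, t: int) -> None:
        # The hub, not the sensor, decides when data is collected
        # and where it ends up.
        for s in self.sensors:
            self.log.append((t, s.name, s.read()))

hub = Hub(sensors=[Sensor("steps", lambda: 42.0),
                   Sensor("heart_rate", lambda: 68.0)])
hub.poll(t=0)
print(hub.log)   # [(0, 'steps', 42.0), (0, 'heart_rate', 68.0)]
```

Nothing in this sketch requires the hub to be a phone: anything that can pair with the sensors and hold the log would do.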

In my personal scenario, I do use a smartphone as a hub. But I also use tablets. And I could easily use an existing device of another type (say, an iPod touch), or even a new type of device meant to serve as a wearable hub. Smartphones’ “hub” affordances aren’t exclusive.

From Digital Hub to Wearable Hub

Most of the devices which would likely serve as hubs for wearable sensors can be described as “Post-PC”. They’re clearly “personal” and they’re arguably “computers”. Yet they’re significantly different from the “Personal Computers” which have been so important at the end of last century (desktop and laptop computers not used as servers, regardless of the OS they run).

Wearability is a key point, here. But it’s not just a matter of weight or form factor. A wearable hub needs to be wireless in at least two important ways: independent from a power source and connected to other devices through radio waves. The fact that they’re worn at all times also implies a certain degree of integration with other things carried throughout the day (wallets, purses, backpacks, pockets…). These devices may also be more “personal” than PCs because they may be more apparent and more amenable to customization than PCs.

Smartphones fit the bill as wearable hubs. Their form factors and battery life make them wearable enough. Bluetooth (or ANT+, Nike+, etc.) has been used to pair them wirelessly with sensors. Their connectivity to GPS and cellular networking as well as their audio and visual i/o can have interesting uses (mapping a walk, data updates during a commute, voice feedback…). And though they’re far from ubiquitous, smartphones have become quite common in key markets.

Part of the reason I keep thinking about “hubs” has to do with comments made in 2001 by then Apple CEO Steve Jobs about the “digital lifestyle” age in “PC evolution” (video of Jobs’s presentation; as an anthropologist, I’ll refrain from commenting on the evolutionary analogies):

We believe the PC, or more… importantly, the Mac can become the “digital hub” of our emerging digital lifestyle, with the ability to add tremendous value to … other digital devices.

… like camcorders, portable media players, cellphones, digital cameras, handheld organizers, etc. (Though they weren’t mentioned, other peripherals like printers and webcams also connect to PCs.)

The PC was thus going to serve as a hub, “not only adding value to these devices but interconnecting them, as well”.

At the time, these were the key PC affordances which distinguished it from those other digital devices:

  • Big screen affording more complex user interfaces
  • Large, inexpensive hard disk storage
  • Burning DVDs and CDs
  • Internet connectivity, especially broadband
  • Running complex applications (including media processing software like the iLife suite)

Though Jobs pinpointed iLife applications as the basis for this “digital hub” vision, it sounds like FireWire was meant to be an even more important part of this vision. Of course, USB has supplanted FireWire in most use cases. It’s interesting, then, to notice that Apple only recently started shipping Macs with USB 3. In fact, DVD burning is absent from recent Macs. In 2001, the Mac might have been at the forefront of this “digital lifestyle” age. In 2013, the Mac has moved away from its role as “digital hub”.

In the meantime, the iPhone has become one of the best known examples of what I’m calling “wearable hubs”. It has a small screen and small, expensive storage (by today’s standards). It also can’t burn DVDs. But it does have nearly-ubiquitous Internet connectivity and can run fairly complex applications, some of which are adapted from the iLife suite. And though it does have wired connectivity (through Lightning or the “dock connector”), its main hub affordances have to do with Bluetooth.

It’s interesting to note that the same Steve Jobs, who used the “digital hub” concept to explain that the PC wasn’t dead in 2001, is partly responsible for popularizing the concept of “post-PC devices” six years later. One might perceive hypocrisy in this much delayed apparent flip-flop. On the other hand, Steve Jobs’s 2007 comments (video) were somewhat nuanced, as to the role of post-PC devices. What’s more interesting, though, is to think about the implications of the shift between two views of digital devices, regardless of Apple’s position through that shift.

Some post-PC devices (including the iPhone, until quite recently) do require a connection to a PC. In this sense, a smartphone might maintain its position with regards to the PC as digital hub. Yet, some of those devices are used independently of PCs, including by some people who never owned PCs.

Post-Smartphone Hubs

It’s possible to imagine a wearable hub outside of the smartphone (and tablet) paradigm. While smartphones are a convenient way to interconnect wearables, their hub-related affordances still sound limited: they lack large displays and their storage space is quite expensive. Their battery life may also be something to consider in terms of serving as hubs. Their form factors make some sense, when functioning as phones. Yet they have little to do with their use as hubs.

Part of the realization, for me, came from the fact that I’ve been using a tablet as something of an untethered hub. Since I use Bluetooth headphones, I can listen to podcasts and music while my tablet is in my backpack without being entangled in a cable. Sounds trivial but it’s one of these affordances I find quite significant. Delegating music playing functions to my tablet relates in part to battery life and use of storage. The tablet’s display has no importance in this scenario. In fact, given some communication between devices, my smartphone could serve as a display for my tablet. So could a “smartwatch” or “smartglasses”.

The Body Hub

Which led me to think about other devices which would work as wearable hubs. I originally thought about backpackable and pocketable devices.

But a friend had a more striking idea: a sensor mesh undershirt.

Under Armour’s Recharge Energy Suit may be an extreme version of this, one which would fit nicely among things Cathi Bond likes to discuss with Nora Young on The Sniffer. Nora herself has been discussing wearables on her blog as well as on her radio show. Sure, part of this concept is quite futuristic. But a sensor mesh undershirt is a neat idea for several reasons.

  • It’s easy to think of various sensors it may contain.
  • Given its surface area, it could hold enough battery power to supplement other devices.
  • It can be quite comfortable in cold weather and might even help diffuse heat in warmer climates.
  • Though wearable, it need not be visible.
  • Thieves would probably have a hard time stealing it.
  • Vibration and haptic feedback on the body can open interesting possibilities.

Not that it’s the perfect digital hub and I’m sure there are multiple objections to a connected undershirt (including issues with radio signals). But I find the idea rather fun to think about, partly because it’s so far away from the use of phones, glasses, and watches as smart devices.

Another thing I find neat, and it may partly be a coincidence, is the very notion of a “mesh”.

The Wearable Mesh

Mesh networking is a neat concept, which generates more hype than practical uses. As an alternative to WiFi access points and cellular connectivity, it’s unclear whether it will ever “take the world by storm”. But as a way to connect personal devices, it might have some potential. After all, as Bernard Benhamou recently pointed out on France Culture’s Place de la toile, the Internet of Things may not require always-on full-bandwidth connectivity. Typically, wearable sensors use fairly little bandwidth or only use it for limited amounts of time. A wearable mesh could connect wearable devices to one another while also exchanging data through the Internet itself.
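The core idea of a wearable mesh is multi-hop relaying: a reading travels device-to-device until it reaches whichever node currently has Internet access. A toy illustration, with entirely hypothetical device names, using breadth-first search to find the shortest hop path:

```python
from collections import deque

# Which devices can currently "hear" each other over short-range radio.
links = {
    "undershirt": ["watch"],
    "watch": ["undershirt", "phone"],
    "phone": ["watch", "gateway"],
    "gateway": ["phone"],      # the one node with Internet access
}

def route(src: str, dst: str):
    """Shortest hop path via breadth-first search, or None if unreachable."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(route("undershirt", "gateway"))
# ['undershirt', 'watch', 'phone', 'gateway']
```

The point of the sketch is that the undershirt never needs its own cellular radio: low-bandwidth readings can reach the wider Internet through whatever neighbour happens to be connected.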

Or with local devices. Smart cities, near field communication, and digital appliances occupy interesting positions among widely-discussed tendencies in the tech world. They may all have something to do with wearable devices. For instance, data exchanged between transit systems and their users could go through wearable devices. And while mobile payment systems can work through smartphones and other cellphones, wallet functions can also be fulfilled by other wearable devices.

Alternative Futures

Which might provide an appropriate segue into the ambivalence I feel toward the “wearable hub” concept I’m describing. Though I propose these ideas as if I were enthusiastic about them, they all give me pause. As a big fan of critical thinking, I like to think about “what might be” to generate questions and discussions exposing a diversity of viewpoints about the future.

Mass media discussions about these issues tend to focus on such things as privacy, availability, norms, and usefulness. Google Glass has generated quite a bit of buzz about all four. Other wearables may mainly raise issues for one or two of these broad dimensions. But the broad domain of wearable computing raises a lot more issues.

Technology enthusiasts enjoy discussing issues through the dualism between dystopia and utopia. An obvious issue with this dualism is that humans disagree about the two categories. Simply put, one person’s dystopia can be another person’s utopia, not to mention the nuanced views of people who see complex relationships between values and social change.

In such a context, a sociologist’s reflex may be to ask about the implications of these diverse values and opinions. For instance:

  • How do people construct these values?
  • Who decides which values are more important?
  • How might social groups cope with changes in values?

Discussing these issues and more, in a broad frame, might be quite useful. Some of the trickiest issues are raised after some changes in technology have already happened. From writing to cars, any technological context has unexpected implications. An ecological view of these implications could broaden the discussion.

I tend to like the concept of the “drift-off moment”, during which listeners (or readers) start thinking about the possibilities afforded by a new tool (or concept). In the context of a sales pitch, the idea is that these possibilities are positive: a potential buyer is thinking about the ways she might use a newfangled device. But I also like the deeper process of thinking about all sorts of implications, regardless of their value.

So…

What might be the implications of a wearable hub?


The Magazine and Social Media

Megaphone red by Adamantios (via Wikimedia Commons, GFDL, CC-BY-SA)

The following is my App Store review of The Magazine, a Newsstand offering by Instapaper developer Marco Arment.

Though I like Marco Arment’s work and there’s nothing specifically wrong about this implementation of the magazine model, I don’t find the magazine model particularly useful, at this point. And, make no mistake. The Magazine is indeed a magazine.

Oh, sure, this format overcomes several of the limitations set by advertising-based models and hierarchical boards. But it maintains something of the magazine logic: a tight bundle of a few articles authored by people connected through the same “editorial intent”. It’s not a conversation with the public. In this first issue, it’s not even a conversation among co-authors.

The “linked list” aspect of the “Fireball Format” (from John Gruber’s Daring Fireball media property) is described in one of the pieces in this first issue. Other distinguishing factors of the “Fireball Format” aren’t discussed in that same piece. They include a “no comment” policy which has become rather common among high-profile blogs. Unlike most blogs of the pioneer era in social media, these blogs don’t allow readers to comment directly.

A justification for this policy is that comments can be posted elsewhere. And since most of these bloggers are active on microblogging platforms like App.net and Twitter, there’s a chance that a comment might be noticed by those authors. What’s missing, though, is the sense of belonging which bloggers created among themselves before MySpace.

In other words, now that there are large social networking services online, the social aspects of blogging have been deemphasized and authorial dimensions have come to prominence. Though Arment dislikes the word, blog authors have become “brands”. It still works when these authors are in conversation with one another, when there’s a likelihood of a “followup” (FU in 5by5 parlance), when authors are responsive.

None of that interaction potential seems to be part of the core model for The Magazine. You can scream at your iOS device all you want, Jason Snell will probably not respond to you in a future edition of The Magazine. You can attempt dialogue on Twitter, but any conversation you may succeed in starting there is unlikely to have any impact on The Magazine. You’re talking with authors, now, not with members of a community.

With The Magazine, the transition from social to authorial is almost complete. Not only are posts set apart from the conversation but the editorial act of bundling posts together brings back all the problems media scholars have been pointing out for the past several decades. The issue at stake isn’t merely the move to online delivery. It’s the structure of authority and the one-to-many broadcast-style transmission. We’ve taken a step back.

So, while The Magazine has certain technical advantages over old school magazines like The Daily and Wired, it represents a step away from social media and towards mass media. Less critical thinking, more pedestals.

A new model could emerge using the infrastructure and business model that Arment built. But it’d require significant work outside of the application. The Feature might contribute something to this new model, especially if the way posts are bundled together became more flexible.

So, all in all, I consider The Magazine to be a step in the wrong direction by someone whose work I respect.

Good thing we still have podcasts.

Activism and Journalism

In yesterday’s “Introduction to Society” class, we discussed a number of things related to activism, journalism, labour issues, and even Apple and Foxconn (along with slacktivism, Kony 2012, mass media, moral entrepreneurs, and Wal-Mart).

This discussion was sparked, in part, from a student’s question:

What good are the findings the sociologists obtain if the sociologists themselves are passive to the issues observed?

Very good question, and I feel that the discussion we’ve had in class scratched the surface of the issue.

My response could have related to my current work, which I have mentioned in class on several occasions. These days, an important part of my work outside of the Ivory Tower has to do with community organizations. More specifically, I do fieldwork for Communautique, whose mission is to:

Support civic participation by promoting information literacy, appropriation of information and communications technologies and contribution to their development.

Though I’m no activist, I see a clear role for activism and my work directly supports a form of activism. The goal here is social change, toward increased participation by diverse citizens. Thankfully, this is no “us/them” campaign. There’s no demonization, here. Many of us may disagree on a course of action, but inclusion, not confrontation, is among this work’s main goals.

I sincerely think that my work, however modest, may have a positive impact. Not that I delude myself into thinking that there’s a “quick fix” to problems associated with social exclusion. But I see a fairly clear bifurcation between paths and I choose one which might lead to increased inclusiveness.

I didn’t talk about my work during our classroom discussion. Though I love to talk about it, I try to make these discussions as interactive as possible. Even when I end up talking more than anybody else, I do what I can not to lead the discussion in too specific a direction. So, instead of talking about Communautique, we talked about Foxconn. I’m pretty sure I brought it up, but it was meant as a way to discuss a situation with which students can relate.

Turns out, there was an ideal case to discuss many of these themes. Here’s a message about this case that I just sent to the class’s forum:

Some of you might have heard of this but I hadn’t, before going to class. Sounds to me like it brings together several points we’ve discussed yesterday (activism, journalism, message dissemination, labour conditions, Foxconn, Apple…). It also has a lot to do with approaches to truth, which do tend to differ.

 

So… An episode of This American Life about Foxconn factories making Apple products contained a number of inaccurate things, coming from Mike Daisey, a guy who does monologues as stage plays. These things were presented as facts (and had gone through an elaborate “factchecking” process) and Daisey defends them as theatre, meant to make people react.

 

Here’s a piece about it, from someone who was able to pinpoint some inaccuracies: “An acclaimed Apple critic made up the details”.

 

The retraction from the team at This American Life took a whole show, along with an apparently difficult blogpost.

Interesting stuff, if you ask me. Especially since people might argue that the whole event may negatively impact the cause. After all, the problems of factory workers in China may appeal to more than people’s quickest emotional responses. Though I’m a big fan of emotions, I also think there’s an opportunity to discuss these issues thoughtfully and critically. The issue goes further than Apple or even Foxconn. And it has a lot to do with Wallerstein’s “World Systems Theory”.

 

Anyhoo… Just thought some of you may be interested.

Intimacy, Network Effect, Hype

Is “intimacy” a mere correlate of the network effect?

Can we use the network effect to explain what has been happening with Quora?

Is the Quora hype related to network effect?

I really don’t feel a need to justify my dislike of Quora. Oh, sure, I can explain it. At length. Even on Quora itself. And elsewhere. But I tend to sense some defensiveness on the part of Quora fans.

[Speaking of fans, I have blogposts on fanboism lying in my head, waiting to be hatched. Maybe this will be part of it.]

But the important point, to me, isn’t about whether or not I like Quora. It’s about what makes Quora so divisive. There are people who dislike it and there are some who defend it.

Originally, I was only hearing from contacts and friends who just looooved Quora. So I was having a “Ionesco moment”: why is it that seemingly “everyone” who uses it loves Quora when, to me, it represents such a move in the wrong direction? Is there something huge I’m missing? Or has that world gone crazy?

It was a surreal experience.

And while I’m all for surrealism, I get this strange feeling when I’m so unable to understand a situation. It’s partly a motivation for delving into the issue (I’m surely not the only ethnographer to get this). But it’s also unsettling.

And, for Quora at least, this phase seems to be over. I now think I have a good idea as to what makes for such a difference in people’s experiences with Quora.

It has to do with the network effect.

I’m sure some Quora fanbois will disagree, but it’s now such a clear picture in my mind that it gets me into the next phase. Which has little to do with Quora itself.

The “network effect” is the kind of notion which is so commonplace that few people bother explaining it outside of introductory courses (same thing with “group forming” in social psychology and sociology, or preferential marriage patterns in cultural anthropology). What someone might call (perhaps dismissively): “textbook stuff.”

I’m completely convinced that there’s a huge amount of research on the network effect, but I’m also guessing that few people look it up. And I’m not accusing anyone, here. Ever since I first heard of it (in 1993, or so), I’ve rarely looked at explanations of it and I actually don’t care about the textbook version of the concept. And I won’t “look it up.” I’m more interested in diverse usage patterns related to the concept (I’m a linguistic anthropologist).

So, the version I first heard (at a time when the Internet was off most people’s radar) was something like: “in networked technology, you need critical mass for the tools to become truly useful. For instance, the telephone has no use if you’re the only one with one and it has only very limited use if you can only call a single person.” Simple to the point of being simplistic, but a useful reminder.
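That simple telephone version is often formalized as Metcalfe-style counting: the number of *potential* pairwise connections among n users grows roughly as n(n−1)/2, which is why early growth feels so dramatic. A minimal sketch of that arithmetic (my own illustration, not something from the original discussion):

```python
def potential_links(n: int) -> int:
    """Number of possible pairwise connections among n users of a network."""
    return n * (n - 1) // 2

# A lone telephone owner can call nobody; two owners form one possible link.
for n in (1, 2, 10, 100):
    print(n, potential_links(n))
```

The point of the toy calculation is only the shape of the curve: going from 10 to 100 users multiplies the possible links by far more than ten, which is one crude way to read “critical mass.”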

Over the years, I’ve heard and read diverse versions of that same concept, usually in more sophisticated form, but usually revolving around the same basic idea that there’s a positive effect associated with broader usage of some networked technology.

I’m sure specialists have explored every single implication of this core idea, but I’m not situating myself as a specialist of technological networks. I’m into social networks, which may or may not be associated with technology (however defined). There are social equivalents of the “network effect” and I know some people are passionate about those. But I find that it’s quite limiting to focus so exclusively on quantitative aspects of social networks. What’s so special about networks, in a social science perspective, isn’t scale. Social scientists are used to working with social groups at any scale and we’re quite aware of what might happen at different scales. But networks are fascinating because of different features they may have. We may gain a lot when we think of social networks as acephalous, boundless, fluid, nameless, indexical, and impactful. [I was actually lecturing about some of this in my "Intro to soci" course, yesterday...]

So, from my perspective, “network effect” is an interesting concept when talking about networked technology, in part because it relates to the social part of those networks (innovation happens mainly through technological adoption, not through mere “invention”). But it’s not really the kind of notion I’d visit regularly.

This case is somewhat different. I’m perceiving something rather obvious (and which is probably discussed extensively in research fields which have to do with networked technology) but which strikes me as missing from some discussions of social networking systems online. In a way, it’s so obvious that it’s kind of difficult to explain.

But what’s coming up in my mind has to do with a specific notion of “intimacy.” It’s actually something which has been on my mind for a while and it might still need to “bake” a bit longer before it can be shared properly. But, like other University of the Streets participants, I perceive the importance of sharing “half-baked thoughts.”

And, right now, I’m thinking of an anecdotal context which may get the point across.

Given my attendance policy, there are class meetings during which a rather large proportion of the class is missing. I tend to call this an “intimate setting,” though I’m aware that it may have different connotations to different people. From what I can observe, people in class get the point. The classroom setting is indeed changing significantly and it has to do with being more “intimate.”

Not that we’re necessarily closer to one another physically or intellectually. It need not be a “bonding experience” for the situation to be interesting. And it doesn’t have much to do with “absolute numbers” (a classroom with 60 people is relatively intimate when the usual attendance is close to 100; a classroom with 30 people feels almost overwhelming when only 10 people were showing up previously). But there’s some interesting phenomenon going on when there are fewer people than usual, in a classroom.

Part of this phenomenon may relate to motivation. In some ways, one might expect that those who are attending at that point are the “most dedicated students” in the class. This might be a fairly reasonable assumption in the context of a snowstorm but it might not work so well in other contexts (say, when the incentive to “come to class” relates to extrinsic motivation). So, what’s interesting about the “intimate setting” isn’t necessarily that it brings together “better people.” It’s that something special goes on.

What’s going on, with the “intimate classroom,” can vary quite a bit. But there’s still “something special” about it. Even when it’s not a bonding experience, it’s still a shared experience. While “communities of practice” are fascinating, this is where I tend to care more about “communities of experience.” And, again, it doesn’t have much to do with scale and it may have relatively little to do with proximity (physical or intellectual). But it does have to do with cognition and communication. What is special with the “intimate classroom” has to do with shared assumptions.

Going back to Quora…

When an online service with any kind of network effect is still relatively new, there’s something related to the “intimate setting” going on. In other words, it seems like the initial phase of the network effect is the “intimacy” phase: the service has a “large enough userbase” to be useful (so, it’s achieved a first type of critical mass) but it’s still not so “large” as to be overwhelming.

During that phase, the service may feel to people like a very welcoming place. Everyone can be on a “first-name basis.” High-status users mingle with others as if there weren’t any hierarchy. In this sense, it’s a bit like the liminal phase of a rite of passage, during which communitas is achieved.

This phase is a bit like the Golden Age for an online service with a significant “social dimension.” It’s the kind of time which may make people “wax nostalgic about the good ole days,” once it’s over. It’s the time before the BYT comes around.

Sure, there’s a network effect at stake.  You don’t achieve much of a “sense of belonging” by yourself. But, yet again, it’s not really a question of scale. You can feel a strong bond in a dyad and a team of three people can perform quite well. On the other hand, the cases about which I’m thinking are orders of magnitude beyond the so-called “Dunbar number” which seems to obsess so many people (outside of anthro, at least).

Here’s where it might get somewhat controversial (though similar things have been said about Quora): I’d argue that part of this “intimacy effect” has to do with a sense of “exclusivity.” I don’t mean this as the way people talk about “elitism” (though, again, there does seem to be explicit elitism involved in Quora’s case). It’s more about being part of a “select group of people.” About “being there at the time.” It can get very elitist, snobbish, and self-serving very fast. But it’s still about shared experiences and, more specifically, about the perceived boundedness of communities of experience.

We all know about early adopters, of course. And, as part of my interest in geek culture, I keep advocating for more social awareness in any approach to the adoption part of social media tools. But what I mean here isn’t about a “personality type” or about the “attributes of individual actors.” In fact, this is exactly a point at which the study of social networks starts deviating from traditional approaches to sociology. It’s about the special type of social group the “initial userbase” of such a service may represent.

From a broad perspective (as outsiders, say, or using the comparativist’s “etic perspective”), that userbase is likely to be rather homogeneous. Depending on the enrollment procedure for the service, the structure of the group may be a skewed version of an existing network structure. In other words, it’s quite likely that, during that phase, most of the people involved were already connected through other means. In Quora’s case, given the service’s pushy overeagerness on using Twitter and Facebook for recruitment, it sounds quite likely that many of the people who joined Quora could already be tied through either Twitter or Facebook.

Anecdotally, it’s certainly been my experience that the overwhelming majority of people who “follow me on Quora” have been part of my first degree on some social media tool in the recent past. In fact, one of my main reactions as I’ve been getting those notifications of Quora followers was: “here are people with whom I’ve been connected but with whom I haven’t had significant relationships.” In some cases, I was actually surprised that these people would “follow” me while it appeared like they actually weren’t interested in having any kind of meaningful interactions. To put it bluntly, it sometimes appeared as if people who had been “snubbing” me were suddenly interested in something about me. But that was just in the case of a few people I had unsuccessfully tried to engage in meaningful interactions and had given up thinking that we might not be that compatible as interlocutors. Overall, I was mostly surprised at seeing the quick uptake in my follower list, which doesn’t tend to correlate with meaningful interaction, in my experience.

Now that I understand more about the unthinking way new Quora users are adding people to their networks, my surprise has transformed into an additional annoyance with the service. In a way, it’s a repeat of the time (what was it? 2007?) when Facebook applications got their big push and we kept receiving those “app invites” because some “social media mar-ke-tors” had thought it wise to force people to “invite five friends to use the service.” To Facebook’s credit (more on this later, I hope), these pushy and thoughtless “invitations” are a thing of the past…on those services where people learnt a few lessons about social networks.

Perhaps interestingly, I’ve had a very similar experience with Scribd, at about the same time. I was receiving what seemed like a steady flow of notifications about people from my first degree online network connecting with me on Scribd, whether or not they had ever engaged in a meaningful interaction with me. As with Quora, my initial surprise quickly morphed into annoyance. I wasn’t using either service much and these meaningless connections made it much less likely that I would ever use these services to get in touch with new and interesting people. If most of the people who are connecting with me on Quora and Scribd are already in my first degree and if they tend to be people with whom I have limited interactions, why would I use these services to expand the range of people with whom I want to have meaningful interactions? They’re already within range and they haven’t been very communicative (for whatever reason, I don’t actually assume they were consciously snubbing me). Investing in Quora for “networking purposes” seemed like a futile effort, for me.

Perhaps because I have a specific approach to “networking.”

In my networking activities, I don’t focus on either “quantity” or “quality” of the people involved. I seriously, genuinely, honestly find something worthwhile in anyone with whom I can eventually connect, so the “quality of the individuals” argument doesn’t work with me. And I’m seriously, genuinely, honestly not trying to sell myself on a large market, so the “quantity” issue is one which has almost no effect on me. Besides, I already have what I consider to be an amazing social network online, in terms of quality of interactions. Sure, people with whom I interact are simply amazing. Sure, the size of my first degree network on some services is “well above average.” But these things wouldn’t matter at all if I weren’t able to have meaningful interactions in these contexts. And, as it turns out, I’m lucky enough to be able to have very meaningful interactions in a large range of contexts, both offline and on. Part of it has to do with the fact that I’m a teaching addict. Part of it has to do with the fact that I’m a papillon social (social butterfly). It may even have to do with a stage in my life, at which I still care about meeting new people but I don’t really need new people in my circle. Part of it makes me much less selective than most other people (I like to have new acquaintances) and part of it makes me more selective (I don’t need new “friends”). If it didn’t sound condescending, I’d say it has to do with maturity. But it’s not about my own maturity as a human being. It’s about the maturity of my first-degree network.

There are other people who are in an expansionist phase. For whatever reason (marketing and job searches are the best-known ones, but they’re really not the only ones), some people need to get more contacts and/or contacts with people who have some specific characteristics. For instance, there are social activists out there who need to connect to key decision-makers because they have a strong message to carry. And there are people who were isolated from most other people around them because of stigmatization who just need to meet non-judgmental people. These, to me, are fine goals for someone to expand her or his first-degree network.

Some of it may have to do with introversion. While extraversion is a “dominant trait” of mine, I care deeply about people who consider themselves introverts, even when they start using it as a divisive label. In fact, that’s part of the reason I think it’d be neat to hold a ShyCamp. There’s a whole lot of room for human connection without having to rely on devices of outgoingness.

So, there are people who may benefit from expansion of their first-degree network. In this context, the “network effect” matters in a specific way. And if I think about “network maturity” in this case, there’s no evaluation involved, contrary to what it may seem like.

As you may have noticed, I keep insisting on the fact that we’re talking about “first-degree network.” Part of the reason is that I was lecturing about a few key network concepts just yesterday, so getting people to understand the difference between “the network as a whole” (especially on an online service) and “a given person’s first-degree network” is important to me. But another part relates back to what I’m getting to realize about Quora and Scribd: the process of connecting through an online service may have as much to do with collapsing some degrees of separation as with “being part of the same network.” To use Granovetter’s well-known terms, it’s about transforming “weak ties” into “strong” ones.
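To make the first-degree vs. whole-network distinction concrete, here’s a toy sketch (the names and the graph are entirely hypothetical, purely for illustration): a first-degree network is just one person’s direct ties, while “degrees of separation” is the shortest path through everyone else’s ties.

```python
from collections import deque

# Hypothetical undirected social graph: each person maps to their direct ties.
graph = {
    "alex": {"beth", "carl"},
    "beth": {"alex", "dina"},
    "carl": {"alex"},
    "dina": {"beth", "earl"},
    "earl": {"dina"},
}

def first_degree(g, person):
    """A person's first-degree network: direct ties only."""
    return g[person]

def degrees_of_separation(g, a, b):
    """Shortest-path length between two people (breadth-first search)."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in g[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # the two people are in disconnected components

print(sorted(first_degree(graph, "alex")))            # ['beth', 'carl']
print(degrees_of_separation(graph, "alex", "earl"))   # 3
```

In these terms, what Quora or Scribd do is invite “alex” to connect directly to “earl”: the whole-network path already existed, and the service merely collapses three degrees into one, without making the tie any stronger.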

And I specifically don’t mean it as a “quality of interaction.” What is at stake, on Quora and Scribd, seems to have little to do with creating stronger bonds. But they may want to create closer links, in terms of network topology. In a way, it’s a bit like getting introduced on LinkedIn (and it corresponds to what biz-minded people mean by “networking”): you care about having “access” to that person, but you don’t necessarily care about her or him, personally.

There’s some sense in using such an approach on “utilitarian networks” like professional or Q&A ones (LinkedIn does both). But there are diverse ways to implement this approach and, to me, Quora and Scribd do it in a way which is very precisely counterproductive. The way LinkedIn does it is context-appropriate. So is the way Academia.edu does it. In both of these cases, the “transaction cost” of connecting with someone is commensurate with the degree of interaction which is possible. On Scribd and Quora, they almost force you to connect with “people you already know” and the “degree of interaction” which is imposed on users is disproportionately high (especially in Quora’s case, where a contact of yours can annoy you by asking you personally to answer a specific question). In this sense, joining Quora is a bit closer to being conscripted into a war while registering on Academia.edu is just a tiny bit more like getting into a country club. The analogies are tenuous but they probably get the point across. Especially since I get the strong impression that the “intimacy phase” has a lot to do with the “country club mentality.”

See, the social context in which these services gain much traction (relatively tech-savvy Anglophones in North America and Europe) assigns very negative connotations to social exclusion, but people keep being fascinated by the affordances of “select clubs” in terms of social capital. In other words, people may be very vocal as to how nasty it would be if some people had exclusive access to some influential people, yet there’s what I perceive as an obsession with influence among the same people. As a caricature: “The ‘human rights’ movement leveled the playing field and we should never ever go back to those dark days of Old Boys’ Clubs and Secret Societies. As soon as I become the most influential person on the planet, I’ll make sure that people who think like me get the benefits they deserve.”

This is where the notion of elitism, as applied specifically to Quora but possibly expanding to other services, makes the most sense. “Oh, no, Quora is meant for everyone. It’s Democratic! See? I can connect with very influential people. But, isn’t it sad that these plebeians are coming to Quora without a proper knowledge of the only right way to ask questions and without proper introduction by people I can trust? I hate these n00bz! Even worse, there are people now on the service who are trying to get social capital by promoting themselves. The nerve on these people, to invade my own dedicated private sphere where I was able to connect with the ‘movers and shakers’ of the industry.” No wonder Quora is so journalistic.

But I’d argue that there’s a part of this which is a confusion between first-degree networks and connection. Before Quora, the same people were indeed connected to these “influential people,” who allegedly make Quora such a unique system. After all, they were already online and I’m quite sure that most of them weren’t more than three or four degrees of separation from Quora’s initial userbase. But access to these people was difficult because connections were indirect. “Mr. Y Z, the CEO of Company X was already in my network, since there were employees of Company X who were connected through Twitter to people who follow me. But I couldn’t just coldcall CEO Z to ask him a question, since CEOs are out of reach, in their caves. Quora changed everything because Y responded to a question by someone ‘totally unconnected to him’ so it’s clear, now, that I have direct access to my good ol’ friend Y’s inner thoughts and doubts.”

As RMS might say, this type of connection is a “seductive mirage.” Because, I would argue, not much has changed in terms of access and whatever did change was already happening all over this social context.

At the risk of sounding dismissive, again, I’d say that part of what people find so alluring in Quora is “simply” an epiphany about the Small World phenomenon. With all sorts of fallacies caught in there. Another caricature: “What? It takes only three contacts for me to send something from rural Idaho to the head honcho at some Silicon Valley firm? This is the first time something like this happens, in the History of the Whole Wide World!”

Actually, I do feel quite bad about these caricatures. Some of those who are so passionate about Quora, among my contacts, have been very aware of many things happening online since the early 1990s. But I have to be honest in how I receive some comments about Quora and much of it sounds like a sudden realization of something which I thought was a given.

The fact that I feel so bad about these characterizations relates to the fact that, contrary to what I had planned to do, I’m not linking to specific comments about Quora. Not that I don’t want people to read about this but I don’t want anyone to feel targeted. I respect everyone and my characterizations aren’t judgmental. They’re impressionistic and, again, caricatures.

Speaking of what I had planned, beginning this post… I actually wanted to talk less about Quora specifically and more about other issues. Sounds like I’m currently getting sidetracked, and it’s kind of sad. But it’s ok. The show must go on.

So, other services…

While I’ve had similar experiences with Scribd and Quora about getting notifications of new connections from people with whom I haven’t had meaningful interactions, I’ve had a very different experience on many (probably most) other services.

An example I like is Foursquare. “Friendship requests” I get on Foursquare are mostly from: people with whom I’ve had relatively significant interactions in the past, people who were already significant parts of my second-degree network, or people I had never heard of. Sure, there are some people with whom I had tried to establish connections, including some who seem to reluctantly follow me on Quora. But the proportion of these is rather minimal and, for me, the stakes in accepting a friend request on Foursquare are quite low since it’s mostly about sharing data I already share publicly. Instead of being able to solicit my response to a specific question, the main thing my Foursquare “friends” can do that others can’t is give me recommendations, tips, and “notifications of their presence.” These are all things I might actually enjoy, so there’s nothing annoying about it. Sure, like any online service with a network component, these days, there are some “friend requests” which are more about self-promotion. But those are usually easy to avoid and, even if I get fooled by a “social media mar-ke-tor,” the most this person may do to me is give me a recommendation about “some random place.” Again, easy to avoid. So, the “social network” dimension of Foursquare seems appropriate, to me. Not ideal, but pretty decent.

I never really liked the “game” aspect and while I did play around with getting badges and mayorships in my first few weeks, it never felt like the point of Foursquare, to me. As Foursquare eventually became mainstream in Montreal and I was asked by a journalist about my approach to Foursquare, I was exactly in the phase when I was least interested in the game aspect and wished we could talk a whole lot more about the other dimensions of the phenomenon.

And I realize that, as I’m saying this, I may sound to some as exactly those who are bemoaning the shift out of the initial userbase of some cherished service. But there are significant differences. Note that I’m not complaining about the transition in the userbase. In the Foursquare context, “the more the merrier.” I was actually glad that Foursquare was becoming mainstream as it was easier to explain to people, it became more connected with things business owners might do, and generally had more impact. What gave me pause, at the time, is the journalistic hype surrounding Foursquare which seemed to be missing some key points about social networks online. Besides, I was never annoyed by this hype or by Foursquare itself. I simply thought that it was sad that the focus would be on a dimension of the service which was already present on not only Dodgeball and other location-based services but, pretty much, all over the place. I was critical of the seemingly unthinking way people approached Foursquare but the service itself was never that big a deal for me, either way.

And I pretty much have the same attitude toward any tool. I happen to have my favourites, which either tend to fit neatly in my “workflow” or otherwise have some neat feature I enjoy. But I’m very wary of hype and backlash. Especially now. It gets old very fast and it’s been going for quite a while.

Maybe I should just move away from the “tech world.” It’s the context for such a hype-and-buzz machine that it almost makes me angry. [I very rarely get angry.] Why do I care so much? You can say it’s accumulation, over the years. Because I still care about social media and I really do want to know what people are saying about social media tools. I just wish discussion of these tools weren’t soooo “superlative”…

Obviously, I digress. But this is what I like to do on my blog and it has a cathartic effect. I actually do feel better now, thank you.

And I can talk about some other things I wanted to mention. I won’t spend much time on them because this is long enough (both as a blogpost and as a blogging session). But I want to set a few placeholders, for further discussion.

One such placeholder is about some pet theories I have about what worked well with certain services. Which is exactly the kind of thing “social media entrepreneurs” and journalists are so interested in, yet they end up talking about the same dimensions.

Let’s take Twitter, for instance. Sure, sure, there’s been a lot of talk about what made Twitter a success and probably everybody knows that it got started as a side-project at Odeo, and blah, blah, blah. Many people also realize that there were other microblogging services around as Twitter got traction. And I’m sure some people use Twitter as a “textbook case” of “network effect” (however they define that effect). I even mention the celebrity dimensions of the “Twitter phenomenon” in class (my students aren’t easily starstruck by Bieber and Gaga) and I understand why journalists are so taken by Twitter’s “broadcast” mission. But something which has been discussed relatively rarely is the level of responsiveness by Twitter developers, over the years, to people’s actual use of the service. Again, we all know that “@-replies,” “hashtags,” and “retweets” were all emerging usage patterns that Twitter eventually integrated. And some discussion has taken place when Twitter changed its core prompt to reflect the fact that the way people were using it had changed. But there’s relatively little discussion as to what this process implies in terms of “developing philosophy.” As people are still talking about being “proactive” (ugh!) with users, and crude measurements of popularity keep being sold and bandied about, a large part of the tremendous potential for responsiveness (through social media or otherwise) is left untapped. People prefer to hype a new service which is “likely to have Twitter-like success because it has the features users have said they wanted in the survey we sell.” Instead of talking about the “get satisfaction” effect in responsiveness. Not that “consumers” now have “more power than ever before.” But responsive developers who refrain from imposing their views (Quora, again) tend to have a more positive impact, socially, than those which are merely trying to expand their userbase.

Which leads me to talk about Facebook. I could talk for hours on end about Facebook, but I almost feel afraid to do so. At this point, Facebook is conceived in what I perceive to be such a narrow way that it seems like anything I might say would sound exceedingly strange. Given the fact that it was part of one of the first waves of Web tools with explicit social components to reach mainstream adoption, it almost sounds “historical” in timeframe. But, as so many people keep saying, it’s just not that old. IMHO, part of the implication of Facebook’s relatively young age should be that we are able to discuss it as a dynamic process, instead of assigning it to a bygone era. But, whatever…

Actually, I think part of the reason there’s such a lack of depth in discussing Facebook is also part of the reason it was so special: it was originally a very select service. Since, for a significant period of time, the service was only available to people with email addresses ending in “.edu,” it’s not really surprising that many of the people who keep discussing it were actually not on the service “in its formative years.” But, I would argue, the fact that it was so exclusive at first (something which is often repeated but which seems to be understood in a very theoretical sense) contributed quite significantly to its success. Of course, similar claims have been made, but I’d say that my own claim is deeper than others’.

[Bang! I really don't tend to make claims so, much of this blogpost sounds to me as if it were coming from somebody else...]

Ok, I don’t mean it so strongly. But there’s something I think neat about the Facebook of 2005, the one I joined. So I’d like to discuss it. Hence the placeholder.

And, in this placeholder, I’d fit: the ideas about responsiveness mentioned with Twitter, the stepwise approach adopted by Facebook (which, to me, was the real key to its eventual success), the notion of intimacy which is the true core of this blogpost, the notion of hype/counterhype linked to journalistic approaches, a key distinction between privacy and intimacy, some non-ranting (but still rambling) discussion as to what Google is missing in its “social” projects, anecdotes about “sequential network effects” on Facebook as the service reached new “populations,” some personal comments about what I get out of Facebook even though I almost never spent any significant amount of time on it, some musings as to the possibility that there are online services which have reached maturity and may remain stable in the foreseeable future, a few digressions about fanboism or about the lack of sophistication in the social network models used in online services, and maybe a bit of fun at the expense of “social media expert marketors”…

But that’ll be for another time.

Cheers!

Minds of All Sizes Think Alike

Or «les esprits de toutes tailles se rencontrent».

This post is a response to the following post about Social Network Analysis (SNA), social change, and communication.

…My heart’s in Accra » Shortcuts in the social graph.

I have too many disparate things to say about that post to make it into a neat and tidy “quickie,” yet I feel like I should probably be working on other things. So we’ll see how this goes.

First, a bit of context…

[This "bit of context" may be a bit long so, please bear with me. Or you could get straight to the point, if you don't think you can bear the context bit.]

I’ve never met Ethan Zuckerman (@EthanZ), who wrote the post to which I’m responding. And I don’t think we’ve had any extended conversation in the past. Further, I doubt that I’m on his radar. He’s probably seen my name, since I’ve commented on some of his posts and some of his contacts may have had references to me through social media. But I very much doubt that he’s ever mentioned me to anyone. I’m not noticeable to him.

I, on the other hand, have mentioned Zuckerman on several occasions. Latest time I remember was in class, a few weeks ago. It’s a course on Africa and I was giving students a list of online sources with relevance to our work. Zuckerman’s connection to Africa may not be his main thing, despite his blog’s name, but it’s part of the reason I got interested in his work, a few years ago.

In fact, there’s something embarrassing, here… I so associate Zuckerman with Africa that my mind can’t help but link him to Erik Hersman, aka White African. I did meet Hersman. [To be exact, I met Erik at BarCampAustin, which is quite possibly the conference-like event which has had the most influence on me, in the past few years (I go to a lot of these events).] When I did meet Hersman, I made a faux-pas in associating him with Zuckerman. Good-natured as he seemed to be, Hersman smiled as he corrected me.

EthanZ and I have other contacts in common. Jeremy Clarke, for instance, who co-organizes WordCamp Montreal and has been quite active in Montreal’s geek scene. Jeremy’s also a developer for Global Voices, a blogging community that Zuckerman co-founded. I’m assuming Clarke and Zuckerman know each other.

Another mutual contact is Christopher Lydon, host of Radio Open Source. Chris and I have exchanged a few emails, and Zuckerman has been on ROS on a few occasions.

According to Facebook, Zuckerman and I have four contacts in common. Apart from Clarke and Hersman, there’s P. Kerim Friedman and Gerd Leonhard. Kerim is a fellow linguistic anthropologist and we’ve collaborated on the official Society for Linguistic Anthropology (SLA) site. I got in touch with Leonhard through “Music 2.0″ issues, as he was interviewed by Charles McEnerney on Well-Rounded Radio.

On LinkedIn, Zuckerman is part of my third degree, with McEnerney as one of my first-degree contacts who could connect me to Zuckerman, through Zuckerman’s contacts.

(Yes, I’m fully aware of the fact that I haven’t named a single woman in this list. Nor someone who doesn’t write in English with some frequency, for that matter.)

By this time, my guess is that you may be either annoyed or confused. “Surely, he can’t be that obsessed with Zuckerman as to stalk him in every network.”

No, I’m not at all obsessed with Ethan Zuckerman in any way, shape, or form. Though I mention him on occasion and I might have a good conversation with him if the occasion arises, I wouldn’t go hang out in Cambridge just in case I might meet him. Though I certainly respect his work, I wouldn’t treat him as my “idol” or anything like that. In other words, he isn’t a focus in my life.

And that’s a key point, to me.

In certain contexts, when social networks are discussed, too much is made of the importance of individuals. Yet, there’s something to be said about relative importance.

In his “shortcuts” post, Zuckerman talks about a special kind of individual: those who are able to bypass something of a clustering effect happening in many human networks. Malcolm Gladwell (probably “inspired” by somebody else) has used “connectors” to label a fairly similar category of people and, given Gladwell’s notoriety in some circles, the name has resonance in some contexts (mostly “business-focused people,” I would say, with a clear idea in my mind of the groupthink worldview implied).

In one of my earliest blogposts, I talked about an effect happening through a similar mechanism, calling it the “Social Butterfly Effect” (SBE). I still like it, as a concept. Now, I admit that it focuses on a certain type of individuals. But it’s more about their position in “the grand scheme of things” than about who they are, though I do associate myself with this “type.”

The basic idea is quite simple. People who participate in different (sub)networks, who make such (sub)networks sparser, are having unpredictable and unmeasurable effects on what is transmitted through the network(s).
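The basic idea can be sketched with a toy example (the names and the two tiny clusters below are my own invented placeholders, not anything from the SNA literature): two dense subnetworks have no path between them at all until one person who participates in both appears, at which point anything can travel across in a handful of hops.

```python
from collections import deque

def path_length(graph, src, dst):
    """Shortest hop count between two people, or None if unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

# Two dense clusters (hypothetical people), with no tie between them.
cluster_a = {"ann": ["bea", "carl"], "bea": ["ann", "carl"], "carl": ["ann", "bea"]}
cluster_b = {"dev": ["eli", "fay"], "eli": ["dev", "fay"], "fay": ["dev", "eli"]}
graph = {**cluster_a, **cluster_b}
print(path_length(graph, "ann", "fay"))  # None: nothing can travel across

# One "social butterfly" joins both clusters, and suddenly any message
# from cluster A can reach cluster B in a few hops.
graph["butterfly"] = ["carl", "dev"]
graph["carl"] = graph["carl"] + ["butterfly"]
graph["dev"] = graph["dev"] + ["butterfly"]
print(path_length(graph, "ann", "fay"))  # 4: ann → carl → butterfly → dev → fay
```

The point of the sketch is that a single bridging tie changes what is transmissible through the whole structure, even though it barely changes the structure itself.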

On one hand, it’s linked to my fragmentary/naïve understanding of the Butterfly Effect in the study of climate and as a component of Chaos Theory.

On the other hand, it’s related to Granovetter‘s well-known notion of “weak ties.” And it seems like Granovetter is making something of a comeback, as we discuss different mechanisms behind social change.

Interestingly, much of what is being said about weak ties, these past few weeks, relates to Gladwell’s flamebait piece and his apparent lack of insight in describing current social processes. Sounds like Gladwell may be too caught up in the importance of individuals to truly grok the power of networks.

Case in point… One of the most useful pieces I’ve read about weak ties, recently, was Jonah Lehrer‘s direct response to Gladwell:

Weak Ties, Twitter and Revolution | Wired Science | Wired.com.

Reading Lehrer’s piece, one gets the clear impression that Gladwell hadn’t “done his homework” on Granovetter before launching his trolling “controversial” piece on activism.

But I digress. Slightly.

Like the Gladwell-specific coverage, Zuckerman’s blogpost is also about social change and he’s already responded to Gladwell. One way to put it is that, as a figure, Gladwell has shaped the discussion in a way similar to a magnetic field orienting iron filings around it. Since it’s a localized effect having to do with polarization, the analogy is fairly useful, as analogies go.

Which brings me to groupthink, the apparent target of Zuckerman’s piece.

Still haven’t read Irving Janis but I’ve been quite interested in groupthink for a while. Awareness of the concept is something I immediately recognize, praise, and associate with critical thinking.

In fact, it’s one of several things I was pleasantly surprised to find in an introductory sociology WikiBook I ended up using in my “Intro. to Society” course, last year. Critical thinking was the main theme of that course, and this short section was quite fitting in the overall discussion.

So, what of groupthink and networks? Zuckerman sounds worried:

This is interesting to me because I’m intrigued – and worried – by information flows through social networks. If we’re getting more (not lots yet, but more) information through social networks and less through curated media like newspapers, do we run the risk of encountering only information that our friends have access to? Are we likely to be overinformed about some conversations and underinformed about others? And could this isolation lead to ideological polarization, as Cass Sunstein and others suggest? And if those fears are true, is there anything we can do to rewire social networks so that we’re getting richer, more diverse information?

Similar questions have animated many discussions in media-focused circles, especially in those contexts where the relative value (and meaning) of “old vs. new media” may be debated. At about the same time as I started blogging, I remember discussing things with a statistician friend about the polarization effect of media, strong confirmation bias in reading news stories, and political lateralization.

In the United States, especially, there’s a narrative (heard loud and clear) that people who disagree on some basic ideas are unable to hear one another. “Shockingly,” some say, “conservatives and liberals read different things.” Or “those on (the) two sides of (the) debate understand things in completely different ways.” It even reminds me of the connotations of Tannen’s book title, You Just Don’t Understand. Irreconcilable differences. (And the first time I mention a woman in this decidedly imbalanced post.)

While, as a French-Canadian ethnographer, my perspective is quite different from Zuckerman’s, I can’t help but sympathize with the feeling. Not that I associate groupthink with a risk in social media (au contraire!). But, like Zuckerman, I wish to find ways to move beyond these boundaries we impose on ourselves.

Zuckerman specifically discusses the attempt by Onnik Krikorian (@OneWMPhoto) to connect Armenians (at least those in Hayastan) and Azeris, with Facebook “affording” Krikorian some measure of success. This case is now well-known in media-centric circles and it has almost become shorthand for the power of social media. Given a personal interest in Armenians (at least in the Diaspora), my reaction to Krikorian’s success is less related to the media aspect than to the personal one.

At a personal level, boundaries may seem difficult to surmount but they can also be fairly porous and even blurry. Identity may be negotiated. Individuals crossing boundaries may be perceived in diverse ways, some of which have little to do with other people crossing the same boundaries. Things are lived directly, from friendships to wars, from breakups to reconciliations. Significant events happen regardless of the way they’re being perceived across boundaries.

Not that boundaries don’t matter but they don’t necessarily circumscribe what happens in “personal lives.” To use a seemingly arbitrary example, code-switching doesn’t “feel” strange at an individual level. It’s only when people insist on separating languages using fairly artificial criteria that alternation between them sounds awkward.

In other words, people cross boundaries all the time and “there’s nothing to it.”

Boundaries have quite a different aspect, at the macrolevel implied by the journalistic worldview (with nation-based checkbox democracy at its core and business-savvy professionalization as its mission). To “macros” like journos and politicos, boundaries look like borders, appearing clearly on maps (including mental maps) and implying important disconnects. The border between Armenia and Azerbaijan is a boundary separating two groups and the conflicts between these two groups reify that boundary. Reaching out across the border is a diplomatic process and necessitates finding the right individuals for the task. Most of the important statuses are ascribed, which may sound horrible to some holding neoliberal ideas about free will and “individual freedoms.”

Though it’s quite common for networked activities to be somewhat constrained by boundaries, a key feature of networks is that they’re typically boundless. Sure, there are networks which are artificially isolated from the rest. The main example I can find is that of a computer virology laboratory.

Because, technically, you only need one link between two networks to transform them into a single network. So, it’s quite possible to perceive Verizon’s wireless network as a distinct entity, limited by the national boundaries of the U.S. of A. But the simple fact that someone can use Verizon’s network to contact someone in Ségou shows that the network isn’t isolated. Simple, but important to point out.
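That “one link is enough” claim can be checked in a few lines of Python (the node names below are arbitrary placeholders): count the connected components of a graph before and after adding a single edge between the two sides.

```python
from collections import defaultdict, deque

def components(nodes, edges):
    """Count connected components using a breadth-first search."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, count = set(), 0
    for start in nodes:
        if start in seen:
            continue
        count += 1  # found a component nobody has visited yet
        queue = deque([start])
        seen.add(start)
        while queue:
            for neighbour in graph[queue.popleft()]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
    return count

# Two hypothetical networks with no link between them...
nodes = ["A1", "A2", "A3", "B1", "B2", "B3"]
edges = [("A1", "A2"), ("A2", "A3"), ("B1", "B2"), ("B2", "B3")]
print(components(nodes, edges))  # 2: two separate networks

# ...until a single link joins them into one.
edges.append(("A3", "B1"))
print(components(nodes, edges))  # 1: now a single network
```

However large the two sides are, the count drops from two to one the moment any single edge spans them, which is the whole point about Verizon’s network and that call to Ségou.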

Especially since we’re talking about a number of things happening on a single network: The Internet. (Yes, there is such a thing as Internet2 and there are some technical distinctions at stake. But we’re still talking about an interconnected world.)

As is well-known, there are significant clusters in this One Network. McLuhan’s once-popular “Global Village” fallacy used to hide this, but we now fully realize that language barriers, national borders, and political lateralization go with “low-bandwidth communication,” in some spots of The Network. “Gs don’t talk to Cs so even though they’re part of the same network, there’s a weak spot, there.” In a Shannon/Weaver view, it sounds quite important to identify these weak spots. “Africa is only connected to North America via a few lines so access is limited, making things difficult for Africans.” Makes sense.

But going back to weak ties, connectors, Zuckerman’s shortcuts, and my own social butterflies, the picture may be a little bit more fleshed out.

Actually, the image I have in mind has, on one side, a wire mesh serving as the floor of an anechoic chamber and on the other some laser beams going in pseudorandom directions as in Entrapment or Mission Impossible. In the wire mesh, weaker spots might cause a person to fall through and land on those artificial stalagmites. With the laser beams, the pseudorandom structure makes it more difficult to “find a path through the maze.” Though some (engineers) may see the mesh as the ideal structure for any network, there’s something humanly fascinating about the pseudorandom structure of social networks.

Obviously, I have many other ideas in mind. For instance, I wanted to mention “Isabel Wilkerson’s Leaderless March that Remade America.” Or go back to that intro soci Wikibook to talk about some very simple and well-understood ideas about social movements, which often seem to be lacking in discussions of social change. I even wanted to recount some anecdotes of neat network effects in my own life, such as the serendipity coming from discussing disparate subjects with unlike people or the misleading impression that measuring individualized influence is a way to understand social media. Not to mention a whole part I had in my mind about Actor Network Theory, non-human actors, and material culture (the other course I currently teach).

But I feel like going back to more time-sensitive things.

Still, I should probably say a few words about this post’s title.

My mother and I were discussing parallel inventions and polygenesis with the specific theme of moving away from the focus on individualized credit. My favourite example, and one I wish Gladwell (!) had used in Outliers (I actually asked him about it) is that of Gregor Mendel and the “rediscovery” of his laws by de Vries, Correns, and Tschermak. A semi-Marxian version of the synchronous polygenesis part might hold that “ideas are in the air” or that the timing of such discoveries and inventions has to do with zeitgeist. A neoliberal version could be the “great minds think alike” expression or its French equivalent «Les grands esprits se rencontrent» (“The great spirits meet each other”). Due to my reluctance in sizing up minds, I’d have a hard time using that as a title. In the past, I used a similar title to refer to another form of serendipity:

To me, most normally constituted minds are “great,” so I still could have used the expression as a title. But an advantage of tweaking an expression is that it brings attention to what it implies.

In this case, the “thinking alike” may be a form of groupthink.

 

Academics and Their Publics

Misunderstood by Raffi Asdourian

Academics are misunderstood.

Almost by definition.

Pretty much any academic eventually feels that s/he is misunderstood. Misunderstandings about some core notions in about any academic field are involved in some of the most common pet peeves among academics.

In other words, there’s nothing as transdisciplinary as misunderstanding.

It can happen in the close proximity of a given department (“colleagues in my department misunderstand my work”). It can happen through disciplinary boundaries (“people in that field have always misunderstood our field”). And, it can happen generally: “Nobody gets us.”

It’s not paranoia and it’s probably not self-victimization. But there almost seems to be a form of “onedownmanship” at stake, with academics from different disciplines claiming that they’re more misunderstood than others. In fact, I personally get the feeling that ethnographers are among the most misunderstood people around, but even short discussions with friends in other fields (including mathematics) have helped me get the idea that, basically, we’re all misunderstood at the same “level” but there are variations in the ways we’re misunderstood. For instance, anthropologists in general are mistaken for what they aren’t, based on partial understanding by the general population.

An example from my own experience relates to my decision to call myself an “informal ethnographer.” When you tell people you’re an anthropologist, they form an image in their minds which is very likely to be inaccurate. But they do typically have an image in their minds. On the other hand, very few people have any idea about what “ethnography” means, so they’re less likely to form an opinion of what you do from prior knowledge. They may puzzle over the term and try to take a guess as to what “ethnographer” might mean but, in my experience, calling myself an “ethnographer” has been a more efficient way to be understood than calling myself an “anthropologist.”

This may all sound like nitpicking but, from the inside, it’s quite impactful. Linguists are frequently asked about the number of languages they speak. Mathematicians are taken to be number freaks. Psychologists are perceived through the filters of “pop psych.” There are many stereotypes associated with engineers. Etc.

These misunderstandings have an impact on anyone’s work. Not only can they be demoralizing and impact one’s sense of self-worth, but they can influence funding decisions as well as the use of research results. These misunderstandings can undermine learning across disciplines. In survey courses, basic misunderstandings can make things very difficult for everyone. At a rather basic level, academics fight misunderstandings more than they fight ignorance.

The main reason I’m discussing this is that I’ve been given several occasions to think about the interface between the Ivory Tower and the rest of the world. It’s been a major theme in my blogposts about intellectuals, especially the ones in French. Two years ago, for instance, I wrote a post in French about popularizers. A bit more recently, I’ve been blogging about specific instances of misunderstandings associated with popularizers, including Malcolm Gladwell’s approach to expertise. Last year, I did a podcast episode about ethnography and the Ivory Tower. And, just within the past few weeks, I’ve been reading a few things which all seem to me to connect with this same issue: common misunderstandings about academic work. The connections are my own, and may not be so obvious to anyone else. But they’re part of my motivations to blog about this important issue.

In no particular order:

But, of course, I think about many other things. Including (again, in no particular order):

One discussion I remember, which seems to fit, included comments about Germaine Dieterlen by a friend who also did research in West Africa. Can’t remember the specifics but the gist of my friend’s comment was that “you get to respect work by the likes of Germaine Dieterlen once you start doing field research in the region.” In my academic background, appreciation of Germaine Dieterlen’s work may not be unconditional, but it doesn’t necessarily rely on extensive work in the field. In other words, while some parts of Dieterlen’s work may be controversial and it’s extremely likely that she “got a lot of things wrong,” her work seems to be taken seriously by several French-speaking africanists I’ve met. And not only do I respect everyone but I would likely praise someone who was able to work in the field for so long. She’s not my heroine (I don’t really have heroes) or my role-model, but it wouldn’t have occurred to me that respect for her wasn’t widespread. If it had seemed that Dieterlen’s work had been misunderstood, my reflex would possibly have been to rehabilitate her.

In fact, there’s a strong academic tradition of rehabilitating deceased scholars. The first example which comes to mind is a series of articles (PDF, in French) and book chapters by UWO linguistic anthropologist Regna Darnell about “Benjamin Lee Whorf as a key figure in linguistic anthropology.” Of course, saying that these texts by Darnell constitute a rehabilitation of Whorf reveals a type of evaluation of her work. But that evaluation comes from a third person, not from me. The likely reason for this case coming up to my mind is that the so-called “Sapir-Whorf Hypothesis” is among the most misunderstood notions from linguistic anthropology. Moreover, both Whorf and Sapir are frequently misunderstood, which can make matters difficult for many linguistic anthropologists talking with people outside the discipline.

The opposite process is also common: the “slaughtering” of “sacred cows.” (First heard about sacred cows through an article by ethnomusicologist Marcia Herndon.) In some significant ways, any scholar (alive or not) can be the object of not only critiques and criticisms but a kind of off-handed dismissal. Though this often happens within an academic context, the effects are especially lasting outside of academia. In other words, any scholar’s name is likely to be “sullied,” at one point or another. Typically, there seems to be a correlation between the popularity of a scholar and the likelihood of her/his reputation being significantly tarnished at some point in time. While there may still be people who treat Darwin, Freud, Nietzsche, Socrates, Einstein, or Rousseau as near divinities, there are people who will avoid any discussion about anything they’ve done or said. One way to put it is that they’re all misunderstood. Another way to put it is that their main insights have seeped through “common knowledge” but that their individual reputations have decreased.

Perhaps the most difficult case to discuss is that of Marx (Karl, not Harpo). Textbooks in introductory sociology typically have him as a key figure in the discipline and it seems clear that his insight on social issues was fundamental in social sciences. But, outside of some key academic contexts, his name is associated with a large series of social events about which people tend to have rather negative reactions. Even more so than for Paul de Man or  Martin Heidegger, Marx’s work is entangled in public opinion about his ideas. Haven’t checked for examples but I’m quite sure that Marx’s work is banned in a number of academic contexts. However, even some of Marx’s most ardent opponents are likely to agree with several aspects of Marx’s work and it’s sometimes funny how Marxian some anti-Marxists may be.

But I digress…

Typically, the “slaughtering of sacred cows” relates to disciplinary boundaries instead of social ones. At least, there’s a significant difference between your discipline’s own “sacred cows” and what you perceive another discipline’s “sacred cows” to be. Within a discipline, the process of dismissing a prior scholar’s work is almost œdipean (speaking of Freud). But dismissal of another discipline’s key figures is tantamount to a rejection of that other discipline. It’s one thing for a physicist to show that Newton was an alchemist. It’d be another thing entirely for a social scientist to deconstruct James Watson’s comments about race or for a theologian to argue with Darwin. Though discussions may have to do with individuals, the effects of the latter can widen gaps between scholarly disciplines.

And speaking of disciplinarity, there’s a whole set of issues having to do with discussions “outside of someone’s area of expertise.” On one side, comments made by academics about issues outside of their individual areas of expertise can be very tricky and can occasionally contribute to core misunderstandings. The fear of “talking through one’s hat” is quite significant, in no small part because a scholar’s prestige and esteem may greatly decrease as a result of some blatantly inaccurate statements (although some award-winning scholars seem not to be overly impacted by such issues).

On the other side, scholars who have to impart expert knowledge to people outside of their discipline often have to “water down” or “boil down” their ideas, in effect oversimplifying these issues and concepts. Partly because of status (prestige and esteem), lowering standards is also very tricky. In some ways, this second situation may be more interesting. And it seems unavoidable.

How can you prevent misunderstandings when people may not have the necessary background to understand what you’re saying?

This question may reveal a rather specific attitude: “it’s their fault if they don’t understand.” Such an attitude may even be widespread. Seems to me, it’s not rare to hear someone gloating about other people “getting it wrong,” with the suggestion that “we got it right.” As part of negotiations surrounding expert status, such an attitude could even be a pretty rational approach. If you’re trying to position yourself as an expert and don’t suffer from an “impostor syndrome,” you can easily get the impression that non-specialists have it all wrong and that only experts like you can get to the truth. Yes, I’m being somewhat sarcastic and caricatural, here. Academics aren’t frequently that dismissive of other people’s difficulties understanding what seem like simple concepts. But, in the gap between academics and the general population a special type of intellectual snobbery can sometimes be found.

Obviously, I have a lot more to say about misunderstood academics. For instance, I wanted to address specific issues related to each of the links above. I also had pet peeves about widespread use of concepts and issues like “communities” and “Eskimo words for snow” about which I sometimes need to vent. And I originally wanted this post to be about “cultural awareness,” which ends up being a core aspect of my work. I even had what I might consider a “neat” bit about public opinion. Not to mention my whole discussion of academic obfuscation (remind me about “we-ness and distinction”).

But this is probably long enough and the timing is right for me to do something else.

I’ll end with an unverified anecdote that I like. This anecdote speaks to snobbery toward academics.

[It's one of those anecdotes which was mentioned in a course I took a long time ago. Even if it's completely fallacious, it's still inspiring, like a tale, cautionary or otherwise.]

As the story goes (at least, what I remember of it), some ethnographers had been doing fieldwork in an Australian cultural context and were focusing their research on a complex kinship system known in this context. Through collaboration with “key informants,” the ethnographers eventually succeeded in understanding some key aspects of this kinship system.

As should be expected, these kinship-focused ethnographers wrote accounts of this kinship system at the end of their field research and became known as specialists of this system.

After a while, the fieldworkers went back to the field and met with the same people who had described this kinship system during the initial field trip. Through these discussions with their “key informants,” the ethnographers ended up hearing about a radically different kinship system from the one about which they had learnt, written, and taught.

The local informants then told the ethnographers: “We would have told you earlier about this but we didn’t think you were able to understand it.”