Category Archives: Internet

Internet and Privilege

Part of what was going through my mind, writing that Internet nostalgia post, was the notion that my being granted Internet access in August of ‘93 was a privilege. Quite literally. By backing up my request for an account on the Mistral.ERE.UMontreal.CA machine, Kevin Tuite was granting me access to a whole wide world, mostly undiscovered by other undergraduate students. Knowing how justifiably strict André Earl Paquet (UdeM SysAdmin) was, the fact that I got on ERE at such an early stage is rather significant.

It wasn’t the only time I was allowed access to restricted areas “before my time”, often with teachers. For instance, I’m still moved by a strong musical moment in which I had the privilege to participate as a student at a music daycamp. The camp’s instructors were hanging out at the end of the day and I was waiting for a ride with one of them. I was the only student there and the age difference (I was 13; they were 19 or 20) should have mattered. The point is, we all lay down on the floor with the lights off and started… vocal improvisations over the sound of a vending machine. Deep.

Part of my privileged access to teachers might have been related to the fact that my father was a teacher and I perceived his colleagues as normal human beings. In fact, I was only a kid when I witnessed a group of teachers cry. On a tiny scale, it’s distantly related to African soldiers fighting alongside colonials and seeing fear in their eyes. I know how far apart those two situations sound. But there’s something significant about hierarchy: it often relies on flimsy masks.

But back to the Internet. I was privileged in my early access. I’m still privileged with better access to the ‘Net than a large part of the planet’s population, though there are hundreds of millions of us with such access. In this sense, I’m on “the better side of the Digital Divide”. I’m also privileged with working knowledge of a number of tools, acquired in ways that remain inaccessible to most people on the planet. Not only was my university education part of this, but the fact that I was earning a steady (though relatively low) salary during that Summer of 1993 meant that I could spend that formative time online.

The “classic” (almost textbook) example of privileged access to the Internet is Bill Gates, though he’s occasionally been portrayed as a “self-made man”. Of course, that concept has a specific meaning in financial circles. But deep privilege is often hidden by the Horatio Alger connotations of the concept. Not to take anything away from Gates’s business acumen and programming chops, but I find it important to point out that, in the 1970s, it would have been extremely unlikely for a computer mogul to emerge from a rural, single-parent, low-income family in the US Heartland.

“But”, I hear some sociology students say, “that’s just life! It’s not ‘privilege’! Would you say that Gates was privileged by when he was born?”

Why, yes, I probably would call that “privilege”. That’s a big part of what we mean by privilege, in sociology: arbitrary conditions which imply easier access to key resources. Even such a thing as going to a school which had decent computer labs at a time when most schools didn’t is significant privilege.

“Oh, but, but…”, some of the same students might say, “that means nothing, then. Success is still 90% hard work.”

You’re engineering majors, right?

“What does this have to do with anything?”

Depending on how you think about determinism, that might be accurate. But I’d say it’s misleading. Some people might talk about “luck” instead of privilege, and assign it a 10% influence. But it’s at least an enabling factor in this model and it might be a whole lot more. If “success” doesn’t happen without “luck”, the proportional impact of “luck” is a moot point.
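To make the point in engineering terms (a toy model of my own, not a measurement of anything): treat “luck” as a gate rather than as an additive term.

$$S = L \cdot f(W), \qquad L \in \{0, 1\}$$

If $S$ is “success”, $W$ is hard work, and $f$ is some increasing function, then whenever $L = 0$, no amount of $W$ produces any $S$. The “90% hard work” framing silently assumes an additive model, something like $S = 0.1\,L + 0.9\,W$, which isn’t how enabling conditions behave.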

“C’mon!”, the students continue, “Bill Gates had to work hard! He wasn’t born with a silver spoon in his mouth!”

I don’t dispute that. I’d be very surprised if Gates had an actual silver spoon in his mouth at birth and I don’t think it’d have been that useful for him. But I’m saying that privilege is something we do well to put in context.

“Now you’re playing with us!”

Yep. I love to play. But there’s an important idea, here, which may help you understand sociology:

Privilege is often invisible to those who hold it.

Can you do well in a challenging situation without being told what an inspiration you are?

“Huh?”

Check out the “Invisible Knapsack”.

“Meh…”

It’s an assignment!

“Ok.”

Twenty Years Online

This month marks the 20th anniversary of my first Internet account. I don’t remember the exact date but I know it was in late summer 1993, right before what became known as “Eternal September”. The Internet wasn’t new, but it still wasn’t on most people’s proverbial “radars”.

I had heard one of my professors, Kevin Tuite, talk about the Internet as a system through which people from all over the World were communicating. Among the examples Tuite gave of the possibilities offered by the ‘Net were conversations among people from former Soviet Republics during that period of broad transitions. As a specialist of Svaneti, in present-day Georgia, Kevin was particularly interested in these conversations.

During that fateful Summer of ‘93, I was getting ready to begin the last year of my B.Sc. in anthropology, specializing in linguistic anthropology and ethnomusicology. As I had done during previous summers, I was working back of house (BOH) at a French restaurant. But, in my free time, I was exploring a brand new world.

In retrospect, it might not be a complete coincidence that my then-girlfriend of four years left me during that Fall 1993 semester.

It started with a local BBS, WAJU (“We Are Joining You”). I’m not exactly sure when I got started, but I remember being on WAJU in July. I had first been lent a 300 baud modem but I quickly switched to a 2400 baud one. My current ISP plan is 15 Mbps, literally 50,000 times faster than my original connection.
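A quick back-of-the-envelope check of that figure (a sketch of my own; it conflates baud with bits per second, which is close enough for these modems):

    # Rough connection speeds, in bits per second.
    first_modem = 300            # the borrowed 1993 modem
    second_modem = 2_400         # the quick upgrade
    current_plan = 15_000_000    # a 15 Mbps ISP plan

    print(current_plan // first_modem)   # 50000 -- the "50,000 times" figure
    print(current_plan // second_modem)  # 6250 -- "only" 6,250 times the 2400 baud modem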

By August 1993, thanks to the aforementioned Kevin Tuite, I was able to get an account on UdeM’s ERE network, meant for teaching and research (it stood for «Environnement de recherche et d’enseignement»). That network was running on SGI machines which weren’t really meant to handle large numbers of external connections. But it worked for my purpose of processing email (through Pine), Usenet newsgroups, FTP downloads (sometimes through Archie), IRC sessions, individual chats (through Talk), Gopher sites, and other things via Telnet. As much as possible, I did all of these things from campus, through one of the computer rooms, which offered amazingly fast connections (especially compared to my 2.4 kbps modem). I spent enough time in those computer rooms that I still remember a distinct smell from them.

However, at some point during that period, I was able to hack a PPP connection going through my ERE account. In fact, I ended up helping some other people (including a few professors) do the same. It then meant we could use native applications to access the ’Net from home and, eventually, browse the Web graphically.

But I’m getting ahead of myself.

By the time I got online, NCSA Mosaic hadn’t been released. In fact, it took a little while before I even heard of the “World Wide Web”. I seem to remember that I only started browsing the Web in 1994. At the same time, I’m pretty sure one of my most online-savvy friends (likely Alex Burton or Martin Dupras) had told me about the Web as soon as version 1.0 of Mosaic was out, or even before.

The Web was a huge improvement, to be sure. But it was neither the beginning nor the end of the ‘Net, for those of us who had been there a little while. Yes, even a few months. Keep in mind that, at the time, there weren’t that many sites on the Web. Sure, most universities had a Web presence and many people with accounts on university networks had opportunities to create homepages. But there’s a reason Web directories (strongly associated with Yahoo!, now, but quite common at the time) were feasible. Pages were “static” and there wasn’t much on the Web, at the time, which was “social”.

But the ’Net as a whole was very social. At least, for the budding ethnographer that I was, the rest of the ‘Net was a much more interesting context for observation than the Web. Especially newsgroups and mailing lists.

Especially since the ‘Net was going through one of its first demographic explosions. Some AOLers were flooding the ‘Net. Perhaps more importantly, newbie bashing was peaking and comments against AOL or other inexperienced “Netizens” were frequently heard. I personally heard a lot more from people complaining about AOL than from anyone accessing the ’Net through AOL.

Something about the influx which was clear, though, is that the “democratization” was being accompanied by commercialization. A culture of open sharing was being replaced by corporate culture. Free culture was being preempted by a culture of advertising. The first .com domains were almost a novelty, in a ‘Net full of country-specific domains along with lots of .edu, .net, .org, .gov, and even .mil servers.

The ‘Net wasn’t yet about “paying for content”. That would come a few years later, when media properties pushed “user-generated content” into its own category (instead of representing most of what was available online). The ‘Net of the mid-1990s was about gaining as much attention as possible. We’re still in that mode, of course. But the contrast was striking. Casual conversations were in danger of getting drowned by megaphones. The billboard overtook the café. With the shift, a strong sense of antagonism emerged. The sense of belonging to a community of early adopters increased with the sense of being attacked by old “media types”. People less interested in sharing knowledge and more interested in conveying their own corporate messages. Not that individuals had been agenda-free until that point. But there was a big difference between geeks arguing about strongly-held opinions and “brands” being pushed onto the scene.

Early on, the thing I thought the Internet would most likely disrupt was journalism. I had a problem with journalism so, when I saw how the ‘Net could provide increased access to information, I was sure it’d imply a reappropriation of news by people themselves, with everything this means for the spread of critical thinking skills. Some of this has happened, to an extent. But media consolidation probably played a more critical role in journalism’s current crisis than online communication did. Then again, I like to think of these things as complex systems of interrelated trends and tendencies instead of straightforward causal scenarios.

In such a situation, the ‘Net becoming more like a set of conventional mass media channels was bad news. More specifically, the logic of “getting your corporate message across” was quite offputting to a crowd used to more casual (though often heated and loud) conversations. What comes to mind is a large agora with thousands of people having thousands of separate conversations being taken over by a massive PA system. Regardless of the content of the message being broadcast by this PA system, the effect is beyond annoying.

Through all of this, I distinctly remember mid-April, 1994. At that time, the Internet changed.  One might say it never recovered.

At that time, two unscrupulous lawyers sent the first commercial spam to Usenet newsgroups. They apparently made a rather large sum of money from their action but, more importantly, they ended the “Netiquette” era. From that point on, a conflict has existed between those who use and those who abuse the ‘Net. Yes, strong words. But I sincerely think they’re fitting. Spammers are like the Internet’s cancer. They may “serve a function” and may inspire awe. Mostly, though, they’re “cells gone rogue”. Not that I’m saying the ‘Net was free of disease before this “Green Card lottery” moment. For one thing, it’s possible (though unlikely) that flamewars were somewhat more virulent then than they are now. It’s just that the list of known online woes expanded quickly with the addition of cancer-like diseases. From annoying Usenet spam, we went rather rapidly to all sorts of malevolent large-scale actions. Whatever we end up doing online, we carry the shadow of such actions.

Despite how it may sound, my stance isn’t primarily moral. It’s really about a shift from a “conversational” mode to a “mass media” one. Spammers exploited Usenet by using it as a “mass media” channel, at a time when most people online were using it as a large set of “many-to-many” channels.

The distinction between Usenet spam and legitimate advertising may be extremely important, to a very large number of people. But the gates spammers opened were the same ones advertisers have been using ever since.

My nostalgia for the early Internet has a lot to do with this shift. I know we gained a lot in the meantime. I enjoy many benefits from the “democratization” of the ‘Net. I wouldn’t trade the current online services and tools for those I was using in August 1993. But I do long for a cancer-free Internet.

Wearable Hub: Getting the Ball Rolling

Statement

After years of hype, wearable devices are happening. What wearable computing lacks is a way to integrate devices into a broader system.

Disclaimer/Disclosure/Warning

  • For the past two months or so, I’ve been taking notes about this “wearable hub” idea (started around CES’s time, as wearable devices like the Pebble and Google Glass were discussed with more intensity). At this point, I have over 3000 words in notes, which probably means that I’d have enough material for a long essay. This post is just a way to release a few ideas and to “think aloud” about what wearables may mean.
  • Some of these notes have to do with the fact that I started using a few wearable devices to monitor my activities, after a health issue pushed me to start doing some exercise.
  • I’m not a technologist nor do I play one on this blog. I’m primarily an ethnographer, with diverse interests in technology and its implications for human beings. I do research on technological appropriation and some of the courses I teach relate to the social dimensions of technology. Some of the approaches to technology that I discuss in those courses relate to constructionism and Actor-Network Theory.
  • I consider myself a “geek ethnographer” in the sense that I take part in geek culture (and have come out as a geek) but I’m also an outsider to geekdom.
  • Contrary to the likes of McLuhan, Carr, and Morozov, my perspective on technology and society is non-deterministic. The way I use them, “implication” and “affordance” aren’t about causal effects or, even, about direct connections. I’m not saying that society is causing technology to appear nor am I proposing a line from tools to social impacts. Technology and society are in a complex system.
  • Further, my approach isn’t predictive. I’m not saying what will happen based on technological advances nor am I saying what technology will appear. I’m thinking about the meaning of technology in an intersubjective way.
  • My personal attitude on tools and gadgets is rather ambivalent. This becomes clear as I go back and forth between techno-enthusiastic contexts (where I can almost appear like a Luddite) and techno-skeptical contexts (where some might label me as a gadget freak). I integrate a number of tools in my life but I can be quite wary about them.
  • I’m not wedded to the ideas I’m putting forth, here. They’re just broad musings of what might be. More than anything, I hope to generate thoughtful discussion. That’s why I start this post with a broad statement (not my usual style).
  • Of course, I know that other people have had similar ideas and I know that a concept of “wearable hub” already exists. It’s obvious enough that it’s one of these things which can be invented independently.

From Wearables to Hubs

Back in the 1990s, “wearable computing” became something of a futuristic buzzword, often having to do with articles of clothing. There have been many experiments and prototypes converging on an idea that we would, one day, be able to wear something resembling a full computer. Meanwhile, “personal digital assistants” became something of a niche product and embedded systems became an important dimension of car manufacturing.

Fast-forward to 2007, when a significant shift in the use of smartphones occurred. Smartphones existed before that time, but their usages, meanings, and positions in the public discourse changed quite radically around the time of the iPhone’s release. Not that the iPhone itself “caused a smartphone revolution” or that smartphone adoption suddenly reached a “tipping point”. I conceive of this shift as a complex interplay between society and tools. Not only more Kuhn than Popper, but more Latour than Kurzweil.

Smartphones, it may be argued, “happened”.

Without being described as “wearable devices”, smartphones started fulfilling some of the functions people might have assigned to wearables. The move was subtle enough that Limor Fried recently described it as a realization she’s been having. Some tech enthusiasts may be designing location-aware purses and heads-up displays in the form of glasses, but smartphones are already doing a lot of the things wearables were supposed to do. Many people “wear” smartphones at most times during their waking lives and these Internet-connected devices are full of sensors. With the proliferation of cases, one might even perceive some of them as fashion accessories, like watches and sunglasses.

Where smartphones become more interesting, in terms of wearable computing, is as de facto wearable hubs.

My Wearable Devices

Which brings me to the four sensors I’ve been using most extensively during the past two months.

Yes, these all have to do with fitness (and there’s quite a bit of overlap between them). And, yes, I started using them a few days after the New Year. But it’s not about holiday gifts or New Year’s resolutions. I’ve had some of these devices for a while and decided to use them after consulting with a physician about hypertension. Not only have they helped me quite a bit in solving some health issues, but these devices got me to think.

(I carry several other things with me at most times. Some of my favourites include Tenqa REMXD Bluetooth headphones and the LiveScribe echo smartpen.)

One aspect is that they’re all about the so-called “quantified self”. As a qualitative researcher, I tend to be skeptical of quants. In this case, though, the stats I’m collecting about myself fit with my qualitative approach. Along with quantitative data from these devices, I’ve started collecting qualitative data about my life. The next step is to integrate all those data points automatically.
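A minimal sketch of what that integration step might look like (Python with pandas; the file names and columns are hypothetical, since real exports vary by device):

    import pandas as pd

    # Hypothetical daily exports: quantitative (step counts) and qualitative (field notes).
    steps = pd.read_csv("steps.csv", parse_dates=["date"])    # columns: date, step_count
    notes = pd.read_csv("journal.csv", parse_dates=["date"])  # columns: date, note

    # Line up each day's numbers with that day's notes,
    # keeping days that only have one or the other.
    merged = steps.merge(notes, on="date", how="outer").sort_values("date")
    merged.to_csv("quantified_self.csv", index=False)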

These sensors are also connected to “gamification”, a tendency I find worrisome, preferring playfulness. Though game mechanics are applied to the use of these sensors, I choose to rely on my intrinsic motivation, not paying much attention to scores and badges.

But the part which pushed me to start taking the most notes was that all these sensors connect with my iOS (and Android) devices. And this is where the “wearable hub” comes into play. None of these devices is autonomous. They’re all part of my personal “arsenal”, the equipment I have on me on most occasions. Though there are many similarities between them, they still serve different purposes, which are much more limited than those “wearable computers” might have been expected to serve. Without a central device serving as a type of “hub”, these sensors wouldn’t be very useful. This “hub” need not be a smartphone, despite the fact that, by default, smartphones are taken to be the key piece in this kind of setup.

In my personal scenario, I do use a smartphone as a hub. But I also use tablets. And I could easily use an existing device of another type (say, an iPod touch), or even a new type of device meant to serve as a wearable hub. Smartphones’ “hub” affordances aren’t exclusive.
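To illustrate how little “hub” duty actually requires, here is a present-day sketch (my own, not something I was running at the time): with a sensor exposing the standard Bluetooth heart-rate service, a few lines of Python using the bleak library turn almost any computer into a makeshift hub.

    import asyncio
    from bleak import BleakClient

    # Standard GATT "Heart Rate Measurement" characteristic (Bluetooth SIG UUID).
    HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"

    def on_reading(_, data: bytearray):
        # Flag bit 0 says whether the heart rate is an 8-bit or 16-bit value.
        bpm = int.from_bytes(data[1:3], "little") if data[0] & 0x01 else data[1]
        print(f"{bpm} bpm")

    async def main(address: str):
        async with BleakClient(address) as client:
            await client.start_notify(HR_MEASUREMENT, on_reading)
            await asyncio.sleep(30)  # collect readings for half a minute
            await client.stop_notify(HR_MEASUREMENT)

    asyncio.run(main("AA:BB:CC:DD:EE:FF"))  # hypothetical sensor address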

From Digital Hub to Wearable Hub

Most of the devices which would likely serve as hubs for wearable sensors can be described as “Post-PC”. They’re clearly “personal” and they’re arguably “computers”. Yet they’re significantly different from the “Personal Computers” which have been so important at the end of last century (desktop and laptop computers not used as servers, regardless of the OS they run).

Wearability is a key point, here. But it’s not just a matter of weight or form factor. A wearable hub needs to be wireless in at least two important ways: independent from a power source and connected to other devices through radio waves. The fact that they’re worn at all times also implies a certain degree of integration with other things carried throughout the day (wallets, purses, backpacks, pockets…). These devices may also be more “personal” than PCs because they may be more apparent and more amenable to customization than PCs.

Smartphones fit the bill as wearable hubs. Their form factors and battery life make them wearable enough. Bluetooth (or ANT+, Nike+, etc.) has been used to pair them wirelessly with sensors. Their connectivity to GPS and cellular networking as well as their audio and visual i/o can have interesting uses (mapping a walk, data updates during a commute, voice feedback…). And though they’re far from ubiquitous, smartphones have become quite common in key markets.

Part of the reason I keep thinking about “hubs” has to do with comments made in 2001 by then Apple CEO Steve Jobs about the “digital lifestyle” age in “PC evolution” (video of Jobs’s presentation; as an anthropologist, I’ll refrain from commenting on the evolutionary analogies):

We believe the PC, or more… importantly, the Mac can become the “digital hub” of our emerging digital lifestyle, with the ability to add tremendous value to … other digital devices.

… like camcorders, portable media players, cellphones, digital cameras, handheld organizers, etc. (Though they weren’t mentioned, other peripherals like printers and webcams also connect to PCs.)

The PC was thus going to serve as a hub, “not only adding value to these devices but interconnecting them, as well”.

At the time, these were the key PC affordances which distinguished it from those other digital devices:

  • Big screen affording more complex user interfaces
  • Large, inexpensive hard disk storage
  • Burning DVDs and CDs
  • Internet connectivity, especially broadband
  • Running complex applications (including media processing software like the iLife suite)

Though Jobs pinpointed iLife applications as the basis for this “digital hub” vision, it sounds like FireWire was meant to be an even more important part of this vision. Of course, USB has supplanted FireWire in most use cases. It’s interesting, then, to notice that Apple only recently started shipping Macs with USB 3. In fact, DVD burning is absent from recent Macs. In 2001, the Mac might have been at the forefront of this “digital lifestyle” age. In 2013, the Mac has moved away from its role as “digital hub”.

In the meantime, the iPhone has become one of the best known examples of what I’m calling “wearable hubs”. It has a small screen and small, expensive storage (by today’s standards). It also can’t burn DVDs. But it does have nearly-ubiquitous Internet connectivity and can run fairly complex applications, some of which are adapted from the iLife suite. And though it does have wired connectivity (through Lightning or the “dock connector”), its main hub affordances have to do with Bluetooth.

It’s interesting to note that the same Steve Jobs, who used the “digital hub” concept to explain that the PC wasn’t dead in 2001, is partly responsible for popularizing the concept of “post-PC devices” six years later. One might perceive hypocrisy in this much delayed apparent flip-flop. On the other hand, Steve Jobs’s 2007 comments (video) were somewhat nuanced, as to the role of post-PC devices. What’s more interesting, though, is to think about the implications of the shift between two views of digital devices, regardless of Apple’s position through that shift.

Some post-PC devices (including the iPhone, until quite recently) do require a connection to a PC. In this sense, a smartphone might maintain its position with regards to the PC as digital hub. Yet, some of those devices are used independently of PCs, including by some people who never owned PCs.

Post-Smartphone Hubs

It’s possible to imagine a wearable hub outside of the smartphone (and tablet) paradigm. While smartphones are a convenient way to interconnect wearables, their hub-related affordances still sound limited: they lack large displays and their storage space is quite expensive. Their battery life may also be something to consider in terms of serving as hubs. Their form factors make some sense, when functioning as phones. Yet they have little to do with their use as hubs.

Part of the realization, for me, came from the fact that I’ve been using a tablet as something of an untethered hub. Since I use Bluetooth headphones, I can listen to podcasts and music while my tablet is in my backpack without being entangled in a cable. Sounds trivial but it’s one of these affordances I find quite significant. Delegating music playing functions to my tablet relates in part to battery life and use of storage. The tablet’s display has no importance in this scenario. In fact, given some communication between devices, my smartphone could serve as a display for my tablet. So could a “smartwatch” or “smartglasses”.

The Body Hub

Which led me to think about other devices which would work as wearable hubs. I originally thought about backpackable and pocketable devices.

But a friend had a more striking idea: a sensor mesh undershirt.

Under Armour’s Recharge Energy Suit may be an extreme version of this, one which would fit nicely among things Cathi Bond likes to discuss with Nora Young on The Sniffer. Nora herself has been discussing wearables on her blog as well as on her radio show. Sure, part of this concept is quite futuristic. But a sensor mesh undershirt is a neat idea for several reasons.

  • It’s easy to think of various sensors it may contain.
  • Given its surface area, it could hold enough battery power to supplement other devices.
  • It can be quite comfortable in cold weather and might even help diffuse heat in warmer climates.
  • Though wearable, it need not be visible.
  • Thieves would probably have a hard time stealing it.
  • Vibration and haptic feedback on the body can open interesting possibilities.

Not that it’s the perfect digital hub, and I’m sure there are multiple objections to a connected undershirt (including issues with radio signals). But I find the idea rather fun to think about, partly because it’s so far away from the use of phones, glasses, and watches as smart devices.

Another thing I find neat, and it may partly be a coincidence, is the very notion of a “mesh”.

The Wearable Mesh

Mesh networking is a neat concept, one which generates more hype than practical uses. As an alternative to WiFi access points and cellular connectivity, it’s unclear whether it will “take the world by storm”. But as a way to connect personal devices, it might have some potential. After all, as Bernard Benhamou recently pointed out on France Culture’s Place de la toile, the Internet of Things may not require always-on, full-bandwidth connectivity. Typically, wearable sensors use fairly little bandwidth or only use it for limited amounts of time. A wearable mesh could connect wearable devices to one another while also exchanging data through the Internet itself.

Or with local devices. Smart cities, near field communication, and digital appliances occupy interesting positions among widely-discussed tendencies in the tech world. They may all have something to do with wearable devices. For instance, data exchanged between transit systems and their users could go through wearable devices. And while mobile payment systems can work through smartphones and other cellphones, wallet functions can also be fulfilled by other wearable devices.
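To make the relay idea concrete, here is a toy sketch (entirely my own invention, with made-up device names): data hops through whichever short-range links happen to exist until it reaches something with an Internet connection.

    from collections import deque

    # Toy body-area mesh: which devices can hear which over short-range radio.
    links = {
        "hr_strap":   ["undershirt"],
        "undershirt": ["hr_strap", "watch", "phone"],
        "watch":      ["undershirt", "phone"],
        "phone":      ["undershirt", "watch", "internet_gateway"],
    }

    def route(src, dst):
        """Breadth-first search for the shortest relay path through the mesh."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for neighbour in links.get(path[-1], []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(path + [neighbour])
        return None  # no connectivity

    print(route("hr_strap", "internet_gateway"))
    # ['hr_strap', 'undershirt', 'phone', 'internet_gateway']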

Alternative Futures

Which might provide an appropriate segue into the ambivalence I feel toward the “wearable hub” concept I’m describing. Though I propose these ideas as if I were enthusiastic about them, they all give me pause. As a big fan of critical thinking, I like to think about “what might be” to generate questions and discussions exposing a diversity of viewpoints about the future.

Mass media discussions about these issues tend to focus on such things as privacy, availability, norms, and usefulness. Google Glass has generated quite a bit of buzz about all four. Other wearables may mainly raise issues for one or two of these broad dimensions. But the broad domain of wearable computing raises a lot more issues.

Technology enthusiasts enjoy discussing issues through the dualism between dystopia and utopia. An obvious issue with this dualism is that humans disagree about the two categories. Simply put, one person’s dystopia can be another person’s utopia, not to mention the nuanced views of people who see complex relationships between values and social change.

In such a context, a sociologist’s reflex may be to ask about the implications of these diverse values and opinions. For instance:

  • How do people construct these values?
  • Who decides which values are more important?
  • How might social groups cope with changes in values?

Discussing these issues and more, in a broad frame, might be quite useful. Some of the trickiest issues are raised after some changes in technology have already happened. From writing to cars, any technological context has unexpected implications. An ecological view of these implications could broaden the discussion.

I tend to like the concept of the “drift-off moment”, during which listeners (or readers) start thinking about the possibilities afforded by a new tool (or concept). In the context of a sales pitch, the idea is that these possibilities are positive: a potential buyer is thinking about the ways she might use a newfangled device. But I also like the deeper process of thinking about all sorts of implications, regardless of their value.

So…

What might be the implications of a wearable hub?

Timeline of Apple’s Online Services

[I’d like people’s help in completing a timeline of Apple’s online services.]

[Update: deleted the WordPress shortcode, which seems not to be working.]

As kind of a followup to yesterday’s post about some early rumours and speculations about the iPhone, I thought about posting some info about Apple’s online services. Part of the reason is that Asymco’s Horace Dediu has frequently talked about what we could call “Apple’s data play”, for instance in this post about the iCloud data centre in North Carolina. I was also thinking about Mike Davidson’s comments about Apple’s presence (and “dominance”) in such diverse fields as hardware, software, licensing, and commerce. The trigger for this post, though, was this Steve Jobs comment, which appeared in a recent NYT piece about the Apple Maps fiasco:

The MobileMe launch clearly demonstrates that we have more to learn about Internet services

The overall context for this quote as well as a number of discussions about Apple is the consensus that Apple does a poor job with online services. MobileMe and iTunes Ping are often used in these discussions and it seems clear to most people (including Apple executives and insiders, it sounds like) that the “computer company turned consumer electronics vendor” has a lot to learn about online services.

The reason I find this so interesting is that Apple seems insistent on pushing at least some of its online services. A bit less of a “betting the farm” strategy than Google’s “Emerald Sea” initiative, but an intriguing strategy for such a large and still-successful company. Dediu’s frequent reference to Clay Christensen’s concept of “Disruptive Innovation” might apply here. Apple might be “disrupting itself into” an online services company, at least in part.

There are several things I find intriguing about this strategy.

As opposed to most other enterprises’ “online plays”, Apple’s model tends not to be based on ad revenues. The divide between Google and Apple couldn’t be stronger when we talk about ad-supported free/freemium services as opposed to paid services or services attached to other purposes. It’s likely an irreconcilable difference between fans of  both teams.

Online services are clearly not Apple’s strong suit. It often sounds like Apple is missing a “magic touch” with online services, the same way other companies are said to lack Apple’s design sense. In that sense, Apple’s situation is more similar to Google’s with Google+, given the consensus that “Google doesn’t know how to do ‘social’”. But it’s still surprising.

Though Apple may not have a “knack” for online services, it’s been trying them for quite a while. I keep thinking about eWorld as a precursor to the whole thing. It’s one thing for a company to try its hand at something new or to pivot into a strong business. It’s another thing entirely to shift more energies into something which has so far proven to be mostly a lost cause.

Adding to my thoughts on this was a podcast conversation (I think between John Siracusa and Dan Benjamin, though it might have been between Marco Arment and John Gruber) during which comments were made about those Apple employees working on online services.

So, basically, Apple’s online services have been on my mind. But I couldn’t find an exhaustive list. I tried Wikipedia but it doesn’t really separate online services from other things Apple does. And I ended up thinking about what would define “online services” in Apple’s case. Everything Apple does which incurs some bandwidth costs would be my working definition. Basically, it’s anything that has to do with Apple investing in data centres and such. Some of these seem like very small costs (hosting data about podcasts, instead of the podcasts themselves, for instance). Given Apple’s size, these costs and the infrastructure behind all of this can be quite big.

So I started listing some of these services and organizing them in a sort of timeline, first in MultiMarkdown format in nvAlt, then in a Google Spreadsheet. I then discovered Vérité.CO’s Timeline.JS which takes a Google Spreadsheet and makes it into a visual timeline.
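For anyone who wants to help complete it, here is roughly what a row looks like (a sketch from memory: the column names follow the Timeline.JS spreadsheet template as I recall it, and the two entries are placeholders to be checked against the actual timeline):

    import csv

    # Column names as I remember them from the Timeline.JS template --
    # verify against the actual template before relying on them.
    fields = ["Start Date", "End Date", "Headline", "Text", "Media"]
    rows = [
        ["1988", "", "AppleLink Personal Edition", "Consumer online service (year only)", ""],
        ["2000-01", "", "iTools", "Free suite including iDisk and Mac.com email", ""],
    ]

    with open("apple_online_services.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(fields)
        writer.writerows(rows)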

A few notes:

  • It’s a quick draft and I didn’t really check any of the data points.
  • In most cases, I only added months and, in the case of “AppleLink”, I only put years.
  • I took most dates from diverse Wikipedia pages, not necessarily backtracking on the whole process.
  • On at least one occasion, there was a discrepancy between two dates.
  • Sometimes, I took the date of the service’s announcement while I used an actual launch date for other services.
  • I only added a couple of pictures to show that it can be done. Many of the relevant pix are likely to be under copyright or to constitute a trademark.
  • I tried to be as exhaustive as I could be, but I’m sure I forgot stuff.
  • Some things may not sound like they qualify as part of “Apple’s online offering” but I think they’re still relevant. My rule of thumb is that if it goes to Apple’s servers, it’s an online service.
  • I separated some services from “suites” like iCloud or iTools, partly because some of those services haven’t been kept, which is important to see in a timeline. There are several services missing, here.
  • None of this timeline is meant to be editorial. I was just curious about what Apple has been doing online since the 1980s. The reason I care can be found in my earlier notes. I consider myself neither an “Apple fanboi” nor an “Apple hater”. I just find the situation revealing of something happening in the tech world, which has an impact on the Geek Niche.

So, here goes.

Here’s the Google Spreadsheet (editable by anyone):

Apple Online Services

Here’s the embed code:

<iframe src='http://embed.verite.co/timeline/?source=0AjnWdp-FPwEKdHVqOXhWVlZuZjZYajN5QnExcExuVmc&font=Bevan-PotanoSans&maptype=toner&lang=en&hash_bookmark=true&height=650' width='100%' height='650' frameborder='0'></iframe>


A Glocal Network of City-States?

This one should probably be in a fictive mode, maybe even in a science-fiction genre. In fact, I’m reconnecting with literature after a long hiatus and now would be an interesting time to start writing fiction. But I’ll still start this as one of those  “ramblings” blogposts that I tend to build or which tend to come to me.

The reason this should be fiction is that it might sound exceedingly naïve, especially for a social scientist. I tend to “throw ideas out there” and see what sticks to other ideas, but this broad idea about which I’ve been thinking for a while may sound rather crazy, quaint, unsophisticated.

See, while my academic background is rather solid, I don’t have formal training in political science. In fact, I’ve frequently avoided several academic activities related to political science as a discipline. Or to journalism as a discipline. Part of my reluctance to involve myself in academic activities related to political science stems from my reaction to journalism. The connection may not seem obvious to everyone but I see political science as a discipline in the same frame, and participating in the same worldview, as what I find problematic in journalism.

The simplest way to contextualize this connection is the (“modern”) notion of the “Nation-State.” That context involves me personally. As an anthropologist, as a post-modernist, as a “dual citizen” of two countries, as a folklorist, as a North American with a relatively salient European background, as a “citizen of the World,” and as a member of a community which has switched in part from a “nationalist” movement to other notions of statehood. Simply put: I sincerely think that the notion of a “Nation-State” is outdated and that it will (whether it should or not) give way to other social constructs.

A candidate to replace the conceptual apparatus of the “Nation-State” is both global and local, both post-modern and ancient: a glocal network of city-states (GNoCS).

Yes, I know, it sounds awkward. No, I’m not saying that things would necessarily be better in a post-national world. And I have no idea when this shift from the “nation-states” frame to a network of city-states may happen. But I sincerely think that it could happen. And that it could happen rather quickly.

Not that the shift would be so radical as to obliterate the notion of “nation-state” overnight. In this case, I’m closer to Foucault’s épistémè than to Kuhn’s paradigm. After all, while the “Democratic Nation-State” model is global, former social structures are still present around the Globe and the very notion of a “Nation-State” takes different values in different parts of the world. What I envision has less to do with the linear view of history than with a perspective in which different currents of social change interact with one another over time, evoking shifts in polarity for those who hold a binary perspective on social issues.

I started “working on” this post four months ago. I was just taking some notes in a blog draft, in view of a blogpost, instead of simply keeping general notes, as I tend to do. This post remained on my mind and I’ve been accumulating different threads which can connect to my basic idea. I now realize that this blogpost will be more of a placeholder for further thinking than a “milestone” in my reflection on the topic. My reluctance to publish this blog entry had as much to do with an idiosyncratic sense of prudence as with time-management or any other issue. In other words, I was wary of sticking my neck out. Which might explain why this post is so personal as compared to most of my posts in English.

As uninformed as I may seem of the minutiae of national-era political science, I happen to think that there’s a lot of groupthink involved in the way several people describe political systems. For instance, there’s a strong tendency for certain people, journalists especially, to “count countries.” With relatively few exceptions (especially those which have to do with specific international institutions like the United Nations or the “G20”) the number of countries involved in an event has only superficial significance. Demographic discrepancies between these national entities, not to mention a certain degree of diversity in their social structures or even government apparatus, make “counting countries” appear quite misleading, especially when the issue has to do with, say, social dynamics or geography. It sounds at times like people have a vague “political map of the World” in their heads and that this image preempts other approaches to global diversity. This may sound like a defensive stance on my part, as I try to position myself as “perhaps crazy but not more than others are.” But the issue goes deeper. In fact, it seems that “countries” are so ingrained in some people’s minds and political borders are so obvious that local and regional issues are perceived as micro-versions of what happens at the “national level.” This image doesn’t seem so strange when we talk about partisan politics but it appears quite inappropriate when we talk about a broad range of other subjects, from epidemiology to climate change, from online communication to geology, from language to religion.

An initial spark in my thinking about several of these issues came during Beverly Stoeltje‘s interdisciplinary Ph.D. seminar on nationalism at Indiana University Bloomington, back in 2000. Not only was this seminar edifying on many levels, but it represented a kind of epiphany moment in my reflections on not only nationalism itself (with related issues of patriotism, colonialism, and citizenship) but on a range of social issues and changes.

My initial “realization” was on the significance of the shift from Groulx-style French-Canadian nationalism to what Lévesque called «souveraineté-association» (“sovereignty-association”) and which served as the basis for the Quebec sovereignty movement.

While this all connects to well-known issues in political science and while it may (again) sound exceedingly naïve, I mean it in a very specific way which, I think, many people who discuss Quebec’s political history may rarely visit. As with other shifts about which I think, I don’t envision the one from French-Canadian nationalism (FCN) to Quebec sovereignty movement (QSM) to be radical or complete. But it was significant and broad-reaching.

Regardless of Lévesque’s personal view on nationalism (a relatively recent television series on his life had it that he became anti-nationalist after a visit to concentration camps), the very idea that there may exist a social movement oriented toward sovereignty outside of the nationalist logic seems quite important to me personally. The fact that this movement may only be represented in partisan politics as nationalism complicates the issue and may explain a certain confusion in terms of the range of Quebec’s current social movements. In other words, the fact that anti-nationalists are consistently lumped together with nationalists in the public (and journalistic) eye makes it difficult to discuss post-nationalism in this part of the Globe.

But Quebec’s history is only central to my thinking because I was born in Montreal and grew up through the Quiet Revolution. My reflections on a post-national shift are hopefully broader than historical events in a tiny part of the Globe.

In fact, my initial attempt at drafting this blogpost came after I attended a talk by Satoshi Ikeda entitled The Global Financial Crisis and the End of Neoliberalism. (November 27, 2008, Concordia University, SGW H-1125-12; found thanks to Twistory). My main idea at that point was that part of the solution to global problems was local.

But I was also thinking about The Internet.

Contrary to what technological determinists tend to say, the ‘Net isn’t changing things as much as it is part of a broad set of changes. In other words, the global communication network we now know as the Internet is embedded in historical contexts, not the ultimate cause of History. At the risk of replacing technological determinism with social determinism, one might point out that the ‘Net existed (both technologically and institutionally) long before its use became widespread. Those of us who observed a large influx of people online during the early to mid-1990s might even think that social changes were more significant in making the ‘Net what it is today than any “immanent” feature of the network as it was in, say, 1991.

Still, my thinking about the ‘Net has to do with the post-national shift. The ‘Net won’t cause the shift to new social and political structures. But it’s likely to “play a part” in that shift, to be prominently placed as we move into a post-national reality.

There’s a number of practical and legal issues with a wide range of online activities which make it clear that the ‘Net fits more in a global structure than in an “international” one. Examples I have in mind include issues of copyright, broadcast rights, “national content,” and access to information, not to mention the online setting for some grassroots movements and the notion of “Internet citizenry.” In all of these cases, “Globalization” expands much beyond trade and currency-based economy.

Then, there’s the notion of “glocalization.” Every time I use the term “glocal,” I point out how “ugly” it is. The term hasn’t gained any currency (AFAICT) but I keep thinking that the concept can generate something interesting. What I personally have in mind is a movement away from national structures into both a globally connected world and a more local significance. The whole “Think Global, Act Local” idea (which I mostly encountered as the motto “Think Global, Drink Local”). “Despite” the ‘Net, location still matters. But many people are also global-looking.

All of this is part of the setup for some of my reflections on a GNoCS. A kind of prelude/prologue. While my basic idea is very much a “pie in the sky,” I do have more precise notions about what the future may look like and the conditions in which some social changes might happen. At this point, I realize that these thoughts will be part of future blogposts, including some which might be closer to science-fiction than to this type of semi- (or pseudo-) scholarly rambling.

But I might still flesh out a few notes.

Demographically, cities may matter more now than ever as the majority of the Globe’s population is urban. At least, the continued urbanization trend may fit well with a city-focused post-national model.

Some metropolitan areas have become so large as to connect with one another, constituting a kind of urban continuum. Contrary to boundaries between “nation-states,” divisions between cities can be quite blurry. In fact, the same location can be connected to dispersed centres of activity, and people living in the same place can participate in more than one local sphere. Rotterdam-Amsterdam, Tokyo-Kyoto, Boston-NYC…

Somewhat counterintuitively, urban areas tend to work relatively well as sources of solutions to problems in the natural environment. For instance, some mayors have taken a lead in terms of environmental initiatives, not waiting for their national governments. And such issues as public transportation represent core competencies for municipal governments.

While transborder political entities like the European Union (EU), the African Union (AU), and the North American Free Trade Agreement (NAFTA) are enmeshed in the national logic, they fit well with notions of globalized decentralization. As the mayor of a small Swiss town said on the occasion of Switzerland’s official 700th anniversary, we can think about «l’Europe des régions» (“Europe of regions”), beyond national borders.

Speaking of Switzerland, the confederacy/confederation model fits rather well with a network structure, perhaps more than with the idea of a “nation-state.” It also seems to go well with some forms of participatory democracy (as opposed to representative democracy). Not that Switzerland or any other confederation/confederacy works as a participatory democracy. But these notions can help situate this GNoCS.

While relatively rare and unimportant “on the World Stage,” micro-states and micro-nations represent interesting cases in view of post-nationalist entities. For one thing, they may help dispel the belief that any political structure apart from the “nation-state” is a “reversion” to feudalism or even (Greek) Antiquity. The very existence of these entities which are “the exceptions to the rule” makes it possible to “think outside of the national box.”

Demographically at the opposite end of the spectrum from microstates and micronations, the notion of a China-India union (or even a collaboration between China, India, Brazil, and Russia) may sound crazy in the current state of national politics but it would go well with a restructuring of the Globe, especially if this “New World Order” goes beyond currency-based trade.

Speaking of currency, the notion of the International Monetary Fund having its own currency is quite striking as a sign of a major shift from the “nation-state” logic. Of course, the IMF is embedded in “national” structures, but it can shift the focus away from “individual countries.”

The very notion of “democracy” has been on many lips, over the years. Now may be the time to pay more than lip service to a notion of “Global Democracy,” which would transcend national boundaries (and give equal rights to all people across the Globe). Chances are that representative democracy may still dominate, but a network structure connecting a large number of localized entities can also fit other systems, including participatory democracy, consensus culture, republicanism, and even the models of relatively egalitarian systems that some cultural anthropologists have been constructing over the years.

I still have all sorts of notes about examples and issues related to this notion of a GNoCS. But that will do for now.

My Problem With Journalism

I hate having an axe to grind. Really, I do. “It’s unlike me.” When I catch myself grinding an axe, I “get on my own case.” I can be quite harsh with my own self.

But I’ve been trained to voice my concerns. And I’ve been perceiving an important social problem for a while.

So I “can’t keep quiet about it.”

If everything goes really well, posting this blog entry might be liberating enough that I will no longer have any axe to grind. Even if it doesn’t go as well as I hope, it’ll be useful to keep this post around so that people can understand my position.

Because I don’t necessarily want people to agree with me. I mostly want them to understand “where I come from.”

So, here goes:

Journalism may have outlived its usefulness.

Like several other “-isms” (including nationalism, colonialism, imperialism, and racism) journalism is counterproductive in the current state of society.

This isn’t an ethical stance, though there are ethical positions which go with it. It’s a statement about the anachronistic nature of journalism. As per functional analysis, everything in society needs a function if it is to be maintained. What has been known as journalism is now taking on new functions. Eventually, “journalism as we know it” should, logically, make way for new forms.

What these new forms might be, I won’t elaborate in this post. I have multiple ideas, especially given well-publicised interests in social media. But this post isn’t about “the future of journalism.”

It’s about the end of journalism.

Or, at least, my looking forward to the end of journalism.

Now, I’m not saying that journalists are bad people and that they should just lose their jobs. I do think that those who were trained as journalists need to retool themselves, but this post isn’t about that either.

It’s about an axe I’ve been grinding.

See, I can admit it, I’ve been making some rather negative comments about diverse behaviours and statements, by media people. It has even become a habit of mine to allow myself to comment on something a journalist has said, if I feel that there is an issue.

Yes, I know: journalists are people too, they deserve my respect.

And I do respect them, the same way I respect every human being. I just won’t give them the satisfaction of my putting them on a pedestal. In my mind, journalists are people: just like anybody else. They deserve no special treatment. And several of them have been arrogant enough that I can’t help turning their arrogance back to them.

Still, it’s not about journalists as people. It’s about journalism “as an occupation.” And as a system. An outdated system.

Speaking of dates, some context…

I was born in 1972 and, originally, I was quite taken by journalism.

By age twelve, I was pretty much a news junkie. Seriously! I was “consuming” a lot of media at that point. And I was “into” media. Mostly television and radio, with some print mixed in, as well as lots of literary work for context: this is when I first read French and Russian authors from the late 19th and early 20th centuries.

I kept thinking about what was happening in The World. Back in 1984, the Cold War was a major issue. To a French-Canadian tween, this mostly meant thinking about the fact that there were (allegedly) US and USSR “bombs pointed at us,” for reasons beyond our direct control.

“Caring about The World” also meant thinking about all sorts of problems happening across The Globe. Especially poverty, hunger, diseases, and wars. I distinctly remember caring about the famine in Ethiopia. And when We Are the World started playing everywhere, I felt like something was finally happening.

This was one of my first steps toward cynicism. And I’m happy it occurred at age twelve because it allowed me to eventually “snap out of it.” Oh, sure, I can still be a cynic on occasion. But my cynicism is contextual. I’m not sure things would have been as happiness-inducing for me if it hadn’t been for that early start in cynicism.

Because, you see, The World lost interest quite rapidly in the plight of Ethiopians. I distinctly remember asking myself, after the media frenzy died out, what had happened to Ethiopians in the meantime. I’m sure there was some report at the time claiming that the famine was over and that the situation was “back to normal.” But I didn’t hear anything about it, and I was looking. As a twelve-year-old French-Canadian with no access to a modem, I had no direct access to information about the situation in Ethiopia.

Ethiopia still remained as a symbol, to me, of an issue to be solved. It’s not the direct cause of my later becoming an africanist. But, come to think of it, there might be a connection, deeper down than I had been looking.

So, by the end of the Ethiopian famine of 1984-85, I was “losing my faith in” journalism.

I clearly haven’t gained a new faith in journalism. And it all makes me feel quite good, actually. I simply don’t need that kind of faith. I was already training myself to be a critical thinker. Sounds self-serving? Well, sorry. I’m just being honest. What’s a blog if the author isn’t honest and genuine?

Flash forward to 1991, when I started formal training in anthropology. The feeling was exhilarating. I finally felt like I belonged. My statement at the time was to the effect that “I wasn’t meant for anthropology: anthropology was meant for me!” And I was learning quite a bit about/from The World. At that point, it already did mean “The Whole Wide World,” even though my knowledge of that World was fairly limited. And it was a haven of critical thinking.

Ideal, I tell you. Moan all you want, it felt like the ideal place at the ideal time.

And, during the summer of 1993, it all happened: I learnt about the existence of the “Internet.” And it changed my life. Seriously, the ‘Net did have a large part to play in important changes in my life.

That event, my discovery of the ‘Net, also has a connection to journalism. The person who described the Internet to me was Kevin Tuite, one of my linguistic anthropology teachers at Université de Montréal. As far as I can remember, Kevin was mostly describing Usenet. But the potential for “relatively unmediated communication” was already a big selling point. Kevin talked about the fact that members of the Caucasian diaspora were able to use the Internet to discuss with their relatives and friends back in the Caucasus about issues pertaining to these independent republics after the fall of the USSR. All this while media coverage was sketchy at best (sounded like journalism still had a hard time coping with the new realities).

As you can imagine, I was more than intrigued, and I applied for an account as soon as possible. In the meantime, I bought a 2400-baud modem, joined some local BBSes, and got to chat about the Internet with several friends, some of whom already had accounts. I got my first email account just before the semester started, in August 1993. I can still see traces of that account, but only since April 1994 (I guess I wasn’t using my address in my signature before then). I’ve been an enthusiastic user of diverse Internet-based means of communication ever since.

But coming back to journalism, specifically…

Journalism missed the switch.

Over the past fifteen years, I’ve been amazed at how clueless members of mainstream media institutions have been about “the power of the Internet.” Back then, during Wired Magazine’s first year as a print magazine, some friends and I were already commenting on the fact that print journalists should look at what was coming. Eventually, they would need to adapt. “The Internet changes everything,” I thought.

No, I didn’t mean that the Internet itself would cause any of the significant changes that we have been seeing around us. I tend to be against technological determinism (and other McLuhanesque tendencies). Not that I prefer sociological determinism, yet I can’t help but think that, from ARPAnet to the current state of the Internet, most of the important changes have been primarily social: if the Internet became something, it’s because people are making it so, not because of some inexorable technological development.

My enthusiastic perspective on the Internet was largely motivated by the notion that it would allow people to go beyond the model from the journalism era. Honestly, I could see the end of “journalism as we knew it.” And I’m surprised, fifteen years later, that journalism has been among the slowest institutions to adapt.

In a sense, my main problem with journalism is that it maintains a very stratified structure which gives too much weight to the credibility of specific individuals. Editors and journalists, who are part of the “medium” in the old models of communication, have taken on a gatekeeping role despite the fact that they are rarely much more proficient thinkers than the people who read them. “Gatekeepers” even constitute a “textbook case” in sociology, especially in conflict theory. Though I can perceive how “constructed” that gatekeeping model may be, I can easily relate to what it entails in terms of journalism.

There’s a type of arrogance embedded in journalistic self-perception: “we’re journalists/editors, so we know better than you; you need us to process information for you.” Regardless of how much I may disagree with some of his words and actions, I take solace in the fact that Murdoch, a key figure in today’s mainstream media, spoke directly to this arrogance. Of course, he might have been pandering. But the very fact that he would even acknowledge journalistic arrogance, if only as lip service, is, in my mind, quite helpful.

I think the days of fully stratified gatekeeping (a “top-down approach” to information filtering) are over. Now that information is easily available and knowledge is constructed socially, any “filtering” method can be distributed. I’m not really thinking of a “cream rises to the top” model. An analogy with water sources going through multiple layers of mountain rock would be more appropriate for a Swiss citizen such as myself. But the model I have in mind is more about what Bakhtin called “polyvocality” and what has become an ethical position on “giving voice to the other.” Journalism has taken voice away from people. I have in mind a distributed mode of knowledge construction which gives everyone enough voice to have long-distance effects.

At the risk of sounding too abstract (it’s actually very clear in my mind, but it requires a long description), it’s a blend of ideas like: the social butterfly effect, a post-encyclopedic world, and cultural awareness. All of these, in my mind, contribute to the heightened form of critical thinking from which I feel journalism has led us away.

The social butterfly effect is fairly easy to understand, especially now that social networks are so prominent. Basically, it’s the “butterfly effect” from chaos theory applied to social networks. In this context, a “social butterfly” is a node in multiple networks of varying degrees of density and clustering. Because such a “social butterfly” can bring things (ideas, especially) from one such network to another, I argue that her or his ultimate influence (in aggregate) is larger than that of someone who sits at the core of a highly clustered network. Yes, it’s related to “weak ties” and other network classics. But it’s a bit more specific, at least in my mind. In terms of journalism, the social butterfly effect implies that the way knowledge is constructed need not come from a singular source or channel.
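For those who like such ideas made concrete, here is a minimal sketch in Python, assuming the networkx library. The toy graph, the node names, and the “butterfly” label are my own illustrative assumptions, not anyone’s data; the point is simply that a node bridging several dense clusters scores far higher on betweenness centrality (a rough proxy for brokerage influence) than a node buried inside one of them.

    # A toy graph: three dense "communities" plus one bridging node.
    # Everything here is hypothetical, for illustration only.
    import networkx as nx

    G = nx.Graph()
    communities = []
    for c in range(3):
        members = [f"c{c}_n{i}" for i in range(6)]
        communities.append(members)
        # Each community is a complete graph: everyone knows everyone.
        G.add_edges_from(nx.complete_graph(members).edges())

    # The "social butterfly": one weak tie into each community.
    G.add_edges_from(("butterfly", group[0]) for group in communities)

    bc = nx.betweenness_centrality(G)
    # The butterfly sits on nearly every shortest path between communities...
    print("butterfly:", round(bc["butterfly"], 3))
    # ...while a node at the core of one dense cluster brokers almost nothing.
    print("core node:", round(bc["c0_n1"], 3))

On this toy graph, the bridging node ends up carrying virtually all of the cross-community shortest paths while the core node’s score is essentially zero, which is the “weak ties” intuition in miniature.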

The “encyclopedic world” I have in mind is that of our good friends from the French Enlightenment: Diderot and the gang. At that time, there was a notion that the sum of all knowledge could be contained in the Encyclopédie. Of course, I’m simplifying. But such a notion is still discussed fairly frequently. The world in which we now live has clearly challenged this encyclopedic notion of exhaustiveness. Sure, certain people hold on to that notion. But it’s not taken for granted as “uncontroversial.” Actually, those who hold on to it tend to respond rather positively to the journalistic perspective on human events. As should be obvious, I think the days of that encyclopedic worldview are numbered and that “journalism as we know it” will die at the same time. Though it seems to be built on an “encyclopedia” frame, Wikipedia clearly benefits from a distributed model of knowledge management. In this sense, Wikipedia is less anachronistic than Britannica. Wikipedia also tends to be more insightful than Britannica.

The cultural awareness point may sound like an ethnographer’s pipe dream. But I perceive a clear connection between Globalization and a certain form of cultural awareness in information and knowledge management. This is probably where the Global Voices model can come in. One of the most useful representations of that model comes from Chris Lydon’s Open Source conversation with Solana Larsen and Ethan Zuckerman. Simply put, I feel that this model challenges journalism’s ethnocentrism.

Obviously, I have many other things to say about journalism (as well as about its correlate, nationalism).

But I do feel liberated already. So I’ll leave it at that.

The Participating Minority

[Update: The original article was about traffic, not user base. Should have read more carefully. Doh!]

Interesting stats about blogging and “viral participation” from Technorati’s Dave Sifry and Hitwise’s Bill Tancer. Also summarised on Ars Technica.

Bottom line: Despite extreme growth, only small (some would say “positively tiny”) fractions of the user base [traffic] for participatory Websites like YouTube and Flickr contribute any content. New blogs are created but a smaller proportion of them are active. Tagging, however, is taking off.
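Since the update above hinges on the difference between traffic and user base, a quick arithmetic sketch may help; every number below is hypothetical, chosen only to show how far apart the two ratios can be.

    # Hypothetical figures, for illustration only: these are not from
    # Hitwise, Technorati, or any actual measurement.
    total_visits = 1_000_000     # visits to a participatory site in a month
    upload_visits = 1_600        # visits during which content was uploaded
    unique_visitors = 200_000    # distinct users behind those visits
    unique_uploaders = 4_000     # distinct users who uploaded at least once

    # Share of traffic that involves contribution: 0.16%
    print(f"{upload_visits / total_visits:.2%} of visits include an upload")

    # Share of the user base that contributes: 2.00%
    print(f"{unique_uploaders / unique_visitors:.2%} of users ever upload")

A site where only a sliver of visits involve uploading can still have a markedly larger share of its users contributing at some point, which is why conflating the two measures called for the correction above.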

This can all be fascinating, on a social level. One thing that gets me is that those figures challenge a notion widely held among members of the participating minority itself. Even the usual figure of 10%, given for textual contributions to forums, mailing lists, and blogs, seems fairly low to those of us who write a lot, anywhere. In other words, it might well be that individual contributors are proportionally more influential than originally thought.

So, is this a trend toward less participation or are Internet users finding other ways to participate, besides contributing original content? Maybe users spend more time on social networking services like Facebook and MySpace. Even “passive participation” can be important on such services.

One thing people seem to forget is that private communication (email, IM, VOIP…) is alive and well. Not that I have figures to support the claim but my experience tends to tell me that a lot is happening behind closed doors. Oh, sure, it’s not “Web 2.0 culture,” it’s not even Web-based. It’s not even the sixth Internet culture, as it’s more in continuity with the fourth Internet culture of “virtual communities.” But it’s probably more influential, even in “epidemiological” terms, than “viral marketing.”

Internet Democracy

There have been several “If X were a country” analogies, especially with MySpace as a target.

But then:

“If the Internet was a country, it would be many times larger than the country of MySpace.” (The Something Awful Forums)

And then:

“Nicholas Negroponte, the noted futurist and author of ‘Being Digital’, once observed that if the Internet were a country, it would be the nicest place on earth.” (Security and Vulnerability)

Not to mention:

“If the internet were a country, you’d know a relative of almost everybody.” (Scribd)

It’d be interesting to use notions we have about actual countries to follow the analogy further. Some might think that the Internet could have a president but most of us seem to agree that the current structure of the Internet, without a specific “head of state,” works fairly well. We’ve known for a while that ordered anarchy can work:

“In his classic study of the Nuer of the Southern Sudan Evans-Pritchard presents them as naked cattle-herders, seasonally nomadic, living in grass huts and supplementing their diet of animal products by horticulture. They form a congeries of tribes, sometimes gathering into loose federations but without central administration, rulers or grading of warriors or elders, and the age-sets into which they are divided have no corporate function. Evans-Pritchard speaks of ‘leopard-skin chiefs’ among them, but makes it clear that this position is backed by no coercive force. They show some specialisation but nothing amounting to a profession and cannot be said in any strict sense to have law, for there is no authority with power to adjudicate or enforce a verdict. In sum, ‘their state might be described as an ordered anarchy’.” (From Village to Empire)

Associating the ‘Net with that anarchic model isn’t new. What seems to me a bit newer is to call that system “democratic” (especially in the context of User Generated Content, and other “Web 2.0” phenomena).

Even newer, to me at least, is the idea that the open and flexible nature of the Internet, as it was originally designed, might not carry over into redesigns of the Internet.

Should we apply a more democratic model for the new Internet? How far should the “country” analogy affect the way we remodel the ‘Net?

The literature on nationalism and communities could help.