“Post-” Society

(Note: The following is a draft I submitted for publication in a magazine. After some back-and-forth with the editors, I decided not to publish it through that route. I was trying something different, with this text. It was meant to be heavily edited after the fact but it was also a deliberate attempt at mixing things up. A later version, after getting feedback on this draft, has a type of internal dialogue serving as commentary. Wasn’t trying to be cute, but I was playing with a few things.)

Post-industrial, post-Rock, post-colonial, post-PC, post-War, post-Impressionism, post-structural, post-digital…

Perhaps more than any other prefix in the English language, “post-” serves as an index of social change. While it may denote a phase in personal development or bodily function (“post-prandial” might be my favourite), it often pinpoints pivotal events of a social scope.

The issue of scope helps bring up a key characteristic of the “post-” prefix, at least in the way many authors use it. The period it defines would seem to be general, global, universal. Such an assumption of universality represents a key sign of ethnocentrism, a culturally-rooted perspective deemed to apply outside of culture. The notion that “the whole World” lives through the same social change calls for some nuance. Though World War II had impacts globally, the term “post-war” means different things in different parts of this globe. Uncritically assuming the relevance of a “post-” term regardless of context collapses a large variety of social dynamics into a simplified, “worldwide” phenomenon.

Conversely, social commentators apply several of the “post-” terms exclusively to their own contexts, using these terms to delimit a specific social system to which they assign global significance. Though different from the assumption of universality, this distinctive use of “post-” terms demonstrates further forms of ethnocentric thought. Calling one society “post-racial” to the exclusion of other societies deemed to be “racial” serves as self-congratulation more than as insightful analysis.

The case of the term “post-industrial” deserves discussion. Unlike many other “post-” terms, this term denotes a fairly straightforward phenomenon: a shift away from the manufacturing sector, occurring at the end of the 20th Century, accompanied by an emphasis on the service sector and on information technology. Though the term has attained some relevance in non-specialist contexts, some usage patterns come from academic discourse.

For instance, sociological textbooks use the term in diverse contexts. In some of those contexts, the term refers to a general principle, according to which some broad social changes accompany the end of industrialization, across the world.

In other contexts, “postindustrial” takes an evolutionary meaning, marking a key step in the inexorable evolution from “preindustrial” and “industrial” social systems. This usage follows ideas proposed by Gerhard Lenski, who based his “societal taxonomies” on technological distinctions. Though not in itself deterministic, such an emphasis on the power of technology to define social evolution plays admirably into technological determinism. North American popular culture promotes a similar view of history, with societies advancing from one stage to the next along with the appropriation of diverse technologies. Sid Meier’s Civilization games propose a clear version of this model, and developers could substitute “Post-Industrial” for “Future Era” (though the presence of a “Modern Era” after the “Industrial Era” may complicate this progression).

Those usage patterns bear some of the marks of ethnocentrism. Whether the “post-industrial era” represents a worldwide or country-specific phenomenon, a notion coined in a given context comes to apply to a much broader scale.

The diversity of views on social change makes this ethnocentric tendency all the more problematic. Archæological research, at least in its popular forms, proposes a view of social change occurring through a switch from one historical period to the next. Though widespread, this notion has a decidedly non-universal application. Apart from favouring a linear instead of cyclical view of time, “post-” terms imply complete, abrupt, and irreversible changes.

Here lies the crux of the “post-” concept. “Post-” implies a radical shift in social order, a point of no return, a clear boundary between an “after” and a “before”. Though used while focusing forward (toward the future), the “post-” terms cling to preceding events. “Post-Apartheid” underlines the importance of South Africa’s segregationist regime, while “post-Soviet” underscores the secession from the USSR at the end of the Cold War. Though emphasizing change, “post-” terms maintain the memory of past events all the while cutting bridges to the past. In this sense, events labelled through “post-” terms may sound like adolescent rebellion. “No, really, I’m totally a different person, now.”

Labels for artistic genres (and eras) help foreground the rebelliousness of many “post-” terms. From “Post-Impressionism” to “Post-Grunge”, artistic branding proceeds by a rejection of some artistic developments immediately preceding it. Whether or not artists themselves use those terms, their art appears as a reaction to a certain body of artistic activity. Describing novels as “Post-Romantic” and bands as “Post-Bop”, literary and music critics put artistic creativity in a special type of box. Not only do artists typically dislike the attribution of restrictive genre labels to their production, but the “post-” prefix poses a further issue to creators looking forward instead of back. Like a ball and chain, a “post-” genre label restricts movement.

Clearly, creative endeavours involve combinations of ideas and forms regardless of the eras in which they appeared original. Artistic usage of “postmodern” connotes such combinations. Though some may perceive postmodern art as a reaction to modernism, its incarnations bring together forms originating in diverse eras. In this view, postmodern art revolves around mashups, remixes, collages, and other forms of reappropriation. Such a context makes “genre-defying” into a cliché, genre labels appearing quaint to practitioners of postmodern art.

Here, the boxed-in version of social change becomes increasingly awkward. Instead of stable periods separated by radical shifts, postmodern art proposes an intermingling of old and new, of contemporary and ancient elements. Going back and forth between different sets of æsthetic precepts, the postmodern artist can break the shackles of genre labels. Though similar attempts at mixing rules may characterize transition periods between other eras in art history, postmodernism can make this type of blending into an artform.

The output of postmodern creativity can occasionally sound like that of “premodern” art. Yet, as with Pierre Menard’s Quixote (described by Borges), the appropriated version differs from the original through the difference in context. Used in “Jungle” music, a sample from a funk and soul version of a gospel-inspired song takes on meanings very different from the original. Similarly, a collage including a replica of a classic painting brings new significance to that element.

This mixing of elements from different eras may make life difficult for archæologists trying to date a piece of postmodern art by focusing on external features alone. Thankfully, archæological research uses more data than an object’s mere appearance.

Postmodernism in social sciences acknowledges the potential for a blurring of distinctions, whether they concern historical periods, political entities, or social identities.

Part of this revolves around breaking rules, or at least questioning them. We can go back to the teenage idea, from earlier. Postmodernists in social science often make it their duty to revolt against authority. As with postmodern art, this reaction goes beyond the succession of historical eras. Though postmodern thinkers might perceive modernist authors as the main figureheads eliciting insurrection, the rebellion need not focus on a given cast of characters.

Applied to social contexts, “postmodern” matches “postindustrial” as described above, and several social scientists use the terms interchangeably. Postmodern societies focus on information technology and deemphasize manufacturing. Tracing back connections between industrialism and modernism, a peculiar view of history emerges.

Among “post-” terms described here, “Postcolonialism” may have the least currency in mainstream Euro-American discourse. Yet, the notion of a “postcolonial” present opens a new dimension in thinking about social change.

Unlike “postindustrial”, “postcolonial” involves no linear, preordained evolution. Few scholars would perceive the transition away from colonialism as a necessary and natural progression. Teleology need not apply. Like post-Soviet states and post-Apartheid South Africa, postcolonial societies bear the scars of problematized pasts. Though individual members from all of these political entities (Uganda, Belarus, South Africa…) may nostalgically bemoan the loss of past structures, the general theme of independence from former powers resonates with talk of progress. Like other “post-” terms, postcolonialism marks an irrevocable transition.

Post-colonial theorists, like Frantz Fanon and Edward Saïd, go beyond the study of post-independence states. Given an association between colonial and “Western” thought, decolonialization entails a move away from Eurocentrism. Here, nuance and the acknowledgment of diversity help break the moulds in which colonialism has tried to place ideas. In this sense, postcolonialism shares with postmodern art a propensity for boundary defiance.

A peculiarity of postcolonialism, among “post-” terms, lies in the fact that scholars who use the term consider colonial aspects of the present time period. In other words, despite the “post-”, postcolonialism occurs in parallel with colonialism. Most “post-” terms imply a finished state, equivalent to perfect verb forms in English grammar: “We have done this (modernism, Grunge, industrialization, Impressionism…) and can now move on”. Colonialism, in post-colonial theory, moves on.

Feminism provides a useful angle from which to tackle postcolonialism. Over the last several decades, different phases of feminist thought have served as an academic equivalent to “disruptive innovation”. Though present in academic writing since its inception, feminism only began disrupting academia once scholars began acknowledging the impact of inequalities on their work. This acknowledgment corresponds loosely with both the onset of “Second wave feminism” and the so-called “Human Rights Movement”. Postcolonialism took root during the same period, as diverse groups gained independence from colonial powers. Yet, more than sharing a 1960s timeframe, the feminist impact and the postcolonialist spark both demonstrate the power to challenge established order.

Postcolonialism, like Second wave feminism, problematizes scholars’ complacency and obliviousness. In both cases, overwhelmingly large contingents (women and colonized people) have asked academics: “But have you thought about us?”. These contingents’ appropriate representation in academic contexts has preoccupied scholars ever since. Even more importantly, the notion that these contingents may have insight to share on a variety of topics served as a wake-up call for social scientists and other practitioners.

At this point, you may conceive of several “post-” terms as connected with one another to form a type of conceptual network. For instance, though postcolonial societies may differ from postindustrial ones (in fact, few societies belong to both taxonomies), links between the two concepts appear clearly once we take into consideration a basic tenet of Dependency Theory.

Similar to people associated with “post-” terms, dependency theorists rebelled against Modernization Theory which posited a linear and universal evolution toward modernity and postindustrial systems. As set out by diverse authors from Prebisch to Wallerstein, Dependency Theory claims (among other things) that material conditions of former colonies relate to the control exercised by those states most often classified as postindustrial. According to Dependency Theory, the postindustrial markets force other societies (especially postcolonial ones) to maintain industrial economies. In this context, Dependency Theory represents Modernization Theory’s “post-”.

Several “post-” terms act as temporal indices, marking the passage from one historical phase to another. Dates can then fill conceptual gaps between them. Occurring simultaneously but in distant locations (say, Estonia and South Africa), significant changes necessitating the creation of new “post-” terms may serve as contexts for one another. Because of this synchronicity, certain time periods in the recent past appear particularly significant. The formula “X occurred; meanwhile, Y happened” strikes many observers’ fancy. For instance, the early 1960s marked the beginning of diverse independence movements throughout Africa and the early 1990s marked the end of both Soviet and Apartheid regimes. Causal connections exist between some of these events, especially through yet other historical events (World War II and the Cold War serving as major turning points). Unsurprisingly, connections between these significant events have provided fodder for proponents of linear views of history. At the same time, though, these connections allow for alternative views of history, some of which may not strictly follow linear thinking.

More specifically, event-based “post-” terms provide a context for debates about discontinuity and continuity. Do the shifts implied by these “post-” terms cause the disappearance of the conditions which made a previous order possible? Can modern and postmodern, colonial and postcolonial, Rock and post-Rock coexist? What signs point to the end of a transition period from an era to its “post-”?

Siding with Michel Foucault’s episteme instead of upholding Thomas Kuhn’s “paradigm” allows us to take into account coexisting frames of reference. Both Kuhn and Foucault talked about the history of ideas, the temporal dimension of epistemology. However, a key distinction between them can serve to foreground the important debate about continuity through change. While Kuhn focused on what Stephen Jay Gould called punctuated equilibria, one paradigm succeeding another, Foucault left room for the type of complex interaction we have seen in postcolonialism and postmodernism. Though we may evaluate the relative currency of diverse epistemes, one episteme need not replace another. Though difficult to conceive, a mix of epistemes could evoke a postmodern collage or the dialectic tension between postcolonial societies and imperialism.

The current trend for “post-” terms (some would call it a “fad”) tends to emphasize radical shifts instead of complex blends. Used to the archæological and often teleological views of history, social commentators prefer clear-cut concepts like “post-War” and “postindustrial” to fluid ones like “postmodern” and “postcolonial”. However, looking at continuities instead of shifts may help us situate the deeper significance of “post-” terms.

Though narrower in usage than most other “post-” terms, the so-called “Post-PC era” merits some attention, here, as it evokes a radical shift while demonstrating the value of fluidity.

At its root, the “post-PC” concept revolves around a shift in form factor. Following the lead of a certain corporate executive, industry analysts now label smartphones, tablets, and wearable computers as “post-PC devices”. These devices require a “post-” term because laptops and desktops already own the term “PC”.

Personal computing still matters, probably more than before, but new devices differ significantly from devices commonly labeled as PCs (laptops and desktops, for instance). In the 1990s, we could have labelled “personal digital assistants” as personal computers. Post-PC devices share many characteristics with PDAs. The distinction between the two types of devices relates directly to events unfolding in the meantime. To stretch the analogy, a PDA released today (with the same technical specifications as those built in the 1990s) could represent both old technology and a post-PC device.

The term “post-PC” indexes a transition, as we speak, from personal computers to other computing devices. Such transitions have occurred throughout the history of computing. Together, they provide support for Kurzweil’s (and, to a lesser extent, Vinge’s) version of “technological singularity”. The assumption of a grammatical perfect, embedded in “post-” terms, connotes a key distinction between the Post-PC era and the transition periods which preceded it. Despite comments to the contrary made by the corporate executive who popularized it, the term “Post-PC” would seem to imply an expected end to personal computers as we know them. “We have done personal computers, now we can move on.” Of course, computer makers will continue to build PCs alongside “post-PC devices” in the foreseeable future. But the radical shift announced by the “post-” term may make it impossible for PCs to retain their original significance. As we have seen with other “post-” terms, objects change their meaning through radical shifts. In practice, one might expect that PC usage will shift as “post-PC devices” increase in importance: some may become personal servers or hubs, while others could be shared in distributed systems (“Can you imagine a Beowulf cluster of these?”).

Implicitly, the post-PC concept applies globally. In line with a popular version of Modernization Theory, this concept implies that “one day, everyone will join us in the post-PC world”. As with a common reading of Everett Rogers’s Diffusion of Innovations, technophiles assume that adoption of certain tools will reach saturation once “laggards” follow the lead set by “early adopters”. Even if this pattern may help predict market penetration, it provides little insight on the human dimension of technology, which includes knowledge and usage.

Tools on their own signify close to nothing. They become meaningful technology through such things as use cases, literacy, and subversion. Not only may the same tools mean different things according to context, but people often use diverse sets of tools to accomplish the exact same goals. The concept of “technological appropriation” involves both usage and knowledge, the human components of technology. Appropriating a tool involves placing this tool in a broader context. Unintended uses of a tool represent a remarkable form of reappropriation based on usage. Skills needed to master a tool (a smartphone or a saxophone, for instance) contribute to technological appropriation; so does the tool’s design. By creating a new tool, a user displays a high level of knowledge related to that tool. Such an appropriation pattern runs at the core of Steven Levy’s “hacker ethic”.

In that sense, people around the world may surprise observers by their use of PCs and post-PC devices alike. One part of this surprise may come from the “leapfrog effect”, which John Perry Barlow and Nicholas Negroponte have both discussed (itself a partial challenge to stricter versions of Modernization Theory). Using post-PC devices requires no prior use of PCs, the same way that cellphones need not follow landline phones. Further, as Heather Horst, Mizuko Ito, and others have demonstrated in the case of cellphones, tools find diverse applications across the World, often challenging assumptions about patterns of technological adoption. Thus, those devices which some label as “post-PC” may in fact open possibilities outside of personal computers as we know them.

On the other hand, reappropriation of personal computers may also surprise those who believe in a post-PC future. As conceived from a folkloristic perspective, one of functionalism’s key insights lies in the fact that social groups maintain a practice if (and only if) it fulfills a function. Personal computers may still fulfill a function in the future, though that function may be radically different from what we currently consider to be personal computing.

Already, shared use of PCs has shifted their place in computing. A family’s desktop computer, with or without separate user accounts, serves a purpose quite different from that of the original personal computer. A cybercafé’s terminals represent another case of PC reappropriation. You can find further examples of reappropriation in case modifications and even subversive uses of PC cases (the “Macquarium”, for instance). Reappropriated PCs may become something new, sometimes quite unlike a PC.

Examples of creative approaches to technological appropriation abound. In Amélie, Raymond Dufayel’s use of a camcorder as a clock may appear as a waste, given relative prices of clocks and camcorders. Conversely, milk crates used as bookshelves connote thriftiness.

Some reappropriations (often imposed by external conditions) have problematic consequences, as documentaries on “E-Waste” make painfully clear. Technological appropriation takes many forms, often unexpected ones. Teleological approaches to technological development, as the one embedded in the term “post-PC”, often mislead.

Several “post-” terms lead us to think about Globalization… as a “post-national” notion. Defining “global” in contradistinction to “international” helps us along the way. Global corporations differ from multinationals in working at such a high level that national governments matter less than production, distribution, and consumption networks. Most of these enterprises may have corporate headquarters located in specific countries, but entrepreneurial activities may happen outside of these headquarters, distributed across facilities located around the World.

Globalization and postnationalism do differ. Opponents and defenders of worldwide trade can play a semantics game, as they position themselves along dividing lines. Social scientists opposed to economic globalization often support (or even predict) a move to “democracies without borders”. Conversely, economic globalization’s defenders may very well oppose a postnational system of governance, on the grounds that it could lead to totalitarianism. Still, the core notion behind both postnationalism and globalization remains stable, regardless of ideology: a significant decrease in the relevance of nation-states.

Though taken as an ancient concept, the nation-state constitutes a relatively recent innovation. Whether or not national identities resulted, as Eric Hobsbawm and Benedict Anderson have argued, from sociopolitical maneuvering, the strict association between a country and its exclusive “nation” (descendants of the same “people”) mostly came about during the last 200 years, in the Americas and Europe. Prior to this shift, states consisted mostly of kingdoms and empires, with other political entities working outside of the state model. (Since Elman Service, cultural anthropologists have defined these other models as bands, chiefdoms, and tribes. Though Service meant his typology to be evolutionary and these models display increasing complexity, most contemporary scholars use these terms without evolutionary undertones.) Basing themselves on national identity instead of a monarchic order, nation-states restrict ethnic diversity.

The national era shrouded key concepts in mystery. Though used interchangeably, “citizenship”, “nationality”, “ethnicity”, and “national identity” differ considerably. The potential for confusion came from the fact that nation-states focus on ancestry, with the core citizenry belonging to a single ethnic group. Thus, Italians traced back their roots to Etruscans and French people got West Africans to talk about “Our ancestors, the Gauls”. Many claims to national ancestry serve nationalistic self-interest more directly than historical accuracy. When the nationalist creation of Germany led Wilhelm and Jacob Grimm to collect fairytales, the Brothers Grimm likely had no idea that many of these tales had diffused from Asia and elsewhere.

At the present time, almost all human beings belong to sovereign states, yet few of these states adhere even loosely to the nation-state model. Exceptions such as Somalia, Japan, and Armenia have limited ethnic diversity. In this sense, most of us already live in a postnational world. National identities still matter, possibly even more than in the recent past, sustaining ethnic pride or causing conflicts. Yet nation-states now appear mostly as figments of nationalists’ imaginations.

Editors of The Alpine Review describe their publication as a “post-digital” magazine. I agree that we can only conceive of this publishing model once digital forms have taken hold. However, the “post-” term could mislead if it gave one the idea that we have already gone “beyond digital”. Of course, diverse authors propose different versions of the term. Deloitte’s usage deviates widely from Roy Ascott’s, though both relate to economic conditions. The core point remains that digital technology only defines part of our existence.

As “post-” terms go, though, “post-digital” relates more directly to continuous co-occurrence, like “post-PC” and “postcolonial”, than to eventful shifts like “post-Impressionism” and “post-Apartheid”. It seems clear to most that digital and print (in its post-digital version) will coexist for a while. More generally, digital and analog maintain their relevance, regardless of technocentrism. Digital technology may affect us deeply, but we remain physical beings.

Alex Enkerli

Unless, that is, we all become “post-humans”.

Technological Reappropriation Projects

A few projects illustrating technological reappropriation, or how to move beyond the “digital divide”.


Making your own objects is a way of short-circuiting production chains, unequal relationships across the globe, and the very notion of property. We will discuss concrete examples of FabLabs and citizen innovation, in Québec as in Africa, to reflect together on the social implications of these technological movements.



Alexandre Enkerli became interested in the social dimensions of technology upon buying his first computer, a Commodore VIC-20, in the early 1980s. Since then, he has been both actor and observer within what we now call “geek culture”. Besides his research work in the ethnography of technology, he teaches anthropology and sociology at Concordia University.

Internet and Privilege

Part of what was going through my mind, writing that Internet nostalgia post, was the notion that my being granted Internet access in August of ‘93 was a privilege. Quite literally. By backing up my request for an account on the Mistral.ERE.UMontreal.CA machine, Kevin Tuite was granting me access to a whole wide world, mostly undiscovered by other undergraduate students. Knowing how justifiably strict André Earl Paquet (UdeM SysAdmin) was, the fact that I got on ERE at such an early stage is rather significant.

It’s not the only time I was allowed access to restricted areas, “before my time”. Often with teachers. For instance, I’m still moved by a strong musical moment in which I’ve had the privilege to participate as a student in a music daycamp. The camp’s instructors were hanging out at the end of the day and I was waiting for a ride with one of them. I was the only student there and the age difference (I was 13 and they were 19 or 20) should have mattered. The point is, we all lay down on the floor with lights off and we all started… vocal improvisations over the sound of a vending machine. Deep.

Part of my privileged access to teachers might have been related to the fact that my father was a teacher and I perceived his colleagues as normal human beings. In fact, I was only a kid when I witnessed a group of teachers cry. In a tiny-scale version, it’s distantly related to African soldiers fighting alongside colonials and seeing fear in their eyes. I know how far apart those two situations sound. But there’s something significant about hierarchy: it often relies on flimsy masks.

But back to the Internet. I was privileged in my early access. I’m still privileged with better access to the ‘Net than a large part of the population of the planet, though there are hundreds of millions of us with such access. In this sense, I’m on “the better side of the Digital Divide”. I’m also privileged with working knowledge of a number of tools, which I acquired through many ways which are still inaccessible to most people on the planet. Not only was my university education part of this but the fact that I was getting a steady (though relatively low) salary during that Summer of 1993 meant that I could spend that formative time online.

The “classic” (almost textbook) example of privileged access to the Internet is Bill Gates, though he’s occasionally been portrayed as a “self-made man”. Of course, the concept has a specific meaning in financial circles. But deep privilege is often hidden by the Horatio Alger connotations of that concept. Not to take anything away from Gates’s business acumen and programming chops, but I find it important to point out that, in the 1970s, it would have been extremely unlikely for a computer mogul to emerge out of a rural single-parent low-income family in the US Heartland.

“But”, I hear some sociology students say, “that’s just life! It’s not ‘privilege’! Would you say that Gates was privileged by when he was born?”

Why, yes, I probably would call that “privilege”. That’s a big part of what we mean by privilege, in sociology: arbitrary conditions which imply easier access to key resources. Even such a thing as going to a school which had decent computer labs at a time when most schools didn’t is significant privilege.

“Oh, but, but…”, some of the same students might say, “that means nothing, then. Success is still 90% hard work.”

You’re engineering majors, right?

“What does this have to do with anything?”

Depending on how you think about determinism, that might be accurate. But I’d say it’s misleading. Some people might talk about “luck” instead of privilege, and assign it a 10% influence. But it’s at least an enabling factor in this model and it might be a whole lot more. If “success” doesn’t happen without “luck”, the proportional impact of “luck” is a moot point.

“C’m’on!”, students continue, “Bill Gates had to work hard! He wasn’t born with a silver spoon in his mouth!”

I don’t dispute that. I’d be very surprised if Gates had an actual silver spoon in his mouth at birth and I don’t think it’d have been that useful for him. But I’m saying that privilege is something we do well to put in context.

“Now you’re playing with us!”

Yep. I love to play. But there’s an important idea, here, which may help you understand sociology:

Privilege is often invisible to those who hold it.

Can you do well in a challenging situation without being told what an inspiration you are?


Check out the “Invisible Knapsack”.


It’s an assignment!


Twenty Years Online

This month marks the 20th anniversary of my first Internet account. I don’t remember the exact date but I know it was in late summer 1993, right before what became known as “Eternal September”. The Internet wasn’t new, but it still wasn’t on most people’s proverbial “radars”.

Had heard one of my professors, Kevin Tuite, talk about the Internet as a system through which people from all over the World were communicating. Among the examples Tuite gave of possibilities offered by the ‘Net were conversations among people from former Soviet Republics, during this period of broad transitions. As a specialist of Svaneti, in present-day Georgia, Kevin was particularly interested in these conversations.

During that fateful Summer of ’93, I was getting ready to begin the last year of my B.Sc. in anthropology, specializing in linguistic anthropology and ethnomusicology. As I had done during previous summers, I was working BOH (back of house) at a French restaurant. But, in my free time, I was exploring a brand new world.

In retrospect, it might not be a complete coincidence that my then-girlfriend of four years left me during that Fall 1993 semester.

It started with a local BBS, WAJU (“We Are Joining You”). I’m not exactly sure when I got started, but I remember being on WAJU in July. Had first been lent a 300 baud modem but I quickly switched to a 2400 baud one. My current ISP plan is 15Mbps, literally 50,000 times faster than that original 300 baud connection.
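That 50,000× figure checks out as simple arithmetic; here’s a quick sketch, assuming 1 baud ≈ 1 bit per second (true for those early modem standards):

```python
# Back-of-the-envelope check of the speed jump, assuming 1 baud ≈ 1 bit/s
# (a safe approximation for the 300 and 2400 baud modems of the time).
original_bps = 300            # first borrowed modem
upgraded_bps = 2_400          # its quick replacement
current_bps = 15_000_000      # a 15 Mbps ISP plan

print(current_bps // original_bps)   # → 50000
print(current_bps // upgraded_bps)   # → 6250
```

So the “50,000 times” comparison is against the 300 baud modem; even against the 2400 baud upgrade, the jump is still four orders of magnitude.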

By August 1993, thanks to the aforementioned Kevin Tuite, I was able to get an account on UdeM’s ERE network, meant for teaching and research (it stood for «Environnement de recherche et d’enseignement»). That network was running on SGI machines which weren’t really meant to handle large numbers of external connections. But it worked for my purpose of processing email (through Pine), Usenet newsgroups, FTP downloads (sometimes through Archie), IRC sessions, individual chats (through Talk), Gopher sites, and other things via Telnet. As much as possible, I did all of these things from campus, through one of the computer rooms, which offered amazingly fast connections (especially compared to my 2.4kbps modem). I spent enough time in those computer rooms that I still remember a distinct smell from them.

However, at some point during that period, I was able to hack a PPP connection going through my ERE account. In fact, I ended up helping some other people (including a few professors) do the same. It then meant we could use native applications to access the ’Net from home and, eventually, browse the Web graphically.

But I’m getting ahead of myself.

By the time I got online, NCSA Mosaic hadn’t been released. In fact, it took a little while before I even heard of the “World Wide Web”. I seem to remember that I only started browsing the Web in 1994. At the same time, I’m pretty sure one of my most online-savvy friends (likely Alex Burton or Martin Dupras) had told me about the Web as soon as version 1.0 of Mosaic was out, or even before.

The Web was a huge improvement, to be sure. But it was neither the beginning nor the end of the ‘Net, for those of us who had been there a little while. Yes, even a few months. Keep in mind that, at the time, there weren’t that many sites, on the Web. Sure, most universities had a Web presence and many people with accounts on university networks had opportunities to create homepages. But there’s a reason there could be Web directories (strongly associated with Yahoo!, now, but quite common at the time). Pages were “static” and there wasn’t much which was “social” on the Web, at the time.

But the ’Net as a whole was very social. At least, for the budding ethnographer that I was, the rest of the ‘Net was a much more interesting context for observation than the Web. Especially newsgroups and mailing lists.

Especially since the ‘Net was going through one of its first demographic explosions. Some AOLers were flooding the ‘Net. Perhaps more importantly, newbie bashing was peaking and comments against AOL or other inexperienced “Netizens” were frequently heard. I personally heard a lot more from people complaining about AOL than from anyone accessing the ’Net through AOL.

Something about the influx which was clear, though, is that the “democratization” was being accompanied by commercialization. A culture of open sharing was being replaced by corporate culture. Free culture was being preempted by a culture of advertising. The first .com domains were almost a novelty, in a ‘Net full of country-specific domains along with lots of .edu, .net, .org, .gov, and even .mil servers.

The ‘Net wasn’t yet about “paying for content”. That would come a few years later, when media properties pushed “user-generated content” into its own category (instead of representing most of what was available online). The ‘Net of the mid-1990s was about gaining as much attention as possible. We’re still in that mode, of course. But the contrast was striking. Casual conversations were in danger of getting drowned by megaphones. The billboard overtook the café. With the shift, a strong sense of antagonism emerged. The sense of belonging to a community of early adopters increased with the sense of being attacked by old “media types”. People less interested in sharing knowledge and more interested in conveying their own corporate messages. Not that individuals had been agenda-free until that point. But there was a big difference between geeks arguing about strongly-held opinions and “brands” being pushed onto the scene.

Early on, the thing I thought the Internet would most likely disrupt was journalism. I had a problem with journalism so, when I saw how the ‘Net could provide increased access to information, I was sure it’d imply a reappropriation of news by people themselves, with everything this means in the spread of critical thinking skills. Some of this has happened, to an extent. But media consolidation probably had a more critical role to play in journalism’s current crisis than online communication. Although I like to think of these things as complex systems of interrelated trends and tendencies instead of straightforward causal scenarios.

In such a situation, the ‘Net becoming more like a set of conventional mass media channels was bad news. More specifically, the logic of “getting your corporate message across” was quite offputting to a crowd used to more casual (though often heated and loud) conversations. What comes to mind is a large agora with thousands of people having thousands of separate conversations being taken over by a massive PA system. Regardless of the content of the message being broadcast by this PA system, the effect is beyond annoying.

Through all of this, I distinctly remember mid-April 1994. At that time, the Internet changed. One might say it never recovered.

At that time, two unscrupulous lawyers sent the first commercial spam on Usenet newsgroups. They apparently made a rather large sum of money from their action but, more importantly, they ended the “Netiquette” era. From that point on, a conflict has emerged between those who use and those who abuse the ‘Net. Yes, strong words. But I sincerely think they’re fitting. Spammers are like the Internet’s cancer. They may “serve a function” and may inspire awe. Mostly, though, they’re “cells gone rogue”. Not that I’m saying the ‘Net was free of disease before this “Green Card lottery” moment. For one thing, it’s possible (though unlikely) that flamewars were somewhat more virulent then than they are now. It’s just that the list of known online woes expanded quickly with the addition of cancer-like diseases. From annoying Usenet spam, we went rather rapidly to all sorts of malevolent large-scale actions. Whatever we end up doing online, we carry the shadow of such actions.

Despite how it may sound, my stance isn’t primarily moral. It’s really about a shift from a “conversational” mode to a “mass media” one. Spammers exploited Usenet by using it as a “mass media” channel, at a time when most people online were using it as a large set of “many-to-many” channels.

The distinction between Usenet spam and legitimate advertising may be extremely important, to a very large number of people. But the gates spammers opened were the same ones advertisers have been using ever since.

My nostalgia for the early Internet has a lot to do with this shift. I know we gained a lot, in the meantime. I enjoy many benefits from the “democratization” of the ‘Net. I wouldn’t trade the current online services and tools for those I was using in August 1993. But I do long for a cancer-free Internet.

Confessions of an App Buyer

When it comes to apps, I’m clearly a tire kicker. After deleting a few from the US App Store (now that I live in Canada), I have 943 .ipa files in my “Mobile Applications” folder. Most of them were free. Some (especially a few music apps) were rather expensive. I have 104 apps installed on my iPad, 116 on my iPhone. There’s some overlap but actually not that much.

Apps I Use the Most


On the iPhone, several of the apps I use the most are stock apps.

Stock Apps

  • Mail
  • Alarm
  • Safari
  • Messages
  • Calendar
  • Settings
  • Find My Friends (Not officially a stock app, but close enough)
  • Camera
  • App Store
  • Phone
  • Music
  • Photos
  • Reminders

Quick Services

I use a number of apps for quick services, like looking up information or posting an update:

  1. Drafts
  2. Facebook
  3. Twitter
  4. Foursquare
  5. Weather
  6. STM Mobile
  7. Google Maps
  8. SoundHound
  9. ING Direct
  10. LinkedIn
  11. YouTube
  12. Virgin Mobile Members’ Lounge
  13. Timer
  14. Wikipanion
  15. Facebook Messenger
  16. Pushmail
  17. 1Password
  18. Jawbone UP
  19. fitbit

I don’t really use other apps on a regular basis.


On the iPad, the situation is rather different.

Stock Apps

These are the stock apps I use regularly on the iPad:

  1. Mail
  2. Safari
  3. Messages
  4. Settings
  5. Calendar
  6. App Store

Regular Apps

I use all of the following apps on a regular basis:

  1. 🙂 Sudoku +
  2. Downcast
  3. Solebon Pro
  4. Rdio
  5. Drafts
  6. Facebook
  7. Twitter
  8. Dropbox
  9. Wikipanion
  10. iBiker
  11. 1Password
  12. YouTube
  13. Google Maps
  14. Day One

Apps for Teaching and Research

When I teach and/or am active in research, I use these apps on a regular basis:

  1. Keynote
  2. GoodReader
  3. iThoughts HD
  4. Notability
  5. OmniOutliner

App Value

If I get to think about value and cost, there are some clear differences. Some of the apps I use regularly are part of a paid service (Virgin Mobile…), have to do with a hardware device (Jawbone UP and fitbit), or come with a freemium service (Rdio and Dropbox). Other apps have to do with ad-based services (YouTube, Facebook, Twitter…).

And then, there are the one-time purchases:

  1. Keynote
  2. iThoughts HD
  3. GoodReader
  4. Notability
  5. Solebon Pro
  6. 🙂 Sudoku +
  7. Downcast
  8. Drafts
  9. iBiker
  10. OmniOutliner
  11. 1Password
  12. Wikipanion
  13. Day One
  14. STM Mobile
  15. Timer

The first ten are particularly interesting, I find. They’re pretty much in decreasing order of value, but not in decreasing order of price. OmniOutliner is the most expensive but, if I’m honest with myself, I don’t draw that much value from it. Maybe the situation will change when OmniOutliner 4 for Mac comes out, but I doubt it. I’d rather use an iPad version of FoldingText.

Teaching and Research Workflow

In some ways, Apple’s Keynote is part of the justification for me to have an iPad. I don’t have a laptop, anymore, and I use “slides” in the classroom. Not really as part of the “presentation”, more as a way to structure the class meeting. It’s not really ideal, but it ends up working quite well in my workflow. I’ve been thinking about, looking for, and using several other solutions over the years. For instance, I used to create printable and screen-friendly PDF files using OmniOutliner and LaTeX. And I’ve used the classroom desktop to edit some slides during class time. For instance, I might ask students to create exam questions and I’d add them to the slides during class time. But presentation software (including PDFs) never really covers my whole teaching workflow.

In this sense, iThoughts HD is a neat addition to my workflow, and some students have commented on it. I don’t really use it for “traditional” mindmapping. In my case, it’s more of a tool for brainstorming with students. For instance, I can ask the class for some themes connected to the material with which they’ve been working. I might rearrange some of these, or group them. Used to do this on slides, but the mapping format helps a bit. Plus, it’s easy to export those items to a list that I then add to our course site.

GoodReader is also part of my teaching (and research) workflow. For some of my courses, most of the texts we use are available as PDFs. Using GoodReader, I annotate these texts in my own “special” way, which makes it easy afterwards to create outlines or other material for the class meetings. In fact, this process is so useful that I’ve been scanning several texts to make sure I could use GoodReader with them. As I also use GoodReader for research-related texts, I might also start transforming Web content to PDFs. (GoodReader used to be even more useful to me as, before Dropbox for iOS came about, I was using it as a “deposit box” for PDFs.)

Notability is also part of my research and teaching workflow. I’ve used it in the field as an alternative to my LiveScribe “smartpen”, as I can take notes paired with audio recordings, which is a particularly useful thing to do during an open-ended interview or a meeting. I’ve used it in class in the same way, when I’ve had guests. I kind of wish I could use it to create “ProfCasts” during class time.

Speaking of wishlists, I would probably “pay good money” for the optimal tool in my teaching and research workflow. Not an “everything but the kitchen sink one-stop shop for all of my needs”. That’s usually painful-to-use bloatware. But something which fits my workflow like the ideal mattress or slipper. Part of what I’m thinking about is the way Horace Dediu uses the Perspective app, which was partly developed with his workflow in mind. My own workflow is almost the complete opposite of Horace’s. Basically, though I do use “presentation software”, I try not to “present” material that I previously created. In fact, my dream scenario has a bit more to do with the Donahue app than with Perspective. It could even have something to do with web>clicker, though I’ve been on the record about my distaste for these proprietary solutions.

Games and Podcasts

Though it may sound trivial, I do draw quite a bit of value from the two casual games and the podcatcher on my list. In fact, a very common behaviour for me on my iPad is to switch between the two casual games as I listen to podcasts. Downcast is my current podcatcher, but the value I derive from it has to do with the podcasts themselves. As with weather apps and many productivity apps, no podcatcher is the ideal solution for me. I could imagine a Netflix-like subscription service which would add a lot of value to my podcast listening. Solebon Pro and 🙂 Sudoku + are my favourite casual games by far. I’ve been using Solebon Solitaire apps since my PalmOS days. In some ways, I feel bad that I haven’t paid more for those apps but I probably wouldn’t have paid more. However, I’d gladly support a crowdfunding campaign from either of these developers.

Other Neat Apps

The Drafts app is an interesting case. I only discovered it fairly recently, but it’s the kind of app which makes me rethink my workflow. I already get quite a bit of value from it, but I know I could do more with it. For instance, by creating an “action” to append content to a plain text file in Dropbox, I’ve made it into the ideal tool for me to send tasks to my “GTD inbox”. This is an app for which I could imagine “extras”, including paid ones. Could be tricky, but there might be something there.
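That “append content to a plain text file” action boils down to something quite simple. A minimal sketch of the idea, where the function name, file name, and line format are all illustrative stand-ins rather than the actual Drafts configuration:

```python
from datetime import date
from pathlib import Path

# Hypothetical sketch of an "append to inbox" action: add one task line
# to a plain-text file. In practice the file would sit in a Dropbox
# folder; the name and line format here are purely illustrative.
def append_to_inbox(task: str, inbox: Path) -> None:
    with inbox.open("a", encoding="utf-8") as f:
        f.write(f"- {task} ({date.today().isoformat()})\n")

inbox = Path("inbox.txt")  # stand-in for a Dropbox-synced file
append_to_inbox("Prepare class outline", inbox)
```

Because the action only ever appends, nothing already in the inbox file gets touched, which is what makes it safe as a capture tool.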

Unlike fitbit and Jawbone UP, the iBiker app is a standalone third-party app. Despite the name, it’s not just about biking. I’ve chosen it as the app I use to track my workouts, especially walking and exercise biking. It connects with my ANT+ sensors (a heartrate strap and a footpod) via a Wahoo Fitness dongle. It’s similar to many other apps, but I chose it over others because it’s available on the iPad. Partly because of battery use, I prefer using my iPad for these things. This is an app which connects with a freemium service but, unlike Dropbox and Rdio, most of its value comes from the app itself (at least in my case). I do use it to sync with fitbit, but there could (and perhaps should) be better ways to do this.

OmniOutliner for Mac used to be a very important app, for me. I derived quite a bit of value from that desktop app and my teaching workflow was even tied to it, for a while. I’ve since switched much of my Mac OS outlining to Hog Bay Software’s FoldingText which, like the Drafts app for iOS, is unfolding as a really neat solution. I’ve tried a number of outliners on iOS and, for a while, I was quite happy with Hog Bay Software’s TaskPaper. However, because Jesse Grosjean is now focusing on FoldingText, I’ve mostly abandoned TaskPaper. I feel like we’re in a transition period before we can get a FoldingText(-friendly) app on iOS. In the meantime, I’ve been using OmniOutliner for iOS a bit more. The fact that I’m beta-testing OmniOutliner 4 on Mac OS is also part of it. But, unfortunately, I can’t say OmniOutliner is that useful to me right now.

App Costs

App developers are fond of talking about the App Store. Marco Arment (whose posts about the App Store prompted this post) has devoted a significant portion of his (dearly missed) Build and Analyze podcast to questions surrounding the App Store. Before releasing Vesper, John Gruber linked to items preemptively defending his app’s price. And I’ve read enough versions of the “app buyers are cheap” attitude that pressure has been building up.

So, in this sense, this post is a follow-up to the following posts on app prices and business models:

The last one is about the Mac App Store, and I have a lot more to say about Mac software, in general, and the MAS specifically. But that will have to wait for another post. App bundles will probably be a significant part of it.

“App Discovery” Is Expensive

During the past five years, I’ve spent quite a bit of money on software (both on iOS and on Mac). Probably not nearly as much as I’ve spent on hardware, but still a significant amount. And, quite likely, more than I had spent in the previous twenty years. Altogether, the software from which I derive the most value has probably cost me a small fraction of what I’ve spent overall. Which means that most of the money I’ve spent on software is for things from which I derive little to no value. In other words, my benefit/cost ratio in apps is fairly low. It’s as if I had paid several times more money than I actually did, for these few apps that I really find useful in my workflow. Developers of those valuable apps didn’t get that money from me. But other developers (and, in the case of App Store apps, Apple) did get some of my money for things that I don’t use. You could say that this money was spent in “app discovery”. If you add the inordinate amount of time spent trying these apps, the lost value is actually pretty high. In fact, because of the time and effort in finding and trying apps, it makes little difference whether those apps are paid or not.
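The “as if I had paid several times more” point is just a ratio. A toy sketch with made-up numbers (not my actual figures):

```python
# Toy numbers, purely illustrative: when most app spending goes to apps
# that end up unused, the few keepers effectively carry the whole cost.
total_spent = 500.0     # everything spent on apps over the years
keepers_cost = 100.0    # sticker price of the apps actually in use

effective_multiplier = total_spent / keepers_cost
print(effective_multiplier)  # → 5.0
```

In that scenario, each app I actually use effectively cost me five times its sticker price, with the difference going to “app discovery”.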

You might blame me for my app buying behaviour, for making bad purchasing decisions. In the end, though, I almost feel like I’m getting the raw end of a lousy deal. Of course I entered that deal with some insight into the situation. I could simply stick to a few well-known apps, the way people did when Microsoft was so dominant. And I do derive some value from the “app discovery” process, as I get to think about possibilities. Yet I find a problem with the way the whole system works, in terms of finding the software I might find useful. App stores themselves are supposed to be solutions to the “app discovery” problem and it’s clear to me that they’re far from ideal. Software available at no initial cost (including shareware, demoware, and FLOSS) may not be the solution either, given the effort needed to try them. Some podcasts do provide some help, especially Mac Power Users and Systematic (both on 5by5), but they’re also “part of the problem” as they get me to buy some of the software I end up not using much.

Speaking of Systematic, host Brett Terpstra is an interesting figure, in this whole thing. He’s an app developer with at least one paid app, Marked ($4), in the Mac App Store. But he’s mostly a developer of “solutions”. His projects are quite diverse and many of the things he’s created are free to use. In fact, he’s created a number of “one-off” solutions which aren’t part of that project list but remain useful (for instance, he created a script for me to convert lists from one text format to another). Pretty much a “scratch your own itch” kind of person, he’s someone who can “develop his way out” of a number of situations. More than with many other developers, I wish I had even a tiny fraction of his skills. Yet Brett’s “Top Three” lists have contributed to making me spend more time (and money) on “app discovery” than I reasonably should.

A fairly obvious analogy can be made between app developers (like Brett) and auto mechanics. Way back when, most car drivers were also mechanics and most computer users were coders. I don’t drive but I do use computers a fair bit.

I Am Not a Coder

Yoga and Community in Contemporary North America

Last night, Matthew Remski’s chapter on yoga “culture” served as the basis for a conversation on yoga and communities. Roseanne Harvey had invited some panelists and like-minded people to join her at Andrew Gordon Middleton’s and Michael O’Brien’s Studio Flow Space in Verdun.

After the conversation, I started reading Remski’s chapter in 21st Century Yoga: Culture, Politics, and Practice, the collected essays that Roseanne has edited with Carol Horton.

Several things transpired from this conversation and, though I’m still a yoga newbie, I thought I’d post a few thoughts.

Most important, to me, is the realization that yoga may be antithetical to community development. Remski’s chapter made some of this painfully clear and I had such a feeling of recognition while reading the first part of this chapter that I almost clapped. (It’d have been weird, since I was in the métro.)

Yoga, like transcendentalism, focuses on individualism. Like Margaret Fuller with transcendentalism, I find something unsatisfying in this. While I can understand the value of therapeutic self-centredness, I can only handle it for short periods of time. As an extrovert, I need some level of social interaction, especially if I can help others. Navigating either Nietzsche or Thoreau, I quickly feel trapped in a limited world.

Which brings me to Catholicism. The topic ended up being a significant piece of the backdrop to last night’s conversation. Though I wasn’t baptized (and, therefore, not officially a member of the Catholic community), I was raised in a quickly-secularizing Catholic context (Québécois society during the Quiet Revolution). Culturally, I associate more directly with the Catholic Play Ethic (or with the Sensual Ethic) than with what Weber called the Protestant Work Ethic (PWE). Sounds like Remski may be in a similar situation. And so were some participants in last night’s conversation. Not that no Catholic subscribes to PWE or that all Protestants are caught in it. But it’s remarkable how “key scenarios” may differ along such lines. I’d rather have a picnic with Manet (or Monet) or a food fight with Gwen Stefani and the band than a success story written by Horatio Alger. Just don’t get me started about the difference between Fellini and Bergman.

What does this have to do with yoga? Precious little. Yoga is about self-improvement and introspection… until it becomes about interdependence, intersubjectivity, and projecting the mind outside the self. Only then does yoga reach a sense of community. But this sense of community isn’t local, social, cultural, spatial. It’s a sense of universal community of mind, beyond such pesky little things as families, cities, countries, and social movements. In “loving kindness” meditation, the shift from individuals to the Whole Earth doesn’t sound very gradual. Sure, “the community” can be there as a target for meditation. But the difference in kind between a neighbourhood community and, say, the community of spirit between humans and locusts affords little discussion, in such a context.

Playing the social scientist during yesterday’s convo, I couldn’t help but point out two key features of communities, from a social science perspective: sense of belonging and interdependency. Though they apply to any community, they may be especially useful in the yoga context. I don’t know enough about yoga to be sure about this, but comments made after I mentioned these two dimensions did sound like they resonated with my simple description.

Interdependency is a recent addition to my definition of community. A student in my “Cyberspace Sociology” course added it as a key feature, and it really helps to bring things in focus. One aspect of this dimension is that community isn’t necessarily something we choose. We may choose some of our neighbours but we may be affected by many community members who’d otherwise have “nothing to do with us”. Also, given issues surrounding our natural environment, the ecological principles behind communities are easy to describe: we can “do our part” but the system can still be dysfunctional if some people don’t. As both victims of climate change and perpetrators of the pollution which contributes to it, we can perceive the implications of being dependent on one another. Not to mention that interdependence is an important concept in yoga.

The sense of belonging part may afford more unpacking. Sure, hippies have reappropriated “kumbaya” as the mushy version of belonging. That one fits in the “community of spirits” model. In anthropology, we tend to focus on the “community of experience” model (if not on the “community of practice” one). To do so, some of us refer to Victor Turner’s communitas, based on the liminal phase in initiation rituals. Through this concept, we identify a space for intense relationships among equals, typical of people subjected to a difficult experience together. The concept also involves a separation from the rest of the social system.

By extension, we can think about the divisive nature of social identity: if there’s an us, there’s also a them. Quite frequently, this them is a particular group, with which the community entertains a rivalry. Montreal may be Quebec City’s “Other”, even though Montrealers care very little about the “national capital”. Fans of the Maple Leafs may also perceive Montreal as the other, although I’ve heard more anti-Boston sentiment in my youth than anything about Toronto.

Yoga’s communities are peculiar. It sounds like it may be possible to create a sense of belonging through yoga retreats and other occasions for shared experiences. Yet the embedded hierarchy of many yoga instruction models may shift the communitas away from “practice”. Bonding works remarkably well when people have a common foe (an initiator causing harm would be an appropriate figure, here). However authoritative they may be, yoga instructors may not desire this type of antagonism.

Though (as was clear from last night’s discussion) some yoga studios enter in direct competition as businesses, yoga communities may not be ideal places for impassioned rivalries. The “slippery slope” from opposition between groups to outright conflict may make peace-loving yoginis and yogis think twice about such chants and cheers.

Which isn’t to say that the yoga world lacks distinction. In fact, yoga sociology has a lot to explore. From the outside, the internal structure of the North American yogasphere is fascinating. But that structure, it sounds like, is rarely made explicit. It also sounds like it’s inward-looking, to a fairly large extent. The yogasphere includes all sorts of yoga practitioners, but it’s focused on yoga teachers and other experts, not necessarily on the local embedding of yoga practice. Yoga studios, in this model, are like havens of peace in a fast-paced world. The them group comprises a large number of people who don’t get yoga.

Personally, I’m more interested in how communities can appropriate yoga. Yes, it involves the adaptation of yoga practice, which implies some level of inauthenticity. Thanks to the association between yoga and New Age (a drone under 21st Century Yoga), yoga specialists may shy away from this type of reappropriation. Yet, empowering communities through yoga-inspired practice could be a worthy cause for yogactivists.

Yoga needs space. A key theme during yesterday’s discussion was space: studio rent, overhead, location, sense of place, neighbourhoods as markets… In North American cities, yoga doesn’t own much space, and that’s the crux of the problem.

This is where we can go back to Catholicism, where Remski started his essay on yoga “culture”. It was an underlying theme through the discussion. (Funnily enough, the conversation was structured by a key figure who invited four “evangelists” and we were eight “disciples”.)

The Catholic Church does own space. In fact, a large part of the “business model” for the Catholic clergy relates to real estate. As many of these properties are being sold, there may be opportunities for community ownership of prime space. In fact, I’m a research associate for a community organization involved in a community-based project surrounding the reappropriation of a church. Wasn’t thinking about yoga in that space, but I’m sure some other people have been. Last summer, Yoga en rouge (led by Audrey Béliveau) was happening in Parc Molson, next door to that church. And it’s clearly a grassroots community affair.

I’m not (officially) Catholic and I’m a n00b to yoga. I’m finally grokking the difficulties of developing community membership through yoga. So I’ll continue doing my yoga practice at home, by myself, away from other people’s gaze. Still feels good.

Obligatory Nexus7 Test Post

Got my Nexus 7 a while ago, but I wasn’t finding a use case for it. Thanks to a friend advising me to give SwiftKey a try, I might actually make it work.

Something I might find especially useful about SwiftKey is the fact that I can mix languages, something I do quite often on iOS but which requires constantly switching keyboards there. Since I like Android’s speech recognition, a combination of SwiftKey and speech might allow me to work efficiently.

One thing I notice quickly, though, is that switching from one system to the other requires spending some time transferring passwords. I use tools to keep my passwords secure, and they exist on several platforms, but it still takes a while.

We’ll see how things go, after a while. I do want to like Android and, contrary to popular belief, I can be pretty open-minded about such things. But I need appropriate contexts to try out different use cases. Otherwise, having people yell at me because I’m yet to be sold on Android hasn’t been helpful.

Ok, the test is enough for now. Having issues with the SwiftKey spacebar in landscape, but I’m sure I’ll get used to it. Let’s post this and edit later.
