
Back in Mac: Low End Edition

Part of the series.
(Series created on August 13, 2011, and applied retroactively…)

Today, I’m buying an old Mac mini G4 1.25GHz. Yes, a low-end computer from 2005. It’ll be great to be back in Mac after spending most of my computing life on XP for the past three years.

This mini is slower than my XP desktop (an eMachines H3070). But that doesn’t really matter for what I want to do.

There’s something to be said for computers being “fast enough.” Gamers and engineers may not grok this concept, since they always want more. But there’s a point at which computers don’t really need to be faster, for some categories of use.

Car analogies are often made in computer discussions, and this case seems fairly obvious. Some cars are still designed to “push the envelope” in terms of performance. Yet most cars, including some relatively inexpensive ones, are already fast enough to exceed the highway speed limits in North America. Even in Europe, most drivers don’t tend to push their cars to the limit. Something vaguely similar happens with computers, though there are major differences. For instance, the difference in cost between fast driving and normal driving is a factor with cars, while it isn’t much of a factor with computers. With computers, the need for cooling and battery power (on laptops) does matter but, even if those problems were completely solved, there’s a limit to the power needed for casual computer use.

This doesn’t contradict Moore’s Law directly. Chips do increase exponentially in speed-to-cost ratio. But the effects aren’t felt the same way across all uses of computers, especially if we think about casual use of desktop and laptop “personal computers.” Computer chips in other devices (from handhelds to cars to DVD players) benefit from Moore’s Law, but these are not what we usually mean by “computer” in daily use.
The common way to put it is something like “you don’t need a fast machine to do email and word processing.”
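As a toy illustration of that exponential growth (my own sketch, not part of the original post; the 18-month doubling period is one commonly cited figure, and the numbers are purely illustrative):

```python
# Toy model of Moore's Law as exponential growth in speed-to-cost ratio.
# The 18-month doubling period is an assumption for illustration only.

def relative_performance(years, doubling_period=1.5):
    """Performance multiple after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period)

# Over the three years mentioned above, a naive reading of the law
# predicts roughly a four-fold improvement:
print(relative_performance(3))  # 4.0
```

The point of the post stands regardless of the exact curve: whether the multiple is 3x or 5x, casual uses such as email and word processing don’t feel the difference.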

The main reason I needed a Mac is that I’ll be using iMovie to do simple video editing. Video editing does push the limits of a slow computer, and I’ll notice those limits very readily. But it’ll still work, and that’s quite interesting to think about in terms of the history of personal computing. A Mac mini G4 is a slug in comparison with even the current Mac mini Core 2 Duo. But it’s fast enough even for some tasks which, in historical terms, have been processor-intensive.

None of this is meant to say that the “need for speed” among computer users is completely manufactured. As computers become more powerful, some applications of computing technologies which were nearly impossible at slower speeds become easy to do. In fact, there certainly are things we don’t even imagine yet which will be easy to do in the future, thanks to improvements in computer chip performance. Those who play processor-intensive games always want faster machines and certainly feel the “need for speed.” But, it seems to me, the quest for raw speed isn’t the core of personal computing anymore.

This all reminds me of the Material Culture course I was teaching in the Fall: the Social Construction of Technology, Actor-Network Theory, the Social Shaping of Technology, etc.

So, a low end computer makes sense.

While iMovie is the main reason I decided to get a Mac at this point, I’ve been longing for Macs for three years. There were times during which I was able to use somebody else’s Mac for extended periods of time but this Mac mini G4 will be the first Mac to which I’ll have full-time access since late 2005, when my iBook G3 died.

As before, I’m happy to be “back in Mac.” I could handle life on XP, but it never felt that comfortable and I haven’t been able to adapt my workflow to the way the Windows world works. I could (and probably should) have worked on Linux, but I’m not sure it would have made my life complete either.

Some things I’m happy to go back to:

  • OmniOutliner
  • GarageBand
  • Keynote
  • Quicksilver
  • Nisus Thesaurus
  • Dictionary
  • Preview
  • Terminal
  • TextEdit
  • BibDesk
  • iCal
  • Address Book
  • Mail
  • TAMS Analyzer
  • iChat

Now I need to install some RAM in this puppy.

Google for Educational Contexts

Interesting wishlist, over at tbarrett’s classroom ICT blog.

11 Google Apps Improvements for the Classroom | ICT in my Classroom.

In a way, Google is in a unique position in terms of creating the optimal set of classroom tools. And Google teams have an interest in educational projects (as made clear by Google for Educators, Google Summer of Code, Google Apps for schools…).
What seems to be missing is integration. Maybe Google is taking its time before integrating all of its services and apps. After all, the integration of Google Notebook and Google Bookmarks was fairly recent (and we can easily imagine a further integration with Google Reader). But some of us are a bit impatient. Or too enthusiastic about tools.

Because I just skimmed through the Google Chrome comic book, I get the impression that, maybe, Google is getting ready to integrate its tools in a neat way. Not specifically for schools but, in the end, an integrated Google platform could be developed into an education-specific set of applications.
After all, apart from Google Scholar, we’re talking about pretty much the same tools as those used outside of educational contexts.

What tools am I personally thinking about? Almost everything Google does or has done could be useful in educational contexts. From Google Apps (which includes Google Docs, Gmail, Google Sites, GTalk, Gcal…) to Google Books and Google Scholar, or even Google Earth, Google Translate, and Google Maps. Not to mention OpenSocial, YouTube, Android, Blogger, SketchUp, Lively…

Not that Google’s versions of all of these tools and services are inherently more appropriate for education than those developed outside of Google. But it’s clear that Google has an edge in terms of its technology portfolio. Can’t we just imagine a new kind of Learning Management System leveraging all the neat Google technologies and using a social networking model?

Educational contexts do have some specific requirements. Despite Google’s love affair with “openness,” schools typically require protection for different types of data. Some would also say that Google’s usual advertisement-supported model may be inappropriate for learning environments. So the fact that Google Apps is ad-free for students, faculty, and staff might be a sign that Google does understand school-focused requirements.

Ok, I’m thinking out loud. But isn’t this what wishlists are about?

Enthused Tech

Yesterday, I held a WiZiQ session on the use of online tech in higher education:

Enthusing Higher Education: Getting Universities and Colleges to Play with Online Tools and Services


[slideshare id=528283&doc=enthusinghighered-1217010739916970-8&w=425]

(Full multimedia recording available here)

During the session, Nellie Deutsch shared the following link:

Diffusion of Innovations, by Everett Rogers (1995)

I haven’t read Rogers’s book, but it sounds like an accessible version of ideas which have been quite clear in Boasian disciplines (cultural anthropology, folkloristics, cultural ecology…) for a while. But, in this sometimes obsessive quest for innovation, it might in fact be useful to go back to basic ideas about the social mechanisms which can be observed in the adoption of new tools and techniques. That’s in fact the thinking behind this relatively recent blogpost of mine:

Technology Adoption and Active Reading

My emphasis during the WiZiQ session was on enthusiasm. I tend to think a lot about occasions in which thinking about the possibilities afforded by technology gets people “psyched up.” In a way, this is exactly how I can define myself as a tech enthusiast: I get easily psyched up in discussions about technology.

What’s funny is that I’m no gadget freak. I don’t care about the tool. I just love to dream up possibilities. And I sincerely think that I’m not alone. We might even guess that a similar dream-induced excitement animates true gadget freaks, who must have the latest tool. Early adopters are a big part of geek culture, and geek culture, though still small, is a niche.

Because I know I’ll keep on talking about these things on other occasions, I can “leave it at that,” for now.

RERO‘s my battle cry.


Visualizing Touch Devices in Education

Took me a while before I watched this concept video about iPhone use on campus.

Connected: The Movie – Abilene Christian University

Sure, it’s a bit campy. Sure, some features aren’t available on the iPhone yet. But the basic concepts are pretty much what I had in mind.

Among things I like in the video:

  • The very notion of student empowerment runs at the centre of it.
  • Many of the class-related applications presented show an interest in the constructivist dimensions of learning.
  • Material is made available before class. Face-to-face time is for engaging in the material, not rehashing it.
  • The technology is presented as a way to ease the bureaucratic aspects of university life, relieving a burden on students (and, presumably, on everyone else involved).
  • The “iPhone as ID” concept is simple yet powerful, in context.
  • Social networks (namely Facebook and MySpace, in the video) are embedded in the campus experience.
  • Blended learning (called “hybrid” in the video) is conceived as an option, not as an obligation.
  • Use of the technology is specifically perceived as going beyond geek culture.
  • The scenarios (use cases) are quite realistic in terms of typical campus life in the United States.
  • While “getting an iPhone” is mentioned as a perk, it’s perfectly possible to imagine technology as a levelling factor with educational institutions, lowering some costs while raising the bar for pedagogical standards.
  • The shift from “eLearning” to “mLearning” is rather obvious.
  • ACU already does iTunes U.
  • The video is released under a Creative Commons license.

Of course, there are many directions things can go from here. Not all of them are in line with the ACU dream scenario. But I’m quite hopeful, judging from some apparently random facts: that Apple may sell iPhones through universities, that Apple has plans for iPhone use on campuses, that many of the “enterprise features” of iPhone 2.0 could work in institutions of higher education, that the Steve Jobs keynote made several mentions of education, that Apple bundles the iPod touch with Macs, that the OLPC XOXO is now conceived more as a touch handheld than as a laptop, that (although delayed) Google’s Android platform can participate in the same usage scenarios, and that browser-based computing apparently has a bright future.

The Need for Social Science in Social Web/Marketing/Media (Draft)

[Been sitting on this one for a little while. Better RERO it, I guess.]

Sticking My Neck Out (Executive Summary)

I think that participants in many technology-enthusiastic movements which carry the term “social” would do well to learn some social science. Furthermore, my guess is that ethnographic disciplines are very well-suited to the task of teaching participants in these movements something about social groups.


Despite the potentially provocative title and my explicitly stating a position, I mostly wish to think out loud about different things which have been on my mind for a while.

I’m not an “expert” in this field. I’m just a social scientist and an ethnographer who has been observing a lot of things online. I do know that there are many experts who have written many great books about similar issues. What I’m saying here might not seem new. But I’m using my blog as a way to at least write down some of the things I have in mind and, hopefully, discuss these issues thoughtfully with people who care.

Also, this will not be a guide on “what to do to be social-savvy.” Books, seminars, and workshops on this specific topic abound. But my attitude is that every situation needs to be treated in its own context, that cookie-cutter solutions often fail. So I would advise people interested in this set of issues to train themselves in at least a little bit of social science, even if much of the content of the training material seems irrelevant. Discuss things with a social scientist, hire a social scientist in your business, take a course in social science, and don’t focus on advice but on the broad picture. Really.


Though they are all different, enthusiastic participants in “social web,” “social marketing,” “social media,” and other “social things online” do have some commonalities. At the risk of angering some of them, I’m lumping them all together as “social * enthusiasts.” One thing I like about the term “enthusiast” is that it can apply to both professional and amateurs, to geeks and dabblers, to full-timers and part-timers. My target isn’t a specific group of people. I just observed different things in different contexts.


Shameless Self-Promotion

A few links from my own blog, for context (and for easier retrieval):

Shameless Cross-Promotion

A few links from other blogs, to hopefully expand context (and for easier retrieval):

Some raw notes

  • Insight
  • Cluefulness
  • Openness
  • Freedom
  • Transparency
  • Unintended uses
  • Constructivism
  • Empowerment
  • Disruptive technology
  • Innovation
  • Creative thinking
  • Critical thinking
  • Technology adoption
  • Early adopters
  • Late adopters
  • Forced adoption
  • Attitudes to change
  • Conservatism
  • Luddites
  • Activism
  • Impatience
  • Windmills and shelters
  • Niche thinking
  • Geek culture
  • Groupthink
  • Idea horizon
  • Intersubjectivity
  • Influence
  • Sphere of influence
  • Influence network
  • Social butterfly effect
  • Cog in a wheel
  • Social networks
  • Acephalous groups
  • Ego-based groups
  • Non-hierarchical groups
  • Mutual influences
  • Network effects
  • Risk-taking
  • Low-stakes
  • Trial-and-error
  • Transparency
  • Ethnography
  • Epidemiology of ideas
  • Neural networks
  • Cognition and communication
  • Wilson and Sperber
  • Relevance
  • Global
  • Glocal
  • Regional
  • City-State
  • Fluidity
  • Consensus culture
  • Organic relationships
  • Establishing rapport
  • Buzzwords
  • Viral
  • Social
  • Meme
  • Memetic marketplace
  • Meta
  • Target audience

Let’s Give This a Try

The Internet is, simply, a network. Sure, technically it’s a meta-network, a network of networks. But that is pretty much irrelevant, in social terms, as most networks may be analyzed at different levels as containing smaller networks or being parts of larger networks. The fact remains that the ‘Net is pretty easy to understand, sociologically. It’s nothing new, it’s just a textbook example of something social scientists have been looking at for a good long time.

Though the Internet mostly connects computers (in many shapes or forms, many of them being “devices” more than the typical “personal computer”), the impact of the Internet is through human actions, behaviours, thoughts, and feelings. Sure, we can talk ad nauseam about the technical aspects of the Internet, but these topics have been covered a lot in the last fifteen years of intense Internet growth and a lot of people seem to be ready to look at other dimensions.

The category of “people who are online” has expanded greatly, in different steps. Here, Martin Lessard’s description of the Internet’s Six Cultures (Les 6 cultures d’Internet) is really worth a read. Martin’s post is in French but we also had a blog discussion in English, about it. Not only are there more people online but those “people who are online” have become much more diverse in several respects. At the same time, there are clear patterns on who “online people” are and there are clear differences in uses of the Internet.

Groups of human beings are the very basic object of social science. Diversity in human groups is the very basis for ethnography. Ethnography is simply the description of (“writing about”) human groups conceived as diverse (“peoples”). As simple as ethnography can be, it leads to a very specific approach to society which is very compatible with all sorts of things relevant to “social * enthusiasts” on- and offline.

While there are many things online which may be described as “media,” comparing the Internet to “The Mass Media” is often the best way to miss “what the Internet is all about.” Sure, the Internet isn’t about anything (apart from connecting computers which, in turn, connect human beings). But to get actual insight into the ’Net, one probably needs to free oneself of notions relating to “The Mass Media.” Put bluntly, McLuhan was probably a very interesting person and some of his ideas remain intriguing, but fallacies abound in his work and the best thing to do with his ideas is to go beyond them.

One of my favourite examples of the overuse of “media”-based concepts is the issue of influence. In blogging, podcasting, or selling, the notion often is that, on the Internet as in offline life, “some key individuals or outlets are influential and these are the people by whom or channels through which ideas are disseminated.” Hence all the Technorati rankings and other “viewer statistics.” Old techniques and ideas from the times of radio and television expansion are used because it’s easier to think through advertising models than through radically new models. This is, in fact, when I tend to bring back my explanation of the “social butterfly effect”: quite frequently, “influence” online isn’t channelled through specific individuals or outlets, but even when it is, those people are influential by virtue of connecting to diverse groups, not by the number of people they know. There are ways to analyze those connections, but “measuring impact” eventually misses the point.
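The “connecting diverse groups, not counting contacts” point can be made concrete with a toy network (all names, groups, and ties below are invented for illustration; this is my sketch, not the author’s analysis):

```python
# A person with many ties inside one group versus a person with
# fewer ties that bridge several distinct groups.
friends = {
    "hub":    ["a1", "a2", "a3", "a4", "a5"],  # five ties, all in group A
    "bridge": ["a1", "b1", "c1"],              # three ties, three groups
}
group_of = {"a1": "A", "a2": "A", "a3": "A", "a4": "A", "a5": "A",
            "b1": "B", "c1": "C"}

def degree(person):
    """Raw number of contacts (the 'viewer statistics' view of influence)."""
    return len(friends[person])

def groups_reached(person):
    """Distinct groups a person's contacts belong to (the bridging view)."""
    return {group_of[f] for f in friends[person]}

print(degree("hub"), len(groups_reached("hub")))        # 5 1
print(degree("bridge"), len(groups_reached("bridge")))  # 3 3
```

On a raw-count measure the “hub” wins; on the bridging measure, the lower-degree “bridge” is the one through whom an idea can leak from group A into groups B and C.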

Yes, there is an obvious “qual. vs. quant.” angle, here. A major distinction between non-ethnographic and ethnographic disciplines in social sciences is that non-ethnographic disciplines tend to be overly constrained by “quantitative analysis.” Ultimately, any analysis is “qualitative” but “quantitative methods” are a very small and often limiting subset of the possible research and analysis methods available. Hence the constriction and what some ethnographers may describe as “myopia” on the part of non-ethnographers.

Gone Viral

The term “viral” is used rather frequently by “social * enthusiasts” online. I happen to think that it’s a fairly fitting term, even though it’s used more by extension than by literal meaning. To me, it relates rather directly to Dan Sperber’s “epidemiological” treatment of culture (see Explaining Culture). Sperber’s approach may be perceived as resembling Dawkins’s well-known “selfish gene” ideas, made popular by different online observers, but with connections between genetics and ideas which I perceive to be (to use simple semiotic/semiological concepts) more “motivated” than “arbitrary.” While Sperber could hardly be described as an ethnographer, his anthropological connections still make some of his work compatible with ethnographic perspectives.

Analysis of the spread of ideas does correspond fairly closely with the spread of viruses, especially given the nature of the contacts which make transmission possible. One needs not do much to spread a virus or an idea. This virus or idea may find “fertile soil” in a given social context, depending on a number of factors. Despite the disadvantages of extending analogies and core metaphors too far, the type of ecosystem/epidemiology analysis of social systems embedded in uses of the term “viral” does seem to help some people make sense of different things which happen online. In “viral marketing,” the informal, invisible, unexpected spread of recognition through word of mouth does relate somewhat to the spread of a virus. Moreover, the metaphor of “viral marketing” is useful in thinking about the lack of control the professional marketer may have over how her/his product is perceived. In this context, the term “viral” seems useful.
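A minimal sketch of that epidemiological picture (mine, not Sperber’s or the post’s; the contact network is invented, and transmission is treated as certain, which reduces the spread to simple reachability along contacts):

```python
from collections import deque

# Hypothetical word-of-mouth contacts; names are made up.
contacts = {
    "ana": ["ben", "cai"],
    "ben": ["ana", "dev"],
    "cai": ["ana"],
    "dev": ["ben"],
    "eve": ["fay"],  # a separate cluster the idea never reaches
    "fay": ["eve"],
}

def spread(seed):
    """Everyone eventually exposed to an idea started by `seed`."""
    exposed, queue = {seed}, deque([seed])
    while queue:
        person = queue.popleft()
        for contact in contacts[person]:
            if contact not in exposed:
                exposed.add(contact)
                queue.append(contact)
    return exposed

print(sorted(spread("ana")))  # ['ana', 'ben', 'cai', 'dev']
```

Even this crude model shows the “fertile soil” point: the idea’s reach is bounded by the structure of contacts, not by the sender’s effort, and a disconnected cluster (“eve” and “fay”) is simply never exposed.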

The Social

While “viral” seems appropriate, the even more simple “social” often seems inappropriately used. It’s not a ranty attitude which makes me comment negatively on the use of the term “social.” In fact, I don’t really care about the use of the term itself. But I do notice that use of the term often obfuscates what is the obvious social character of the Internet.

To a social scientist, anything which involves groups is by definition “social.” Of course, some groups and individuals are more gregarious than others, some people are taken to be very sociable, and some contexts are more conducive to heightened social interactions. But social interactions happen in any context.
As an example I used (in French) in reply to this blog post, something as common as standing in line at a grocery store is representative of social behaviour and can be analyzed in social terms. Any Web page accessed by anyone is “social” in the sense that it establishes some link, however tenuous and asymmetric, between at least two individuals (the person who created the page and the person who accessed it). Sure, it sounds like the minimal definition of communication (sender, medium/message, receiver). But what most people who talk about communication seem to forget (unlike Jakobson) is that all communication is social.

Sure, putting a comment form on a Web page facilitates a basic social interaction, making the page “more social” in the sense of making explicit social interaction easier. And, of course, adding features which let people share data with their personal contacts is a step above the comment form in terms of making certain types of social interaction straightforward and easy. But, contrary to what Google Friend Connect implies, adding those features doesn’t suddenly make a site social. The site itself isn’t really social and, assuming some people visited it, there was already a social dimension to it. I’m not nitpicking on word use. I’m saying that using “social” in this way may blind some people to the social dimensions of the Internet. And the consequences of overlooking how social the ’Net is can be pretty harsh, in some cases.

Something similar may be said about the “Social Web,” one of the many definitions of “Web 2.0” which is used in some contexts (mostly, the cynic would say, to make some tool appear “new and improved”). The Web as a whole was “social” by definition. Granted, it lacked the ease of social interaction afforded by such venerable Internet classics as Usenet and email. But it was already making some modes of social interaction easier to perceive. No, this isn’t about “it’s all been done.” It’s about being oblivious to the social potential of tools which already existed. True, the period in Internet history known as “Web 2.0” (and the onset of the Internet’s sixth culture) may be associated with new social phenomena. But there is little evidence that the association is causal, that new online tools and services created a new reality which suddenly made it possible for people to become social online. This is one reason I like Martin Lessard’s post so much. Instead of postulating the existence of a brand new phenomenon, he talks about the conditions for some changes in both Internet use and the form the Web has taken.

Again, this isn’t about terminology per se. Substitute “friendly” for “social” and similar issues might come up (friendship and friendliness being disconnected from the social processes which underlie them).

Adoptive Parents

Many “social * enthusiasts” are interested in “adoption.” They want their “things” to be adopted. This is especially visible among marketers but even in social media there’s an issue of “getting people on board.” And some people, especially those without social science training, seem to be looking for a recipe.

Problem is, there probably is no such thing as a recipe for technology adoption.

Sure, some marketing practices from the offline world may work online. Sometimes, adapting a strategy from the material world to the Internet is very simple, and the Internet version may be more effective than the offline version. But that doesn’t mean there is such a thing as a recipe. It’s a matter of having some people who “have a knack for this sort of thing” (say, based on sensitivity to what goes on online), or of pure luck. Or it’s a matter of measuring success in different ways. But it isn’t based on a recipe. Especially not in the Internet sphere, which is changing so rapidly (despite some remarkably stable features).

Again, I’m partial to contextual approaches (“fully-customized solutions,” if you really must). Not just because I think there are people who can do this work very efficiently. But because I observe that “recipes” do little more than sell “best-selling books” and other items.

So, what can we, as social scientists, say about “adoption?” That technology is adopted based on the perceived fit between the tools and people’s needs/wants/goals/preferences. Not the simple “the tool will be adopted if there’s a need.” But a perception that there might be a fit between an amorphous set of social actors (people) and some well-defined tools (“technologies”). Recognizing this fit is extremely difficult and forcing it is extremely expensive (not to mention completely unsustainable). But social scientists do help in finding ways to adapt tools to different social situations.

Especially ethnographers. Because instead of surveys and focus groups, we challenge assumptions about what “must” fit. Our heads and books are full of examples which sound, in retrospect, as common sense but which had stumped major corporations with huge budgets. (Ask me about McDonald’s in Brazil or browse a cultural anthropology textbook, for more information.)

Recently, while reading about issues surrounding the OLPC’s original XO computer, I was glad to read the following:

John Heskett once said that the critical difference between invention and innovation was its mass adoption by users. (Niti Bhan, The emperor has designer clothes)

Not that this is a new idea, for social scientists. But I was glad that the social dimension of technology adoption was recognized.

In marketing and design spheres especially, people often think of innovation as individualized. While some individuals are particularly adept at leading inventions to mass adoption (Steve Jobs being a textbook example), “adoption comes from the people.” Yes, groups of people may be manipulated to adopt something “despite themselves.” But that kind of forced adoption is still dependent on a broad acceptance, by “the people,” of even the basic forms of marketing. This is very similar to the simplified version of the concept of “hegemony,” so common in both social sciences and humanities. In a hegemony (as opposed to a totalitarian regime), no coercion is necessary because the logic of the system has been internalized by people who are affected by it. Simple, but effective.

In online culture, adept marketers are highly valued. But I’m quite convinced that pre-online marketers already knew that they had to “learn society first.” One thing with almost anything happening online is that “the society” is boundless. Country boundaries usually make very little sense and the social rules of every local group will leak into even the simplest occasion. Some people seem to assume that the end result is a cultural homogenization, thereby not necessitating any adaptation besides the move from “brick and mortar” to online. Others (or the same people, actually) want to protect their “business models” by restricting tools or services based on country boundaries. In my mind, both attitudes are ineffective and misleading.

Sometimes I Feel Like a Motherless Child

I think the Cluetrain Manifesto can somehow be summarized through concepts of freedom, openness, and transparency. These are all very obvious (in French, the book title is something close to “the evident truths manifesto”). They’re also all very social.

Social scientists often become activists based on these concepts. And among social scientists, many of us are enthusiastic about the social changes which are happening in parallel with Internet growth. Not because of technology. But because of empowerment. People are using the Internet in their own ways, the one key feature of the Internet being its lack of centralization. While the lack of centralized control may be perceived as a “bad thing” by some (social scientists or not), there’s little argument that the ‘Net as a whole is out of the control of specific corporations or governments (despite the large degree of consolidation which has happened offline and online).

Especially in the United States, “freedom” is conceived as a basic right. But it’s also a basic concept in social analysis. As some put it: “somebody’s rights end where another’s begin.” But social scientists have a whole apparatus to deal with all the nuances and subtleties which are bound to come from any situation where people’s rights (freedom) may clash or even simply be interpreted differently. Again, not that social scientists have easy, ready-made answers on these issues. But we’re used to dealing with them. We don’t interpret freedom as a given.

Transparency is fairly simple and relates directly to how people manage information itself (instead of knowledge or insight). Radical transparency is giving as much information as possible to those who may need it. Everybody has a “right to learn” a lot of things about a given institution (instead of “right to know”), when that institution has a social impact. Canada’s Access to Information Act is quite representative of the move to transparency and use of this act has accompanied changes in the ways government officials need to behave to adapt to a relatively new reality.

Openness is an interesting topic, especially in the context of the so-called “Open Source” movement. Radical openness implies participation by outsiders, at least in the form of verbal feedback. The cluefulness of “opening yourself to your users” is made obvious by the successes of institutions which have at least portrayed themselves as open. What’s unfortunate, in my mind, is that many institutions now attempt to position themselves on the open end of the “closed/proprietary to open/responsive” scale without doing much work to really open themselves up.


Mottoes, slogans, and maxims like “build it and they will come,” “there’s a sucker born every minute,” “let them eat cake,” and “give them what they want” all fail to grasp the basic reality of social life: “they” and “we” are linked. We’re all different and we’re all connected. We all take part in groups. These groups are all associated with one another. We can’t simply behave the same way with everyone. Identity has two parts: a sense of belonging (to an “in-group”) and a sense of distinction (from an “out-group”). “Us/Them.”

Within the “in-group,” if there isn’t any obvious hierarchy, the sense of belonging can take the form Victor Turner called “communitas,” which happens in situations giving real meaning to the notion of “community.” “Community of experience,” “community of practice.” Eckert and Wittgenstein brought to online networks. In a community, contacts aren’t always harmonious. But people feel they fully belong. A network isn’t the same thing as a community.

The World Is My Oyster

Despite the so-called “Digital Divide” (or, more precisely, the maintenance online of global inequalities), the ’Net is truly “Global.” So is the phone, now that cellphones are accomplishing the “leapfrog effect.” But this one Internet we have (i.e., not Internet2 or other such specialized meta-networks) reaches everywhere through a single set of compatible connections. The need for cultural awareness is increased, not alleviated, by online activities.

Release Early, Release Often

Among friends, we call it RERO.

The RERO principle is a multiple-pass system. Instead of waiting for the right moment to release a “perfect product” (say, a blogpost!), the “work in progress” is provided widely, garnering feedback which will be integrated in future “product versions.” The RERO approach can be unnerving to “product developers,” but it has proved its value in online-savvy contexts.

I use “product” in a broad sense because the principle applies to diverse contexts. Furthermore, the RERO principle helps shift the focus from “product,” back into “process.”
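The multiple-pass idea can even be sketched in a few lines of toy code. This is purely illustrative: the `rero` function and its names are my own invention, not any real release tool.

```python
# Toy sketch of RERO as a multiple-pass process: each pass is released
# as-is, and feedback is folded into the next version.
def rero(draft, feedback_rounds):
    releases = []
    version = draft
    for n, feedback in enumerate(feedback_rounds, start=1):
        releases.append((f"v0.{n}", version))   # release early, release often
        version = f"{version} (+ {feedback})"   # integrate feedback afterwards
    releases.append((f"v0.{len(feedback_rounds) + 1}", version))
    return releases

history = rero("rough blogpost", ["typo fixes", "reader comments"])
# Three imperfect releases instead of one “perfect product.”
```

The point of the sketch is that the “product” only exists as a series of passes, which is exactly the shift from “product” back into “process.”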

The RERO principle may imply some “emotional” or “psychological” dimensions, such as humility and the acceptance of failure. At some level, differences between RERO and “trial-and-error” methods of development appear insignificant. Those who create something should not expect the first try to be successful and should recognize mistakes to improve on the creative process and product. This is similar to the difference between “rehearsal” (low-stakes experimentation with a process) and “performance” (with responsibility, by the performer, for evaluation by an audience).

Though applications of the early/often concept to social domains are mostly satirical, there is a social dimension to the RERO principle. Releasing a “product” implies a group, a social context.

The partial and frequent “release” of work to “the public” relates directly to openness and transparency. Frequent releases create a “relationship” with human beings. Sure, many of these are “Early Adopters” who are already overrepresented. But the rapport established between an institution and people (users/clients/customers/patrons…) can be transferred more broadly.

Releasing early seems to shift the limit between rehearsal and performance. Instead of making mistakes on your own, you make them publicly, and your success is directly evaluated. Yet a somewhat reverse effect can occur: evaluation of the end-result becomes a lower-stakes rating at different parts of the project because expectations have shifted to the “lower” end. This is probably the logic behind Google’s much discussed propensity to call all its products “beta.”

While the RERO principle does imply a certain openness, the expectation that each release might integrate all the feedback “users” have given is not fundamental to releasing early and frequently. The expectation is set by a specific social relationship between “developers” and “users.” In geek culture, especially when users are knowledgeable enough about technology to make elaborate wishlists, the expectation to respond to user demand can be quite strong, so much so that developers may perceive a sense of entitlement on the part of “users” and grow some resentment out of the situation. “If you don’t like it, make it yourself.” Such a situation is rather common in FLOSS development: since “users” have access to the source code, they may be expected to contribute to the development project. When “users” not only fail to fulfil expectations set by open development but even have the gumption to ask developers to respond to demands, conflicts may easily occur. And conflicts are among the things which social scientists study most frequently.

Putting the “Capital” Back into “Social Capital”

In the past several years, “monetization” (transforming ideas into currency) has become one of the major foci of anything happening online. Anything which can be a source of profit generates an immediate (and temporary) “buzz.” The value of anything online is measured through typical currency-based economics. The relatively recent movement toward “social” whatever is not only representative of this tendency, but might be seen as its climax: nowadays, even social ties can be sold directly, instead of being part of a secondary transaction. As some people say, “The relationship is the currency” (or “the commodity,” or “the means to an end”). Fair enough, especially if these people understand what social relationships entail. But still strange, in context, to see people “selling their friends,” sometimes in a rather literal sense, when social relationships are conceived as valuable. After all, “selling the friend” transforms that relationship, diminishes its value. Ah, well, maybe everyone involved is just cynical. Still, even their cynicism contributes to the system. But I’m not judging. Really, I’m not. I’m just wondering…
Anyhoo, the “What are you selling, anyway?” question makes as much sense online as it does with telemarketers and other greed-focused strangers (maybe “calls” are always “cold,” online). It’s just that the answer isn’t always so clear when the “business model” revolves around creating, then breaking, a set of social expectations.
Me? I don’t sell anything. Really, not even my ideas or my sense of self. I’m just not good at selling. Oh, I do promote myself and I do accumulate social capital. As social butterflies are wont to do. The difference is, in the case of social butterflies such as myself, no money is exchanged and the social relationships are, hopefully, intact. This is not to say that friends never help me or never receive my help in a currency-friendly context. It mostly means that, in our cases, the relationships are conceived as their own rewards.
I’m consciously not taking the moral high ground, here, though some people may easily perceive this position as the morally superior one. I’m not even talking about a position. Just about an attitude to society and to social relationships. If you will, it’s a type of ethnographic observation from an insider’s perspective.

Makes sense?

Handhelds for the Rest of Us?

Ok, it probably shouldn’t become part of my habits but this is another repost of a blog comment motivated by the OLPC XO.

This time, it’s a reply to Niti Bhan’s enthusiastic blogpost about the eeePC: Perspective 2.0: The little eeePC that could has become the real “iPod” of personal computing

This time, I’m heavily editing my comments. So it’s less of a repost than a new blogpost. In some ways, it’s partly a follow-up to my “Ultimate Handheld Device” post (which ended up focusing on spatial positioning).

Given the OLPC context, the angle here is, hopefully, a culturally aware version of “a handheld device for the rest of us.”

Here goes…

I think there’s room in the World for a device category more similar to handhelds than to subnotebooks. Let’s call it “handhelds for the rest of us” (HftRoU). Something between a cellphone, a portable gaming console, a portable media player, and a personal digital assistant. Handheld devices exist which cover most of these features/applications, but I’m mostly using this categorization to think about the future of handhelds in a globalised World.

The “new” device category could serve as the inspiration for a follow-up to the OLPC project. One thing about which I keep thinking, in relation to the “OLPC” project, is that the ‘L’ part was too restrictive. Sure, laptops can be great tools for students, especially if these students are used to (or need to be trained in) working with and typing long-form text. But I don’t think that laptops represent the most “disruptive technology” around. If we think about their global penetration and widespread impact, cellphones are much closer to the leapfrog effect about which we all have been writing.

So, why not just talk about a cellphone or smartphone? Well, I’m trying to think both more broadly and more specifically. Cellphones are already helping people empower themselves. The next step might be to add selected features which bring them closer to the OLPC dream. Also, since cellphones are widely distributed already, I think it’s important to think about devices which may complement cellphones. I have some ideas about non-handheld tools which could make cellphones even more relevant in people’s lives. But they will have to wait for another blogpost.

So, to put it simply, “handhelds for the rest of us” (HftRoU) are somewhere between the OLPC XO-1 and Apple’s original iPhone, in terms of features. In terms of prices, I dream that it could be closer to that of basic cellphones which are in the hands of so many people across the globe. I don’t know what that price may be but I heard things which sounded like a third of the price the OLPC originally had in mind (so, a sixth of the current price). Sure, it may take a while before such a low cost can be reached. But I actually don’t think we’re in a hurry.
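As a quick sanity check on those fractions: if a third of the original target equals a sixth of the current price, the current price must be about twice the original target. The $100 figure below is an assumption on my part (the widely reported “$100 laptop” target), not something stated in this post.

```python
# Back-of-the-envelope check, assuming the oft-cited US$100 OLPC target.
original_target = 100.0               # USD, assumed OLPC design target
current_price = 2 * original_target   # implied by the two fractions: ~200 USD
dream_price = original_target / 3     # "a third of the original" ≈ 33 USD

# "a third of the original" and "a sixth of the current" should coincide:
assert abs(dream_price - current_price / 6) < 1e-9
```

Whatever the exact dollar amounts, the internal logic only holds if the current price is roughly double the original target.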

I guess I’m just thinking of the electronics (and global) version of the Ford T. With more solidarity in mind. And cultural awareness.

Google’s Open Handset Alliance (OHA) may produce something more appropriate to “global contexts” than Apple’s iPhone. In comparison with Apple’s iPhone, devices developed by the OHA could be better adapted to the cultural, climatic, and economic conditions of those people who don’t have easy access to the kind of computers “we” take for granted. At the very least, the OHA has good representation on at least three continents and, like the old OLPC project, the OHA is officially dedicated to openness.

I actually care fairly little about which teams will develop devices in this category. In fact, I hope that new manufacturers will spring up in some local communities and that major manufacturers will pay attention.

I don’t care about who does it, I’m mostly interested in what the devices will make possible. Learning, broadly speaking. Communicating, in different ways. Empowering themselves, generally.

One thing I have in mind, and which deviates from the OLPC mission, is that there should be appropriate handheld devices for all age-ranges. I do understand the focus on 6-12 year-olds the old OLPC had. But I don’t think it’s very productive to only sell devices to that age-range. Especially not in those parts of the world (i.e., almost anywhere) where generation gaps don’t imply that children are isolated from adults. In fact, as an anthropologist, I react rather strongly to the thought that children should be the exclusive target of a project meant to empower people. But I digress, as always.

I don’t tend to be a feature-freak but I have been thinking about the main features the prototypical device in this category should have. It’s not a rigid set of guidelines. It’s just a way to think out loud about technology’s integration in human life.

The OS and GUI, which seem like major advantages of the eeePC, could certainly be of the mobile/handheld type instead of the desktop/laptop type. The usual suspects: Symbian, NewtonOS, Android, Zune, PalmOS, Cocoa Touch, embedded Linux, PlayStation Portable, Windows CE, and Nintendo DS. At a certain level of abstraction, there are so many commonalities between all of these that it doesn’t seem very efficient to invent a completely new GUI/OS “paradigm,” like OLPC’s Sugar was apparently trying to do.

The HftRoU require some form of networking or wireless connectivity feature. WiFi (802.11*), GSM, UMTS, WiMAX, Bluetooth… Doesn’t need to be extremely fast, but it should be flexible and it absolutely cannot be cost-prohibitive. IP might make much more sense than, say, SMS/MMS, but a lot can be done with any kind of data transmission between devices. XO-style mesh networking could be a very interesting option. As VoIP has proven, voice can efficiently be transmitted as data so “voice networks” aren’t necessary.

My sense is that a multitouch interface with an accelerometer would be extremely effective. Yes, I’m thinking of Apple’s Touch devices and MacBooks. As well as about the Microsoft Surface, and Jeff Han’s Perceptive Pixel. One thing all of these have shown is how “intuitive” it can be to interact with a machine using gestures. Haptic feedback could also be useful but I’m not convinced it’s “there yet.”

I’m really not sure a keyboard is very important. In fact, I think that keyboard-focused laptops and tablets are the wrong basis for thinking about “handhelds for the rest of us.” Bear in mind that I’m not thinking about devices for would-be office workers or even programmers. I’m thinking about the broadest user base you can imagine. “The Rest of Us” in the sense of, those not already using computers very directly. And that user base isn’t that invested in (or committed to) touch-typing. Even people who are very literate don’t tend to be extremely efficient typists. If we think about global literacy rates, typing might be one thing which needs to be leapfrogged. After all, a cellphone keypad can be quite effective in some hands and there are several other ways to input text, especially if typing isn’t too ingrained in you. Furthermore, keyboards aren’t that convenient in multilingual contexts (i.e., in most parts of the world). I say: avoid the keyboard altogether, make it available as an option, or use a virtual one. People will complain. But it’s a necessary step.

If the device is to be used for voice communication, some audio support is absolutely required. Even if voice communication isn’t part of it (and I’m not completely convinced it’s the one required feature), audio is very useful, IMHO (I’m an aural guy). In some parts of the world, speakers are much favoured over headphones or headsets. But I personally wish that at least some HftRoU could have external audio inputs/outputs. Maybe through USB or an iPod-style connector.

A voice interface would be fabulous, but there still seem to be technical issues with both speech recognition and speech synthesis. I used to work in that field and I keep dreaming, like Bill Gates and others do, that speech will finally take the world by storm. But maybe the time still hasn’t come.

It’s hard to tell what size the screen should be. There probably needs to be a range of devices with varying screen sizes. Apple’s Touch devices prove that you don’t need a very large screen to have an immersive experience. Maybe some HftRoU screens should in fact be larger than that of an iPhone or iPod touch. Especially if people are to read or write long-form text on them. Maybe the eeePC had it right. Especially if the devices’ form factor is more like a big handheld than like a small subnotebook (i.e., slimmer than an eeePC). One reason form factor matters, in my mind, is that it could make the devices “disappear.” That, and the difference between having a device on you (in your pocket) and carrying a bag with a device in it. Form factor was a big issue with my Newton MessagePad 130. As the OLPC XO showed, cost and power consumption are also important issues regarding screen size. I’d vote for a range of screens between 3.5 inch (iPhone) and 8.9 inch (eeePC 900) with a rather high resolution. A multitouch version of the XO’s screen could be a major contribution.

In terms of both audio and screen features, some consideration should be given to adaptive technologies. Most of us take for granted that “almost anyone” can hear and see. We usually don’t perceive major issues in the fact that “personal computing” typically focuses on visual and auditory stimuli. But if these devices truly are “for the rest of us,” they could help empower visually- or hearing-impaired individuals, who are often marginalized. This is especially relevant in the logic of humanitarianism.

HftRoU need as much autonomy from a power source as possible. Both in terms of the number of hours devices can be operated without needing to be connected to a power source and in terms of flexibility in power sources. Power management is a major technological issue with portable, handheld, and mobile devices. Engineers are hard at work, trying to find as many solutions to this issue as they can. This was, obviously, a major area of research for the OLPC. But I’m not even sure the solutions they have found are the only relevant ones for what I imagine HftRoU to be.

GPS could have interesting uses, but doesn’t seem very cost-effective. Other “wireless positioning systems” (à la Skyhook) might represent a more rational option. Still, I think positioning systems are one of the next big things. Not only for navigation or for location-based targeting. But for a set of “unintended uses” which are the hallmark of truly disruptive technology. I still remember an article (probably in the venerable Wired magazine) about the use of GPS/GIS for research into climate change. Such “unintended uses” are, in my mind, much closer to the constructionist ideal than the OLPC XO’s unified design can ever get.

Though a camera seems to be a given in any portable or mobile device (even the OLPC XO has one), I’m not yet that clear on how important it really is. Sure, people like taking pictures or filming things. Yes, pictures taken through cellphones have had a lasting impact on social and cultural events. But I still get the feeling that the main reason cameras are included on so many devices is for impulse buying, not as a feature to be used so frequently by all users. Also, standalone cameras probably have a rather high level of penetration already and it might be best not to duplicate this type of feature. But, of course, a camera could easily be a differentiating factor between two devices in the same category. I don’t think that cameras should be absent from HftRoU. I just think it’s possible to have “killer apps” without cameras. Again, I’m biased.

Apart from networking/connectivity uses, Bluetooth seems like a luxury. Sure, it can be neat. But I don’t feel it adds that much functionality to HftRoU. Yet again, I could be proven wrong. Especially if networking and other inter-device communication are combined. At some abstract level, there isn’t that much difference between exchanging data across a network and controlling a device with another device.

Yes, I do realize I pretty much described an iPod touch (or an iPhone without camera, Bluetooth, or cellphone fees). I’ve been lusting over an iPod touch since September and it does colour my approach. I sincerely think the iPod touch could serve as an inspiration for a new device type. But, again, I care very little about which company makes that device. I don’t even care about how open the operating system is.

As long as our minds are open.

Touch Thoughts: Apple's Handheld Strategy

I’m still on the RDF.
Apple’s March 6, 2008 event was about enterprise and development support for its iPhone and iPod touch lines of handheld devices. Lots to think about.

(For convenience’s sake, I’ll lump together the iPod touch and the iPhone under the name “Touch,” which seems consistent with Apple’s “Cocoa Touch.”)

Been reading a fair bit about this event. Interesting reactions across the board.

My own thoughts on the whole thing.
I appreciate the fact that Phil Schiller began the “enterprise” section of the event with comments about a university. Though universities need not be run like profit-hungry corporations, linking Apple’s long-standing educational focus with its newly invigorated enterprise focus makes sense. And I had a brief drift-off moment as I was thinking about Touch products in educational contexts.

I’m surprised at how enthusiastic I get about the enterprise features. Suddenly, I can see Microsoft’s Exchange make sense.

I get the clear impression that even more things will fall into place at the end of June than Apple has announced. Possibly new Touch models or lines. Probably the famous 3G iPhone. Apple-released apps. Renewed emphasis on server technology (XServe, Mac OS X Server, XSan…). New home WiFi products (AirPort, Time Capsule, Apple TV…). New partnerships. Cool VC-funded startups. New features on the less aptly named “iTunes” store.

Though it was obvious already, the accelerometer is an important feature. It seems especially well-adapted to games, and casual gamers like myself are likely to enjoy the games this feature makes possible. It can also lead to very interesting applications. In fact, the “Etch and Sketch” demo was rather convincing as a display of some core Touch features. These are exactly the features which help sell products.
Actually, I enjoyed the “wow factor” of the event’s demos. I’m convinced that it will energize developers and administrators, whether or not they plan on using Touch products. Some components of Apple’s Touch strategy are exciting enough that the more problematic aspects of this strategy may matter a bit less. Those of us dreaming about Android, OpenMoko, or even a revived NewtonOS can still find things to get inspired by in Apple’s roadmap.

What’s to come, apart from what was announced? No idea. But I do daydream about all of this.
I’m especially interested in the idea of Apple Touch as “mainstream, WiFi, mobile platform.” There’s a lot of potential for Apple-designed, WiFi-enabled handhelds. Whether or not they include a cellphone.
At this point, Apple only makes five models of Touch products: three iPod touches and two iPhones. Flash memory is the main differentiating factor within a line. It makes it relatively easy to decide which device to get but some product diversity could be interesting. While some people expect/hope that Apple will release radically new form factors for Touch devices (e.g., a tablet subnotebook), it’s quite likely that other features will help distinguish Apple’s Touch hardware.
Among features I’d like to see added through software, through add-ons, or included in a Touch product? A number of things, some alluded to in the “categories” for this post. Some of these I had already posted about.

  • Quality audio recording (to make it the ideal fieldwork audio tool).
  • eBook support (to compete with Amazon’s Kindle).
  • Voice support (including continuous dictation, voice interface…).
  • Enhanced support for podcasting (interacting with podcasts, sending audio/video responses…).
  • Video conferencing (been thinking about this for a while).
  • GPS (location will be big).
  • Mesh networking (a neat feature of OLPC’s XO).
  • Mobile WiMAX (unlikely, but it could be neat).
  • Battery pack (especially for long trips in remote regions).
  • Add-on flash memory (unlikely, but it could be useful, especially for backup).
  • Offline storage of online content (likely, but worth noting).
  • Inexpensive model (especially for “emerging markets”).
  • Access to 3G data networks without cellular “voice plan” (unlikely, but worth a shot).
  • Alternative input methods (MessagEase, Graffiti, adaptive keyboard, speech recognition…).
  • Use as Mac OS X “host” (kind of like a user partition).
  • Bluetooth/WiFi data transfer (no need for cables and docks).
  • MacBook Touch (unlikely, especially with MacBook Air, but it could be fun).
  • Automatic cell to VoIP-over-WiFi switching (saving cell minutes).

Of course, there are many obvious ones which will likely be implemented in software. I’m already impressed by the Omni Group’s pledge to develop a Touch version of their flagship GTD app.

The Geek Niche (Draft)

As explained before, I am not a “visual” thinker. Unlike some other people, I don’t draw witty charts all the time. However, I do occasionally think visually. In this case, I do “see” Venn diagrams and other cutesy graphics. What I’m seeing is the proportion of “geeks” in the world. And, to be honest, it’s relatively clear for me. I may be completely off, but I still see it clearly.

Of course, much of it is about specifying what we mean by “geek.” Which isn’t easy for someone used to looking at culture’s near-chaotic intricacies. At this point, I’m reluctant to define too clearly what I mean by “geek” because some people (self-professed geeks, especially) are such quick nitpickers that anything I say about the term is countered by more authorized definitions. I even expect comments to this blog entry to focus on how inaccurate my perception of geeks is, regardless of any other point I make.

Ah, well…

My intention isn’t to stereotype a group of people. And I don’t want to generalize. I just try to describe a specific situation which I find very interesting. In and of itself, the term “geek” carries a lot of baggage, much of which is problematic for anyone who is trying to understand an important part of the world in which we all live. But the term is remarkably useful as a way to package an ethos, a style, a perspective, an approach, a worldview, a personality type. Among those who could be called “geeks” are very diverse people. There might not even be a single set of criteria to define who should legitimately be called a “geek.” But “geekness” is now a reference for some actions, behaviors, markets, and even language varieties. Describing “geeks” as a group makes some sense, even if some people get very sensitive about the ways geeks are described.

For the record, I don’t really consider myself a geek. At the same time, I do enjoy geekness and I consider myself geek-friendly. It’s just that I’m not an actual insider to the geek coterie.

Thinking demographically has some advantages in terms of simplification. Simple is reassuring, especially in geek culture. So, looking at geek demographics on a broad scale…

First, the upper demographic limit for geekery. At the extreme, the Whole Wide World. What’s geeky about The World?

Number of things, actually. Especially in terms of some key technologies. Those technologies some people call “the tech world.” Consumer electronics, digital gadgets, computers…

An obvious tech factor for the upper limit of geekness is the ‘Net. The Internet is now mainstream. Not that everyone, everywhere truly lives online but the ‘Net is having a tremendous impact on the world as a whole. And Internet penetration keeps growing, in diverse parts of the world. This type of effect goes well with a certain type of “low-level geekness.” Along with widespread online communication, a certain approach to the world has become more prominent. A techno-enthusiastic and troubleshooting approach I often associate with engineering. Not that all engineers use this type of approach or that everyone who uses this type of approach is an engineer. But, in my mind, it’s an “engineering worldview” similar to an updated set of mechanistic metaphors.

Another obvious example of widespread geek-friendly technology is the cellphone. Obvious because extremely widespread (apparently, close to half of the human population of the planet is cellphoned). Yet, cellphones are the geekiest technology item available. What makes them geeky, in my eyes, is the way they’re embedded in a specific social dynamic emphasizing efficiency, mobility, and “always-on connectivity” along with work/life, group/individual, and public/private dichotomies.

The world’s geekiness can also be observed through other lenses, more concerned with the political and social drives of human behavior. Meritocracies, relatively non-judgemental ethics, post-national democracies, neo-liberal libertarianism, neo-Darwinian progress-mindedness, networked identities… Figures on populations “affected” by these geeky dimensions of socio-political life are hard to come by and it’s difficult to tell these elements apart from simple “Westernization.” But it’s easy to conceive of a geeky version of the world in which all of these elements are linked. In a way, it’s as if the world were dominated by geekdom.

Which brings me to the lower demographic limit for geekiness: How many “true geeks” are there? What are the figures for the “alpha geek” population?

My honest guesstimate? Five to ten million worldwide, concentrated in a relatively small number of urban areas in North America and Eurasia. I base this range on a number of hunches I got throughout the years. In fact, my impression is that there are about two million people in (or “oriented toward”) the United States who come close enough to the geek stereotype to qualify as “alpha geeks.” I haven’t looked at the academic literature on the subject but, judging from the numbers of early adopters in “geeky tech,” looking at FLOSS movements, thinking about desktop Linux, and listening to the “tech news,” I don’t think this figure is so far off. On top of these U.S. geeks are “worldwide geeks” who are much harder to count. Especially since geekness itself is a culture-specific concept. But, for some reason, I get the impression that those outside the United States who would be prototypical geeks number something like five million people, plus or minus two million.
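For the quant-minded, the arithmetic behind that guesstimate, spelled out with nothing beyond the figures in the text:

```python
# The guesstimate's arithmetic, using only the figures quoted in the text.
us_alpha_geeks = 2_000_000   # "about two million" in (or oriented toward) the U.S.
non_us_center = 5_000_000    # "something like five million people" elsewhere
non_us_margin = 2_000_000    # "plus or minus two million"

low = us_alpha_geeks + (non_us_center - non_us_margin)    # 5,000,000
high = us_alpha_geeks + (non_us_center + non_us_margin)   # 9,000,000
```

Five to nine million, which lines up (give or take the roundness of the guess) with the “five to ten million worldwide” range.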

All this surely sounds specious. In fact, I’m so not a quant dude, I really don’t care about the exact figure. But my feeling, here, is that this ultra-geeky population is probably comparable to a large metropolitan area.

Of course, geeks are dispersed throughout the world. Though there are “geek meccas” like Bangalore and the San Francisco Bay Area, geeks are often modern cosmopolitans. They are typically not “of a place” and they navigate through technology institutions rather than through native locales. Thanks to telecommuting, some geeks adopt a glocal lifestyle making connections outside of their local spheres yet constructing local realities, at least in their minds. In some cases, übergeeks are resolute loners who consciously try to avoid being tied to local circles.

Thanks in part to the “tech industry” connections of geek society, geek-friendly regions compete with one another on the world stage.

Scattered geeks have an impact on local communities and this impact can be disproportionately large in comparison to the size of the geek population.

Started this post last week, after listening to Leo Laporte’s TWiT “netcast.”

I wanted to finish this post but never got a round tuit. I wanted to connect this post with a few things about the connection between “geek culture” in the computer/tech industry and the “craft beer” and “coffee geek” movements. There was also the obvious personal connection to the subject. I’m now a decent ethnographic insider-outsider to geek culture. Despite (thanks to) the fact that, as a comment-spammer was just saying, I’m such a n00b.

Not to mention that I wanted to expand upon JoCo‘s career, attitude, and character (discussed during the TWiT podcast). And that was before I learned that JoCo himself was coming to Austin during but not through the expensive South by Southwest film/music/interactive festivals.

If I don’t stop myself, I even get the urge to talk about the politics of geek groups, especially in terms of idealism…

This thoughtful blogpost questioning the usefulness of the TED conference makes me want to push the “publish” button, even though this post isn’t ready. My comments about TED aren’t too dissimilar to many of the things which have appeared in the past couple of days. But I was going to focus on the groupthink, post-Weberian neo-liberalism, Well/Wired/GBN links, techy humanitarianism, etc.


Ah, well… 

Guess I should just RERO it and hope for the best. Maybe I’ll be able to leave those topics behind. RSN


Free As In Beer: The Case for No-Cost Software

To summarize the situation:

  1. Most of the software for which I paid a fee, I don’t really use.
  2. Most of the software I really use, I haven’t paid a dime for.
  3. I really like no-cost software.
  4. You might want to call me “cheap” but, if you’re developing “consumer software,” you may need to pay attention to the way people like me think about software.

No, I’m not talking about piracy. Piracy is wrong on a very practical level (not to mention legal and moral issues). Piracy and anti-piracy protection are in a dynamic that I don’t particularly enjoy. In some ways, forms of piracy are “ruining it for everyone.” So this isn’t about pirated software.

I’m not talking about “Free/Libre/Open Source Software” (FLOSS) either. I tend to relate to some of the views held by advocates of “Free as in Speech” or “Open” developments but I’ve had issues with FLOSS projects, in the past. I will gladly support FLOSS in my own ways but, to be honest, I ended up losing interest in some of the most promising projects out there. Not saying they’re not worth it. After all, I do rely on many of those projects. But in talking about “no-cost software,” I’m not talking about Free, Libre, or Open Source development. At least, not directly.

Basically, I was thinking about the complex equation which, for any computer user, determines the cash value of a software application. Most of the time, this equation is somehow skewed. And I end up frustrated when I pay for software and almost giddy when I find good no-cost software.
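
That “complex equation” can be sketched crudely. Here’s a toy model (all numbers hypothetical, chosen only to show the shape of the trade-off): the value of a software purchase to a user is the benefit from actual use, minus the price, minus the hassle of buying, licensing, and maintaining it.

```python
# Toy model of the "cash value" equation for a software purchase.
# All parameters are hypothetical; the point is the shape of the trade-off.

def software_value(hours_of_real_use, value_per_hour, price, hassle_cost):
    """Net value to the user: benefit from actual use minus what it costs."""
    return hours_of_real_use * value_per_hour - price - hassle_cost

# Paid app that barely got used (the QuickTime Pro scenario):
barely_used = software_value(hours_of_real_use=5, value_per_hour=2.0,
                             price=30.0, hassle_cost=10.0)

# No-cost app used constantly:
free_workhorse = software_value(hours_of_real_use=500, value_per_hour=0.5,
                                price=0.0, hassle_cost=5.0)

print(barely_used)     # negative: a frustrating purchase
print(free_workhorse)  # comfortably positive
```

The skew is easy to see: for no-cost software the price term drops out, so even modest use leaves the user ahead, while a paid app that goes unused is guaranteed to leave a bitter taste.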

An old but representative example of my cost-software frustration: QuickTime Pro. I paid for it a number of years ago, in preparation for a fieldwork trip. It seemed like a reasonable thing to do, especially given the fact that I was going to manipulate media files. When QuickTime was updated, my license stopped working. I was basically never able to use the QuickTime Pro features. And while it’s not a huge amount of money, the frustration of having paid for something I really didn’t need left me surprisingly bitter. It was a bad decision at the time, so I’m now less likely to buy software unless I really need it and really know how I will use it.

There’s an interesting exception to my frustration with cost-software: OmniOutliner (OO). I paid for it and have used it extensively for years. When I was “forced” to switch to Windows XP, OO was possibly the piece of software I missed the most from Mac OS X. And as soon as I was able to come back to the Mac, it’s one of the first applications I installed. But, and this is probably an important indicator, I don’t really use it anymore. Not because it lacks features I found elsewhere. But because I’ve had to adapt my workflow to OO-less conditions. I still wish there were an excellent cross-platform outliner for my needs. And, no, Microsoft OneNote isn’t it.

Now, I may not be a typical user. If the term weren’t so self-aggrandizing, I’d probably call myself a “Power User.” And, as I keep saying, I am not a coder. Therefore, I’m neither the prototypical “end user” nor the stereotypical “code monkey.” I’m just someone spending inordinate amounts of time in front of computers.

One dimension of my computer behavior which probably does put me in a special niche is that I tend to like trying out new things. Even more specifically, I tend to get overly enthusiastic about computer technology to then become disillusioned by said technology. Call me a “dreamer,” if you will. Call me “naïve.” Actually, “you can call me anything you want.” Just don’t call me to sell me things. 😉

Speaking of pressure sales. In a way, if I had truckloads of money, I might be a good target for software sales. But I’d be the most demanding user ever. I’d require things to work exactly like I expect them to work. I’d be exactly what I never am in real life: a dictator.

So I’m better off as a user of no-cost software.

I still end up making feature requests, on occasion. Especially with Open Source and other open development projects. Some developers might think I’m just complaining as I’m not contributing to the code base or offering solutions to a specific usage problem. Eh.

Going back to no-cost software. The advantage isn’t really that we, users, spend less money on the software distribution itself. It’s that we don’t really need to select the perfect software solution. We can just make do with what we have. Which is a huge “value-add proposition” in terms of computer technology, as counter-intuitive as this may sound to some people.

To break down a few no-cost options.

  • Software that came with your computer. With an Eee PC, iPhone, XO, or Mac, it’s actually an important part of the complete computing experience. Sure, there are always ways to expand the software offering. But the included software may become a big part of the deal. After all, the possibilities are already endless. Especially if you have ubiquitous Internet access.
  • Software which comes through a volume license agreement. This often works for Microsoft software, at least at large educational institutions. Even if you don’t like it so much, you end up using Microsoft Office because you have it on your computer for free and it does most of the things you want to do.
  • Software coming with a plan or paid service. Including software given by ISPs. These tend not to be “worth it.” Yet the principle (or “business model,” depending on which end of the deal you’re on) isn’t so silly. You already pay for a plan of some kind, you might as well get everything you need from that plan. Nobody (not even AT&T) has done it yet in such a way that it would be to everyone’s advantage. But it’s worth a thought.
  • “Webware” and other online applications. Call it “cloud computing” if you will (it was a buzzphrase, a few days ago). And it changes a lot of things. Not only does it simplify things like backup and migration, but it often makes for a seamless computer experience. When it works really well, the browser effectively disappears and you just work in a comfortable environment where everything you need (content, tools) is “just there.” This category is growing rather rapidly at this point but many tech enthusiasts were predicting its success a number of years ago. Typical forecasting, I guess.
  • Light/demo versions. These are actually less common than they once were, especially in terms of feature differentiation. Sure, you may still play the first few levels of a game in demo version, and some “express” or “lite” versions of software are still distributed for free as teasers for more complete software. But, like the shareware model, demo and light versions seem to have become a much less prominent part of the typical computer user’s life than they were just a few years ago.
  • Software coming from online services. I’m mostly thinking about Skype but it’s a software category which would include any program with a desktop component (a “download”) and an online component, typically involving some kind of individual account (free or paid). Part subscription model, part “Webware companion.” Most of Google’s software would qualify (Sketchup, Google Earth…). If the associated “retail software” were free, I wouldn’t hesitate to put WoW in this category.
  • Actual “freeware.” Much freeware could be included in other categories but there’s still an idea of a “freebie,” in software terms. Sometimes, said freeware is distributed with a view to getting people’s attention. Sometimes the freeware is just the result of a developer “scratching her/his own itch.” Sometimes it comes from lapsed shareware or even lapsed commercial software. Sometimes it’s “donationware” disguised as freeware. But, if only because there’s a “freeware” category in most software catalogs, this type of no-cost software needs to be mentioned.
  • “Free/Libre/Open Source Software.” Sure, I said earlier this was not what I was really talking about. But that was then and this is now. 😉 Besides, some of the most useful pieces of software I use do come from Free Software or Open Source. Mozilla Firefox is probably the best example. But there are many other worthy programs out there, including BibDesk, TeXShop, and FreeCiv. Though, to be honest, Firefox and Flock are probably the ones I use the most.
  • Pirated software (aka “warez”). While software piracy can technically let some users avoid the cost of purchasing a piece of software, the concept is directly tied with commercial software licenses. (It’s probably not piracy if the software distribution is meant to be open.) Sure, pirates “subvert” the licensing system for commercial software. But the software category isn’t “no-cost.” To me, there’s even a kind of “transaction cost” involved in the piracy. So even if the legal and ethical issues weren’t enough to exclude pirated software from my list of no-cost software options, the very practicalities of piracy put pirated software in the costly column, not in the “no-cost” one.

With all but the last category, I end up with most (but not all) of the software solutions I need. In fact, there are ways in which I’m better served now with no-cost software than I have ever been with paid software. I should probably make a list of these, at some point, but I don’t feel like it.

I mostly felt like assessing my needs, as a computer user. And though there always are many things I wish I could do but currently can’t, I must admit that I don’t really see the need to pay for much software.

Still… What I feel I need, here, is the “ultimate device.” It could be handheld. But I’m mostly thinking about a way to get ideas into a computer-friendly format. A broad set of issues about a very basic thing.

The spark for this blog entry was a reflection about dictation software. Not only have I been interested in speech technology for quite a while but I still bet that speech (recognition/dictation and “text-to-speech”) can become the killer app. I just think that speech hasn’t “come true.” It’s there, some people use it, the societal acceptance for it is likely (given cellphone penetration most anywhere). But its moment hasn’t yet come.

No-cost “text-to-speech” (TTS) software solutions do exist but are rather impractical. In the mid-1990s, I spent fifteen months doing speech analysis for a TTS research project in Switzerland. One of the best periods in my life. Yet my enthusiasm for current TTS systems has been dampened. I wish I could be passionate about TTS and other speech technology again. Maybe the reason I’m not is that we don’t have a “voice desktop,” yet. But, for this voice desktop (voicetop?) to happen, we need high-quality, continuous speech recognition. IOW, we need a “personal dictation device.” So, my latest 2008 prediction: we will get a voice device (smartphone?) which adapts to our voices and does very efficient and very accurate transcription of our speech. (A correlated prediction: people will complain about speech technology for a while before getting used to the continuous stream of public soliloquy.)

Dictation software is typically quite costly and complicated. Most users don’t see a need for dictation software, so they don’t see a need for speech technology in computing. Though I keep thinking that speech could improve my computing life, I’ve never purchased a speech processing package. Like OCR (which is also dominated by Nuance, these days), it seems to be the kind of thing which could be useful to everyone but ends up being limited to “vertical markets.” (As it so happens, I did end up buying an OCR program at some point and kept hoping my life would improve as a result of being able to transform hardcopies into searchable files. But I almost never used OCR, so my frustration with cost-software continues.)

Ah, well…

I Want It All: The Ultimate Handheld Device?

In a way, this is a short version of a couple of posts I’ve been planning. RERO‘s better than keeping drafts.

So, what do I want in the ultimate handheld device? Basically, everything. More specifically, I’ve been thinking about the advantages of merging technologies.

At first, I was mostly thinking about “wireless” in general. Something which could bring together WiFi (802.11), WiMAX, and (3G) cellular networks. The idea being that you can get the advantages from all of these so that the device can be online pretty much all the time. It’s a pipedream, of course, but it’s a fun dream to have.

And then, the release of location services on the iPhone and iPod touch made me think about some kind of hybrid positioning system, using GPS, Google’s cellphone-based positioning, and Skyhook‘s Wi-Fi Positioning System (WPS).

A recent article in USA Today explains Skyhook’s strategy:

Jobs, iPhone have Skyhook pointed in right direction

And the Skyhook site itself has some interesting scenarios for WPS use in navigation, social networking, content management, location-specific marketing, gaming, and tracking. It seems rather clear to me that positioning systems in general have a rather bright future. I also don’t really see a reason for one positioning system to exclude the others (apart from technological and financial issues).
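
One simple way such a hybrid system might combine sources (a sketch under assumed data, not how any actual device does it): each positioning source reports a fix with an error radius, and the device uses the most precise one available. A production system would fuse estimates statistically, but even naive selection shows why the sources complement rather than exclude each other.

```python
# Hypothetical sketch of hybrid positioning: pick the best available fix.
# Each source reports (lat, lon, accuracy_m); values below are made up.

def best_fix(fixes):
    """Return the fix with the smallest reported error radius (in metres)."""
    return min(fixes, key=lambda fix: fix["accuracy_m"])

fixes = [
    {"source": "cell", "lat": 30.27,    "lon": -97.75,    "accuracy_m": 800},
    {"source": "wifi", "lat": 30.2672,  "lon": -97.7431,  "accuracy_m": 40},
    {"source": "gps",  "lat": 30.26715, "lon": -97.74306, "accuracy_m": 8},
]

print(best_fix(fixes)["source"])  # → gps
```

Indoors or downtown, the GPS entry might be missing entirely, and the same selection quietly falls back to Wi-Fi or cell positioning. That graceful degradation is the practical argument for hybrids.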

Positioning will be especially useful if it ever becomes really commonplace. Part network effect, part glocalization.

Of course, there are still several issues to solve. Including privacy and safety concerns. But a good system would make it possible for the user to control her/his positioning information (when and where the user’s coordinates are made available, and how precise they are allowed to be). Even without positioning systems, many of us have been using online mapping services (including Google Maps) to reveal some details about our movements. Typically, we’re fine with even perfect strangers knowing that we’ve been through a public space in the past yet we may only provide precise and up-to-date location details to people we trust. There’s no reason a positioning system on a handheld device should only work in one situation.
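
Controlling “how precise” shared coordinates are can be as simple as rounding decimal degrees before publishing them. A rough sketch (coordinates below are an arbitrary point in Montreal; the precision tiers are my own invention): at the equator, one decimal place is roughly 11 km, and four decimal places roughly 11 m.

```python
# Hypothetical sketch: user-controlled location precision by rounding
# decimal degrees. One decimal ≈ 11 km, four decimals ≈ 11 m (at the equator).

def coarsen(lat, lon, decimals):
    """Round a coordinate pair to the precision the user allows."""
    return round(lat, decimals), round(lon, decimals)

exact = (45.508888, -73.561668)               # an arbitrary point in Montreal
for_strangers = coarsen(*exact, decimals=1)   # city/neighbourhood level
for_friends = coarsen(*exact, decimals=4)     # street level

print(for_strangers)  # (45.5, -73.6)
print(for_friends)    # (45.5089, -73.5617)
```

The same record could carry both versions, with the service deciding which one a given viewer gets, based on the trust level the user assigned.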

Now, I’m not saying that positioning is the “ultimate handheld device’s killer app.” But positioning is the kind of feature which opens up all sorts of possibilities.

And, actually, I’ve been thinking about GPS devices for quite a while. Unfortunately, most of them are either quite expensive or meant almost exclusively for car navigation or for outdoor activities. As a non-wealthy compulsive pedestrian who hasn’t been doing much outdoors in recent years, a dedicated GPS device never seemed that reasonable a purchase.

But as a semi-nomadic ethnographer, I often wished I had an easy way to record where I was. In fact, a positioning-enabled handheld device could be quite useful in ethnographic fieldwork. Several things could be made easier if we were able to geotag field material (including fieldnotes, still pictures, and audio recordings). And, of course, colleagues in archeology have been using GPS and GIS for quite a while.
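
Geotagging a fieldnote doesn’t require much: attach coordinates and a timestamp to each record and the material becomes searchable by place as well as by date. A minimal sketch (the record fields and example note are hypothetical, not any established fieldwork schema):

```python
# Hypothetical sketch: a geotagged fieldnote record.
import json
from datetime import datetime, timezone

def geotag_note(text, lat, lon):
    """Bundle a fieldnote with coordinates and a UTC timestamp."""
    return {
        "text": text,
        "lat": lat,
        "lon": lon,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

note = geotag_note("Interview at the taproom; ambient noise notes", 45.5231, -73.5817)
print(json.dumps(note, indent=2))
```

The same envelope works for still pictures and audio recordings; once everything carries coordinates, plotting a field site on a map is a rendering problem rather than a data-entry chore.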

Of course, any smartphone with a positioning system could help. Apple’s iPhone is one and we already know that smartphones compatible with Google’s Android will be able to have location-based functionalities. Given Google’s lead in terms of maps and cellphone-based positioning, those Android devices do sound rather close to the ultimate handheld device.