Category Archives: Google

Privilege: Library Edition

When I came out against privilege, over a month ago, I wasn’t thinking about libraries. But, last week, while running some errands at three local libraries (within an hour), I got to think about library privileges.

During that day, I first started thinking about library privileges because I was renewing my CREPUQ card at Concordia. With that card, graduate students and faculty members at a university in Quebec are able to get library privileges at other universities, a nice “perk” that we have. While renewing my card, I was told (or, more probably, reminded) that the card now gives me borrowing privileges at any university library in Canada through CURBA (Canadian University Reciprocal Borrowing Agreement).

My gut reaction: “Aw-sum!” (I was having a fun day).

It got me thinking about what it means to be an academic in Canada. Because I’ve also spent part of my still short academic career in the United States, I tend to compare the Canadian academe to US academic contexts. And while there are some impressive academic consortia in the US, I don’t think any of them offers as wide a set of library privileges as this one. If my count is accurate, there are 77 institutions involved in CURBA. University systems and consortia in the US typically include somewhere between ten and thirty institutions, usually within the same state or region. Even if members of both the “UC System” and “CalState” have similar borrowing privileges, it would only mean 33 institutions, less than half of CURBA (though the population of California is about 20% more than that of Canada as a whole). Some important university consortia through which I’ve had some privileges were the CIC (Committee on Institutional Cooperation), a group of twelve Midwestern universities, and the BLC (Boston Library Consortium), a group of twenty universities in New England. Even with full borrowing privileges in all three groups of university libraries, an academic would only have access to library material from 65 institutions.

Of course, the number of institutions isn’t that relevant if the libraries themselves have few books. But my guess is that the average size of a Canadian university’s library collection is quite comparable to that of its US equivalents, including in such well-endowed institutions as those in the aforementioned consortia and university systems. What’s more, I would guess that there might be a broader range of references across Canadian universities than in any region of the US. Not to mention that BANQ (Quebec’s national library and archives) is part of CURBA and that its collections overlap very little with those of a typical university library.

So, I was thinking about access to an extremely wide range of references given to graduate students and faculty members throughout Canada. We get this very nice perk, this impressive privilege, and we pretty much take it for granted.

Which eventually got me to think about my problem with privilege. Privilege implies a type of hierarchy with which I tend to be uneasy. Even (or especially) when I benefit from a top position. “That’s all great for us but what about other people?”

In this case, there are obvious “Others” like undergraduate students at Canadian institutions,  Canadian non-academics, and scholars at non-Canadian institutions. These are very disparate groups but they are all denied something.

Canadian undergrads are the most direct “victims”: they participate in Canada’s academe, like graduate students and faculty members, yet their access to resources is severely limited by comparison to those of us with CURBA privileges. Something about this strikes me as rather unfair. Don’t undergrads need access as much as we do? Is there really such a wide gap between someone working on an honours thesis at the end of a bachelor’s degree and someone starting work on a master’s thesis that the latter requires much wider access than the former? Of course, the main rationale behind this discrepancy in access to library material probably has to do with sheer numbers: there are many undergraduate students “fighting for the same resources” and there are relatively few graduate students and faculty members who need access to the same resources. Or something like that. It makes sense but it’s still a point of tension, as with any matter of privilege.

The second set of “victims” includes Canadians who happen to not be affiliated directly with an academic institution. While it may seem that their need for academic resources is more limited than that of students, many people in this category have a more unquenchable “thirst for knowledge” than many an academic. In fact, there are people in this category who could probably do a lot of academically-relevant work “if only they had access.” I mostly mean people who have an academic background of some sort but who are currently unaffiliated with formal institutions. But the “broader public” counts, especially when a specific topic becomes relevant to them. These are people who take advantage of public libraries but, as mentioned in the BANQ case, public and university libraries don’t tend to overlap much. For instance, it’s quite unlikely that someone without academic library privileges would have been able to borrow Visual Information Processing (Chase 1973), a proceedings book that I used as a source for a recent blogpost on expertise. Of course, “the public” is usually allowed to browse books in most university libraries in North America (apart from Harvard). But, depending on other practical factors, borrowing books can be much more efficient than browsing them in a library. I tend to hear from diverse people who would enjoy some kind of academic status for this very reason: library privileges matter.

A third category of “victims” of CURBA privileges are non-Canadian academics. Since most of them may only contribute indirectly to Canadian society, why should they have access to Canadian resources? As in any social context, the national academe defines insiders and outsiders. While academics are typically inclusive, this type of restriction seems to make sense. Yet many academics outside of Canada could benefit from access to resources broadly available to Canadian academics. In some cases, there are special agreements to allow outside scholars to get temporary access to local, regional, or national resources. Rather frequently, these agreements come with special funding, the outside academic being a special visitor, sometimes with even better access than some local academics. I have very limited knowledge of these agreements (apart from infrequent discussions with colleagues who benefitted from them) but my sense is that they are costly, cumbersome, and restrictive. Access to local resources is even more exclusive a privilege in this case than in the CURBA case.

Which brings me to my main point about the issue: we all need open access.

When I originally thought about how impressive CURBA privileges were, I was thinking through the logic of the physical library. In a physical library, resources are scarce, access to resources needs to be controlled, and library privileges have a high value. In fact, it costs an impressive amount of money to run a physical library. The money universities invest in their libraries is relatively “inelastic” and must figure quite prominently in their budgets. The “return” on that investment seems to me a bit hard to measure: is it a competitive advantage, does a better-endowed library make a university more cost-effective, do university libraries ever “recoup” any portion of the amounts spent?

Contrast all of this with a “virtual” library. My guess is that an online collection of texts costs less to maintain than a physical library by any possible measure. Because digital data may be copied at will, the notion of “scarcity” makes little sense online. Distributing millions of copies of a digital text doesn’t make the original text unavailable to anyone. As long as the distribution system is designed properly, the “transaction costs” in distributing a text of any length are probably much less than those associated with borrowing a book.  And the differences between “browsing” and “borrowing,” which do appear significant with physical books, seem irrelevant with digital texts.

These are all well-known points about online distribution. And they all seem to lead to the same conclusion: “information wants to be free.” Not “free as in beer.” Maybe not even “free as in speech.” But “free as in unchained.”

Open access to academic resources is still a hot topic. Though I do consider myself an advocate of “OA” (the “Open Access movement”), what I mean here isn’t so much about OA as opposed to TA (“toll-access”) in the case of academic journals. Physical copies of periodicals may usually not be borrowed, regardless of library privileges, and online resources are typically excluded from borrowing agreements between institutions. The connection between OA and my perspective on library privileges is that I think the same solution could solve both issues.

I’ve been thinking about a “global library” for a while. Like others, I take the Library of Alexandria as a model, but the texts would be online. It sounds utopian but my main notion, there, is that “library privileges” would be granted to anyone. Not only senior scholars at accredited academic institutions. Anyone. Of course, the burden of maintaining that global library would also be shared by anyone.

There are many related models, apart from the Library of Alexandria: the French «Encyclopédistes» during the Enlightenment, public libraries, national libraries (including the Library of Congress), Tim Berners-Lee’s original “World Wide Web” concept, Brewster Kahle’s Internet Archive, Google Books, etc. Though these models differ, they all point to the same basic idea: a “universal” collection with the potential for “universal” access. In historical perspective, this core notion of a “universal library” seems relatively stable.

Of course, there are many obstacles to a “global” or “universal” library. Including issues having to do with conflicts between social groups across the Globe or the current state of so-called “intellectual property.” These are all very tricky and I don’t think they can be solved in any number of blogposts. The main thing I’ve been thinking about, in this case, is the implications of a global library in terms of privileges.

Come to think of it, it’s possible that much of the resistance to a global library has to do with privilege: unlike me, some people enjoy privilege.

Answers on Expertise

As a follow-up on my previous post…

Quest for Expertise « Disparate.

(I was looking for the origin of the “10 years or 10,000 hours to be an expert” claim.)

Interestingly enough, that post is getting a bit of blog attention.

I’m so grateful for this attention that it made me tweet the following:

Trackbacks, pings, and blog comments are blogger gifts.

I also posted a question about this on Mahalo Answers (after the first comment, by Alejna, appeared on my blog, but before other comments and trackbacks appeared). I selected glaspell’s answer as the best answer (glaspell also commented on my blog entry).

At this point, my impression is that what is taken as a “rule” on expertise is a simplification of results from a larger body of research with an emphasis on work by K. Anders Ericsson but with little attention paid to primary sources.
The whole process is quite satisfying, to me. Not just because we might all gain a better understanding of how this “claim” became so generalized, but because the process as a whole shows both the powers and the limitations of the Internet. I tend to claim (publicly) that the ‘Net favours critical thinking (because we eventually take all claims with a grain of salt). But it also seems that, even with well-known research done in English, it can be rather difficult to follow all the connections across the literature. If you think about more obscure work in non-dominant languages, it’s easy to realize that Google’s dream of organizing the world’s information hasn’t yet come true.

By the by, I do realize that my quest was based on a somewhat arbitrary assumption: that this “rule of thumb” is now understood as a solid rule. But what I’ve noticed in popular media since 2006 leads me to believe that the claim is indeed taken as a hard and fast rule.

I’m not blaming anyone, in this case. I don’t think that anyone’s involvement in the “chain of transmission” was deliberately misleading and I don’t even think that it was that essential. As with many other ideas, what “sticks” is what seems to make sense in context. Actually, this strong tendency for “convenient” ideas to be more widely believed relates to a set of tricky issues with which academics have to deal, on a daily basis. Sagan’s well-known “baloney detector” is useful, here. But it isn’t in very wide use.

One thing which should also be clear: I’m not saying that Ericsson and other researchers have done anything shoddy or inappropriate. Their work is being used outside of its original context, which is often an issue.

Mass media coverage of academic research was the basis of a series of entries on the original Language Log, including one of my favourite blogposts, Mark Liberman’s Language Log: Raising standards — by lowering them. The main point, I think, is that secluded academics in the Ivory Tower do little to alleviate this problem.

But I digress.
And I should probably reply to the other comments on the entry itself.

Google for Educational Contexts

Interesting wishlist, over at tbarrett’s classroom ICT blog.

11 Google Apps Improvements for the Classroom | ICT in my Classroom.

In a way, Google is in a unique position in terms of creating the optimal set of classroom tools. And Google teams have an interest in educational projects (as made clear by Google for Educators, Google Summer of Code, Google Apps for schools…).
What seems to be missing is integration. Maybe Google is taking its time before integrating all of its services and apps. After all, the integration of Google Notebook and Google Bookmarks was fairly recent (and we can easily imagine a further integration with Google Reader). But some of us are a bit impatient. Or too enthusiastic about tools.

Because I just skimmed through the Google Chrome comic book, I got to thinking that, maybe, Google is getting ready to integrate its tools in a neat way. Not specifically meant for schools but, in the end, an integrated Google platform can be developed into an education-specific set of applications.
After all, apart from Google Scholar, we’re talking about pretty much the same tools as those used outside of educational contexts.

What tools am I personally thinking about? Almost everything Google does or has done could be useful in educational contexts. From Google Apps (which includes Google Docs, Gmail, Google Sites, GTalk, Gcal…) to Google Books and Google Scholar or even Google Earth, Google Translate, and Google Maps. Not to mention OpenSocial, YouTube, Android, Blogger, Sketchup, Lively…

Not that Google’s versions of all of these tools and services are inherently more appropriate for education than those developed outside of Google. But it’s clear that Google has an edge in terms of its technology portfolio. Can’t we just imagine a new kind of Learning Management System leveraging all the neat Google technologies and using a social networking model?

Educational contexts do have some specific requirements. Despite Google’s love affair with “openness,” schools typically require protection for different types of data. Some would also say that Google’s usual advertisement-supported model may be inappropriate for learning environments. The fact that Google Apps are ad-free for students, faculty, and staff might be a sign that Google does understand school-focused requirements.

Ok, I’m thinking out loud. But isn’t this what wishlists are about?

The Need for Social Science in Social Web/Marketing/Media (Draft)

[Been sitting on this one for a little while. Better RERO it, I guess.]

Sticking My Neck Out (Executive Summary)

I think that participants in many technology-enthusiastic movements which carry the term “social” would do well to learn some social science. Furthermore, my guess is that ethnographic disciplines are very well-suited to the task of teaching participants in these movements something about social groups.

Disclaimer

Despite the potentially provocative title and my explicitly stating a position, I mostly wish to think out loud about different things which have been on my mind for a while.

I’m not an “expert” in this field. I’m just a social scientist and an ethnographer who has been observing a lot of things online. I do know that there are many experts who have written many great books about similar issues. What I’m saying here might not seem new. But I’m using my blog as a way to at least write down some of the things I have in mind and, hopefully, discuss these issues thoughtfully with people who care.

Also, this will not be a guide on “what to do to be social-savvy.” Books, seminars, and workshops on this specific topic abound. But my attitude is that every situation needs to be treated in its own context, that cookie-cutter solutions often fail. So I would advise people interested in this set of issues to train themselves in at least a little bit of social science, even if much of the content of the training material seems irrelevant. Discuss things with a social scientist, hire a social scientist in your business, take a course in social science, and don’t focus on advice but on the broad picture. Really.

Clarification

Though they are all different, enthusiastic participants in “social web,” “social marketing,” “social media,” and other “social things online” do have some commonalities. At the risk of angering some of them, I’m lumping them all together as “social * enthusiasts.” One thing I like about the term “enthusiast” is that it can apply to both professionals and amateurs, to geeks and dabblers, to full-timers and part-timers. My target isn’t a specific group of people. I just observed different things in different contexts.

Links

Shameless Self-Promotion

A few links from my own blog, for context (and for easier retrieval):

Shameless Cross-Promotion

A few links from other blogs, to hopefully expand context (and for easier retrieval):

Some raw notes

  • Insight
  • Cluefulness
  • Openness
  • Freedom
  • Transparency
  • Unintended uses
  • Constructivism
  • Empowerment
  • Disruptive technology
  • Innovation
  • Creative thinking
  • Critical thinking
  • Technology adoption
  • Early adopters
  • Late adopters
  • Forced adoption
  • OLPC XO
  • OLPC XOXO
  • Attitudes to change
  • Conservatism
  • Luddites
  • Activism
  • Impatience
  • Windmills and shelters
  • Niche thinking
  • Geek culture
  • Groupthink
  • Idea horizon
  • Intersubjectivity
  • Influence
  • Sphere of influence
  • Influence network
  • Social butterfly effect
  • Cog in a wheel
  • Social networks
  • Acephalous groups
  • Ego-based groups
  • Non-hierarchical groups
  • Mutual influences
  • Network effects
  • Risk-taking
  • Low-stakes
  • Trial-and-error
  • Transparency
  • Ethnography
  • Epidemiology of ideas
  • Neural networks
  • Cognition and communication
  • Wilson and Sperber
  • Relevance
  • Global
  • Glocal
  • Regional
  • City-State
  • Fluidity
  • Consensus culture
  • Organic relationships
  • Establishing rapport
  • Buzzwords
  • Viral
  • Social
  • Meme
  • Memetic marketplace
  • Meta
  • Target audience

Let’s Give This a Try

The Internet is, simply, a network. Sure, technically it’s a meta-network, a network of networks. But that is pretty much irrelevant, in social terms, as most networks may be analyzed at different levels as containing smaller networks or being parts of larger networks. The fact remains that the ‘Net is pretty easy to understand, sociologically. It’s nothing new, it’s just a textbook example of something social scientists have been looking at for a good long time.

Though the Internet mostly connects computers (in many shapes or forms, many of them being “devices” more than the typical “personal computer”), the impact of the Internet is through human actions, behaviours, thoughts, and feelings. Sure, we can talk ad nauseam about the technical aspects of the Internet, but these topics have been covered a lot in the last fifteen years of intense Internet growth and a lot of people seem to be ready to look at other dimensions.

The category of “people who are online” has expanded greatly, in different steps. Here, Martin Lessard’s description of the Internet’s Six Cultures (Les 6 cultures d’Internet) is really worth a read. Martin’s post is in French but we also had a blog discussion about it in English. Not only are there more people online but those “people who are online” have become much more diverse in several respects. At the same time, there are clear patterns in who “online people” are and there are clear differences in uses of the Internet.

Groups of human beings are the very basic object of social science. Diversity in human groups is the very basis for ethnography. Ethnography is simply the description of (“writing about”) human groups conceived as diverse (“peoples”). As simple as ethnography can be, it leads to a very specific approach to society which is very compatible with all sorts of things relevant to “social * enthusiasts” on- and offline.

While there are many things online which may be described as “media,” comparing the Internet to “The Mass Media” is often the best way to miss “what the Internet is all about.” Sure, the Internet isn’t about anything (apart from connecting computers which, in turn, connect human beings). But to get actual insight into the ‘Net, one probably needs to free herself/himself of notions relating to “The Mass Media.” Put bluntly, McLuhan was probably a very interesting person and some of his ideas remain intriguing but fallacies abound in his work and the best thing to do with his ideas is to go beyond them.

One of my favourite examples of the overuse of “media”-based concepts is the issue of influence. In blogging, podcasting, or selling, the notion often is that, on the Internet as in offline life, “some key individuals or outlets are influential and these are the people by whom or channels through which ideas are disseminated.” Hence all the Technorati rankings and other “viewer statistics.” Old techniques and ideas from the times of radio and television expansion are used because it’s easier to think through advertising models than through radically new models. This is, in fact, when I tend to bring back my explanation of the “social butterfly effect”: quite frequently, “influence” online isn’t through specific individuals or outlets but, even when it is, those people are influential by virtue of connecting to diverse groups, not by the number of people they know. There are ways to analyze those connections but “measuring impact” ultimately misses the point.
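
To make that last point a bit more concrete, here is a minimal sketch (my own illustration, using Python and the networkx library; it isn’t taken from any of the research or rankings mentioned above). It contrasts degree centrality (roughly “how many people you know”) with betweenness centrality (roughly “how much you connect otherwise separate groups”), which is closer to what I mean by the “social butterfly effect.”

```python
# A toy network: two tight clusters joined by a single "social butterfly".
# Purely hypothetical data, for illustration only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c")])      # cluster 1
G.add_edges_from([("x", "y"), ("x", "z"), ("y", "z")])      # cluster 2
G.add_edges_from([("butterfly", "a"), ("butterfly", "x")])  # the bridge

degree = nx.degree_centrality(G)        # "number of people you know"
between = nx.betweenness_centrality(G)  # "how much you bridge diverse groups"

print(max(degree, key=degree.get))    # "a": a well-connected local, not the bridge
print(max(between, key=between.get))  # "butterfly": few contacts, most bridging
```

In this toy example, the butterfly has the fewest contacts yet the highest betweenness, which is one (very rough) way of putting “influence through connecting diverse groups, not audience size” into numbers.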

Yes, there is an obvious “qual. vs. quant.” angle, here. A major distinction between non-ethnographic and ethnographic disciplines in social sciences is that non-ethnographic disciplines tend to be overly constrained by “quantitative analysis.” Ultimately, any analysis is “qualitative” but “quantitative methods” are a very small and often limiting subset of the possible research and analysis methods available. Hence the constriction and what some ethnographers may describe as “myopia” on the part of non-ethnographers.

Gone Viral

The term “viral” is used rather frequently by “social * enthusiasts” online. I happen to think that it’s a fairly fitting term, even though it’s used more by extension than by literal meaning. To me, it relates rather directly to Dan Sperber’s “epidemiological” treatment of culture (see Explaining Culture) which may itself be perceived as resembling Dawkins’s well-known “selfish gene” ideas made popular by different online observers, but with something which I perceive to be (to use simple semiotic/semiological concepts) more “motivated” than the more “arbitrary” connections between genetics and ideas. While Sperber could hardly be described as an ethnographer, his anthropological connections still make some of his work compatible with ethnographic perspectives.

Analysis of the spread of ideas does correspond fairly closely with the spread of viruses, especially given the nature of the contacts which make transmission possible. One need not do much to spread a virus or an idea. This virus or idea may find “fertile soil” in a given social context, depending on a number of factors. Despite the disadvantages of extending analogies and core metaphors too far, the type of ecosystem/epidemiology analysis of social systems embedded in uses of the term “viral” does seem to help some specific people make sense of different things which happen online. In “viral marketing,” the type of informal, invisible, unexpected spread of recognition through word of mouth does relate somewhat to the spread of a virus. Moreover, the metaphor of “viral marketing” is useful in thinking about the lack of control the professional marketer may have over how her/his product is perceived. In this context, the term “viral” seems useful.

The Social

While “viral” seems appropriate, the even simpler “social” often seems inappropriately used. It’s not a ranty attitude which makes me comment negatively on the use of the term “social.” In fact, I don’t really care about the term itself. But I do notice that its use often obfuscates the obvious social character of the Internet.

To a social scientist, anything which involves groups is by definition “social.” Of course, some groups and individuals are more gregarious than others, some people are taken to be very sociable, and some contexts are more conducive to heightened social interactions. But social interactions happen in any context.
As an example I used (in French) in reply to this blog post, something as common as standing in line at a grocery store is representative of social behaviour and can be analyzed in social terms. Any Web page which is accessed by anyone is “social” in the sense that it establishes some link, however tenuous and asymmetric, between at least two individuals (someone who created the page and the person who accessed that page). Sure, it sounds like the minimal definition of communication (sender, medium/message, receiver). But what most people who talk about communication seem to forget (unlike Jakobson) is that all communication is social.

Sure, putting a comment form on a Web page facilitates a basic social interaction, making the page “more social” in the sense of making explicit social interaction on that page easier. And, of course, adding some features which facilitate the act of sharing data with one’s personal contacts is a step above the comment form in terms of making certain types of social interaction straightforward and easy. But, contrary to what Google Friend Connect implies, adding those features doesn’t suddenly make the site social. The features don’t make the site itself social; assuming some people visited it, it already had a social dimension. I’m not nitpicking on word use. I’m saying that using “social” in this way may blind some people to social dimensions of the Internet. And the consequences of overlooking how social the ‘Net is can be pretty harsh, in some cases.

Something similar may be said about the “Social Web,” one of the many definitions of “Web 2.0” which is used in some contexts (mostly, the cynic would say, “to make some tool appear ‘new and improved’”). The Web as a whole was “social” by definition. Granted, it lacked the ease of social interaction afforded by such venerable Internet classics as Usenet and email. But it was already making some modes of social interaction easier to perceive. No, this isn’t about “it’s all been done.” It’s about being oblivious to the social potential of tools which already existed. True, the period in Internet history known as “Web 2.0” (and the onset of the Internet’s sixth culture) may be associated with new social phenomena. But there is little evidence that the association is causal, that new online tools and services created a new reality which suddenly made it possible for people to become social online. This is one reason I like Martin Lessard’s post so much. Instead of postulating the existence of a brand new phenomenon, he talks about the conditions for some changes in both Internet use and the form the Web has taken.

Again, this isn’t about terminology per se. Substitute “friendly” for “social” and similar issues might come up (friendship and friendliness being disconnected from the social processes which underlie them).

Adoptive Parents

Many “social * enthusiasts” are interested in “adoption.” They want their “things” to be adopted. This is especially visible among marketers but even in social media there’s an issue of “getting people on board.” And some people, especially those without social science training, seem to be looking for a recipe.

Problem is, there probably is no such thing as a recipe for technology adoption.

Sure, some marketing practices from the offline world may work online. Sometimes, adapting a strategy from the material world to the Internet is very simple and the Internet version may be more effective than the offline version. But that doesn’t mean there is such a thing as a recipe. It’s a matter of either having some people who “have a knack for this sort of thing” (say, based on sensitivity to what goes on online) or of pure luck. Or it’s a matter of measuring success in different ways. But it isn’t based on a recipe. Especially not in the Internet sphere, which is changing so rapidly (despite some remarkably stable features).

Again, I’m partial to contextual approaches (“fully-customized solutions,” if you really must). Not just because I think there are people who can do this work very efficiently. But because I observe that “recipes” do little more than sell “best-selling books” and other items.

So, what can we, as social scientists, say about “adoption?” That technology is adopted based on the perceived fit between the tools and people’s needs/wants/goals/preferences. Not the simple “the tool will be adopted if there’s a need.” But a perception that there might be a fit between an amorphous set of social actors (people) and some well-defined tools (“technologies”). Recognizing this fit is extremely difficult and forcing it is extremely expensive (not to mention completely unsustainable). But social scientists do help in finding ways to adapt tools to different social situations.

Especially ethnographers. Because instead of surveys and focus groups, we challenge assumptions about what “must” fit. Our heads and books are full of examples which sound, in retrospect, like common sense but which had stumped major corporations with huge budgets. (Ask me about McDonald’s in Brazil or browse a cultural anthropology textbook, for more information.)

Recently, while reading about issues surrounding the OLPC’s original XO computer, I was glad to read the following:

John Heskett once said that the critical difference between invention and innovation was its mass adoption by users. (Niti Bhan, The emperor has designer clothes)

Not that this is a new idea, for social scientists. But I was glad that the social dimension of technology adoption was recognized.

In marketing and design spheres especially, people often think of innovation as individualized. While some individuals are particularly adept at leading inventions to mass adoption (Steve Jobs being a textbook example), “adoption comes from the people.” Yes, groups of people may be manipulated to adopt something “despite themselves.” But that kind of forced adoption is still dependent on a broad acceptance, by “the people,” of even the basic forms of marketing. This is very similar to the simplified version of the concept of “hegemony,” so common in both social sciences and humanities. In a hegemony (as opposed to a totalitarian regime), no coercion is necessary because the logic of the system has been internalized by people who are affected by it. Simple, but effective.

In online culture, adept marketers are highly valued. But I’m quite convinced that pre-online marketers already knew that they had to “learn society first.” One thing with almost anything happening online is that “the society” is boundless. Country boundaries usually make very little sense and the social rules of every local group will leak into even the simplest occasion. Some people seem to assume that the end result is a cultural homogenization, thereby not necessitating any adaptation besides the move from “brick and mortar” to online. Others (or the same people, actually) want to protect their “business models” by restricting tools or services based on country boundaries. In my mind, both attitudes are ineffective and misleading.

Sometimes I Feel Like a Motherless Child

I think the Cluetrain Manifesto can somehow be summarized through concepts of freedom, openness, and transparency. These are all very obvious (in French, the book title is something close to “the evident truths manifesto”). They’re also all very social.

Social scientists often become activists based on these concepts. And among social scientists, many of us are enthusiastic about the social changes which are happening in parallel with Internet growth. Not because of technology. But because of empowerment. People are using the Internet in their own ways, the one key feature of the Internet being its lack of centralization. While the lack of centralized control may be perceived as a “bad thing” by some (social scientists or not), there’s little argument that the ‘Net as a whole is out of the control of specific corporations or governments (despite the large degree of consolidation which has happened offline and online).

Especially in the United States, “freedom” is conceived as a basic right. But it’s also a basic concept in social analysis. As some put it: “somebody’s rights end where another’s begin.” But social scientists have a whole apparatus to deal with all the nuances and subtleties which are bound to come from any situation where people’s rights (freedom) may clash or even simply be interpreted differently. Again, not that social scientists have easy, ready-made answers on these issues. But we’re used to dealing with them. We don’t interpret freedom as a given.

Transparency is fairly simple and relates directly to how people manage information itself (instead of knowledge or insight). Radical transparency is giving as much information as possible to those who may need it. Everybody has a “right to learn” a lot of things about a given institution (instead of “right to know”), when that institution has a social impact. Canada’s Access to Information Act is quite representative of the move to transparency and use of this act has accompanied changes in the ways government officials need to behave to adapt to a relatively new reality.

Openness is an interesting topic, especially in the context of the so-called “Open Source” movement. Radical openness implies participation by outsiders, at least in the form of verbal feedback. The cluefulness of “opening yourself to your users” is made obvious in the context of successes by institutions which have at least portrayed themselves as open. What’s unfortunate, in my mind, is that many institutions now attempt to position themselves on the openness end of the “closed/proprietary to open/responsive” scale without doing much work to really open themselves up.

Communitas

Mottoes, slogans, and maxims like “build it and they will come,” “there’s a sucker born every minute,” “let them eat cake,” and “give them what they want” all fail to grasp the basic reality of social life: “they” and “we” are linked. We’re all different and we’re all connected. We all take part in groups. These groups are all associated with one another. We can’t simply behave the same way with everyone. Identity has two parts: sense of belonging (to an “in-group”) and sense of distinction (from an “out-group”). “Us/Them.”

Within the “in-group,” if there isn’t any obvious hierarchy, the sense of belonging can take the form that Victor Turner called “communitas” and which happens in situations giving real meaning to the notion of “community.” “Community of experience,” “community of practice.” Eckert and Wittgenstein brought to online networks. In a community, contacts aren’t always harmonious. But people feel they fully belong. A network isn’t the same thing as a community.

The World Is My Oyster

Despite the so-called “Digital Divide” (or, more precisely, the maintenance online of global inequalities), the ‘Net is truly “Global.” So is the phone, now that cellphones are accomplishing the “leapfrog effect.” But this one Internet we have (i.e., not Internet2 or other such specialized meta-network) is reaching everywhere through a single set of compatible connections. The need for cultural awareness is increased, not alleviated by online activities.

Release Early, Release Often

Among friends, we call it RERO.

The RERO principle is a multiple-pass system. Instead of waiting for the right moment to release a “perfect product” (say, a blogpost!), the “work in progress” is provided widely, garnering feedback which will be integrated in future “product versions.” The RERO approach can be unnerving to “product developers,” but it has proved its value in online-savvy contexts.

I use “product” in a broad sense because the principle applies to diverse contexts. Furthermore, the RERO principle helps shift the focus from “product,” back into “process.”

The RERO principle may imply some “emotional” or “psychological” dimensions, such as humility and the acceptance of failure. At some level, differences between RERO and “trial-and-error” methods of development appear insignificant. Those who create something should not expect the first try to be successful and should recognize mistakes to improve on the creative process and product. This is similar to the difference between “rehearsal” (low-stakes experimentation with a process) and “performance” (with responsibility, by the performer, for evaluation by an audience).

Though applications of the early/often concept to social domains are mostly satirical, there is a social dimension to the RERO principle. Releasing a “product” implies a group, a social context.

The partial and frequent “release” of work to “the public” relates directly to openness and transparency. Frequent releases create a “relationship” with human beings. Sure, many of these are “Early Adopters” who are already overrepresented. But the rapport established between an institution and people (users/clients/customers/patrons…) can be transferred more broadly.

Releasing early seems to shift the limit between rehearsal and performance. Instead of being able to make mistakes on your own, your mistakes are shown publicly and your success is directly evaluated. Yet a somewhat reverse effect can occur: evaluation of the end result becomes a lower-stakes affair at different points in the project because expectations have shifted toward the “lower” end. This is probably the logic behind Google’s much discussed propensity to call all its products “beta.”

While the RERO principle does imply a certain openness, the expectation that each release might integrate all the feedback “users” have given is not fundamental to releasing early and frequently. The expectation is set by a specific social relationship between “developers” and “users.” In geek culture, especially when users are knowledgeable enough about technology to make elaborate wishlists, the expectation to respond to user demand can be quite strong, so much so that developers may perceive a sense of entitlement on the part of “users” and grow some resentment out of the situation. “If you don’t like it, make it yourself.” Such a situation is rather common in FLOSS development: since “users” have access to the source code, they may be expected to contribute to the development project. When “users” not only fail to fulfil expectations set by open development but even have the gumption to ask developers to respond to demands, conflicts may easily occur. And conflicts are among the things which social scientists study most frequently.

Putting the “Capital” Back into “Social Capital”

In the past several years, “monetization” (transforming ideas into currency) has become one of the major foci of anything happening online. Anything which can be a source of profit generates an immediate (and temporary) “buzz.” The value of anything online is measured through typical currency-based economics. The relatively recent movement toward “social” whatever is not only representative of this tendency, but might be seen as its climax: nowadays, even social ties can be sold directly, instead of being part of a secondary transaction. As some people say, “The relationship is the currency” (or “the commodity,” or “the means to an end”). Fair enough, especially if these people understand what social relationships entail. But still strange, in context, to see people “selling their friends,” sometimes in a rather literal sense, when social relationships are conceived as valuable. After all, “selling the friend” transforms that relationship, diminishes its value. Ah, well, maybe everyone involved is just cynical. Still, even their cynicism contributes to the system. But I’m not judging. Really, I’m not. I’m just wondering…
Anyhoo, the “What are you selling anyway” question makes as much sense online as it does with telemarketers and other greed-focused strangers (maybe “calls” are always “cold,” online). It’s just that the answer isn’t always so clear when the “business model” revolves around creating, then breaking a set of social expectations.
Me? I don’t sell anything. Really, not even my ideas or my sense of self. I’m just not good at selling. Oh, I do promote myself and I do accumulate social capital. As social butterflies are wont to do. The difference is, in the case of social butterflies such as myself, no money is exchanged and the social relationships are, hopefully, intact. This is not to say that friends never help me or never receive my help in a currency-friendly context. It mostly means that, in our cases, the relationships are conceived as their own rewards.
I’m consciously not taking the moral high ground, here, though some people may easily perceive this position as the morally superior one. I’m not even talking about a position. Just about an attitude to society and to social relationships. If you will, it’s a type of ethnographic observation from an insider’s perspective.

Makes sense?

Handhelds for the Rest of Us?

Ok, it probably shouldn’t become part of my habits but this is another repost of a blog comment motivated by the OLPC XO.

This time, it’s a reply to Niti Bhan’s enthusiastic blogpost about the eeePC: Perspective 2.0: The little eeePC that could has become the real “iPod” of personal computing

This time, I’m heavily editing my comments. So it’s less of a repost than a new blogpost. In some ways, it’s partly a follow-up to my “Ultimate Handheld Device” post (which ended up focusing on spatial positioning).

Given the OLPC context, the angle here is, hopefully, a culturally aware version of “a handheld device for the rest of us.”

Here goes…

I think there’s room in the World for a device category more similar to handhelds than to subnotebooks. Let’s call it “handhelds for the rest of us” (HftRoU). Something between a cellphone, a portable gaming console, a portable media player, and a personal digital assistant. Handheld devices exist which cover most of these features/applications, but I’m mostly using this categorization to think about the future of handhelds in a globalised World.

The “new” device category could serve as the inspiration for a follow-up to the OLPC project. One thing about which I keep thinking, in relation to the “OLPC” project, is that the ‘L’ part was too restrictive. Sure, laptops can be great tools for students, especially if these students are used to (or need to be trained in) working with and typing long-form text. But I don’t think that laptops represent the most “disruptive technology” around. If we think about their global penetration and widespread impact, cellphones are much closer to the leapfrog effect about which we all have been writing.

So, why not just talk about a cellphone or smartphone? Well, I’m trying to think both more broadly and more specifically. Cellphones are already helping people empower themselves. The next step might be to add selected features which bring them closer to the OLPC dream. Also, since cellphones are widely distributed already, I think it’s important to think about devices which may complement cellphones. I have some ideas about non-handheld tools which could make cellphones even more relevant in people’s lives. But they will have to wait for another blogpost.

So, to put it simply, “handhelds for the rest of us” (HftRoU) are somewhere between the OLPC XO-1 and Apple’s original iPhone, in terms of features. In terms of prices, I dream that it could be closer to that of basic cellphones which are in the hands of so many people across the globe. I don’t know what that price may be but I heard things which sounded like a third of the price the OLPC originally had in mind (so, a sixth of the current price). Sure, it may take a while before such a low cost can be reached. But I actually don’t think we’re in a hurry.

I guess I’m just thinking of the electronics (and global) version of the Ford Model T. With more solidarity in mind. And cultural awareness.

Google’s Open Handset Alliance (OHA) may produce something more appropriate to “global contexts” than Apple’s iPhone. In comparison with Apple’s iPhone, devices developed by the OHA could be better adapted to the cultural, climatic, and economic conditions of those people who don’t have easy access to the kind of computers “we” take for granted. At the very least, the OHA has good representation on at least three continents and, like the old OLPC project, the OHA is officially dedicated to openness.

I actually care fairly little about which teams will develop devices in this category. In fact, I hope that new manufacturers will spring up in some local communities and that major manufacturers will pay attention.

I don’t care about who does it; I’m mostly interested in what the devices will make possible. Learning, broadly speaking. Communicating, in different ways. People empowering themselves, generally.

One thing I have in mind, and which deviates from the OLPC mission, is that there should be appropriate handheld devices for all age-ranges. I do understand the focus on 6-12 year-olds the old OLPC had. But I don’t think it’s very productive to only sell devices to that age-range. Especially not in those parts of the world (i.e., almost anywhere) where generation gaps don’t imply that children are isolated from adults. In fact, as an anthropologist, I react rather strongly to the thought that children should be the exclusive target of a project meant to empower people. But I digress, as always.

I don’t tend to be a feature-freak but I have been thinking about the main features the prototypical device in this category should have. It’s not a rigid set of guidelines. It’s just a way to think out loud about technology’s integration in human life.

The OS and GUI, which seem like major advantages of the eeePC, could certainly be of the mobile/handheld type instead of the desktop/laptop type. The usual suspects: Symbian, NewtonOS, Android, Zune, PalmOS, Cocoa Touch, embedded Linux, Playstation Portable, WindowsCE, and Nintendo DS. At a certain level of abstraction, there are so many commonalities between all of these that it doesn’t seem very efficient to invent a completely new GUI/OS “paradigm,” like OLPC’s Sugar was apparently trying to do.

The HftRoU require some form of networking or wireless connectivity. WiFi (802.11*), GSM, UMTS, WiMAX, Bluetooth… It doesn’t need to be extremely fast, but it should be flexible and it absolutely cannot be cost-prohibitive. IP might make much more sense than, say, SMS/MMS, but a lot can be done with any kind of data transmission between devices. XO-style mesh networking could be a very interesting option. As VoIP has proven, voice can efficiently be transmitted as data, so “voice networks” aren’t necessary.

My sense is that a multitouch interface with an accelerometer would be extremely effective. Yes, I’m thinking of Apple’s Touch devices and MacBooks. As well as about the Microsoft Surface, and Jeff Han’s Perceptive Pixel. One thing all of these have shown is how “intuitive” it can be to interact with a machine using gestures. Haptic feedback could also be useful but I’m not convinced it’s “there yet.”

I’m really not sure a keyboard is very important. In fact, I think that keyboard-focused laptops and tablets are the wrong basis for thinking about “handhelds for the rest of us.” Bear in mind that I’m not thinking about devices for would-be office workers or even programmers. I’m thinking about the broadest user base you can imagine. “The Rest of Us” in the sense of, those not already using computers very directly. And that user base isn’t that invested in (or committed to) touch-typing. Even people who are very literate don’t tend to be extremely efficient typists. If we think about global literacy rates, typing might be one thing which needs to be leapfrogged. After all, a cellphone keypad can be quite effective in some hands and there are several other ways to input text, especially if typing isn’t too ingrained in you. Furthermore, keyboards aren’t that convenient in multilingual contexts (i.e., in most parts of the world). I say: avoid the keyboard altogether, make it available as an option, or use a virtual one. People will complain. But it’s a necessary step.

If the device is to be used for voice communication, some audio support is absolutely required. Even if voice communication isn’t part of it (and I’m not completely convinced it’s the one required feature), audio is very useful, IMHO (I’m an aural guy). In some parts of the world, speakers are much favoured over headphones or headsets. But I personally wish that at least some HftRoU could have external audio inputs/outputs. Maybe through USB or an iPod-style connector.

A voice interface would be fabulous, but there still seem to be technical issues with both speech recognition and speech synthesis. I used to work in that field and I keep dreaming, like Bill Gates and others do, that speech will finally take the world by storm. But maybe the time still hasn’t come.

It’s hard to tell what size the screen should be. There probably needs to be a range of devices with varying screen sizes. Apple’s Touch devices prove that you don’t need a very large screen to have an immersive experience. Maybe some HftRoU screens should in fact be larger than those of an iPhone or iPod touch. Especially if people are to read or write long-form text on them. Maybe the eeePC had it right. Especially if the devices’ form factor is more like a big handheld than like a small subnotebook (i.e., slimmer than an eeePC). One reason form factor matters, in my mind, is that it could make the devices “disappear.” That, and the difference between having a device on you (in your pocket) and carrying a bag with a device in it. Form factor was a big issue with my Newton MessagePad 130. As the OLPC XO showed, cost and power consumption are also important issues regarding screen size. I’d vote for a range of screens between 3.5 inches (iPhone) and 8.9 inches (eeePC 900) with a rather high resolution. A multitouch version of the XO’s screen could be a major contribution.

In terms of both audio and screen features, some consideration should be given to adaptive technologies. Most of us take for granted that “almost anyone” can hear and see. We usually don’t perceive major issues in the fact that “personal computing” typically focuses on visual and auditory stimuli. But if these devices truly are “for the rest of us,” they could help empower visually- or hearing-impaired individuals, who are often marginalized. This is especially relevant in the logic of humanitarianism.

HftRoU need as much autonomy from a power source as possible. Both in terms of the number of hours devices can be operated without needing to be connected to a power source and in terms of flexibility in power sources. Power management is a major technological issue with portable, handheld, and mobile devices. Engineers are hard at work trying to find as many solutions to this issue as they can. This was, obviously, a major area of research for the OLPC. But I’m not even sure the solutions they have found are the only relevant ones for what I imagine HftRoU to be.

GPS could have interesting uses, but doesn’t seem very cost-effective. Other “wireless positioning systems” (à la Skyhook) might represent a more rational option. Still, I think positioning systems are one of the next big things. Not only for navigation or for location-based targeting. But for a set of “unintended uses” which are the hallmark of truly disruptive technology. I still remember an article (probably in the venerable Wired magazine) about the use of GPS/GIS for research into climate change. Such “unintended uses” are, in my mind, much closer to the constructionist ideal than the OLPC XO’s unified design can ever get.

Though a camera seems to be a given in any portable or mobile device (even the OLPC XO has one), I’m not yet that clear on how important it really is. Sure, people like taking pictures or filming things. Yes, pictures taken through cellphones have had a lasting impact on social and cultural events. But I still get the feeling that the main reason cameras are included on so many devices is for impulse buying, not as a feature to be used so frequently by all users. Also, standalone cameras probably have a rather high level of penetration already and it might be best not to duplicate this type of feature. But, of course, a camera could easily be a differentiating factor between two devices in the same category. I don’t think that cameras should be absent from HftRoU. I just think it’s possible to have “killer apps” without cameras. Again, I’m biased.

Apart from networking/connectivity uses, Bluetooth seems like a luxury. Sure, it can be neat. But I don’t feel it adds that much functionality to HftRoU. Yet again, I could be proven wrong. Especially if networking and other inter-device communication are combined. At some abstract level, there isn’t that much difference between exchanging data across a network and controlling a device with another device.

Yes, I do realize I pretty much described an iPod touch (or an iPhone without camera, Bluetooth, or cellphone fees). I’ve been lusting over an iPod touch since September and it does colour my approach. I sincerely think the iPod touch could serve as an inspiration for a new device type. But, again, I care very little about which company makes that device. I don’t even care about how open the operating system is.

As long as our minds are open.

How Do I Facebook?

In response to David Giesberg.

How Do You Facebook? | david giesberg dot com

How have I used Facebook so far?

  • Reconnected with old friends.
    • Bringing some to Facebook
    • Noticing some mutual friends.
  • Made some new contacts.
    • Through mutual acquaintances and foafs.
    • Through random circumstances.
  • Thought about social networks from an ethnographic perspective.
    • Discussed social networks in educational context.
    • Blogged about online forms of social networking.
  • “Communicated”
    • Sent messages to contacts in a relatively unintrusive way (less “pushy” than regular email).
    • Used “wall posts” to have short, public conversations about diverse items.
  • Micro-/nanoblogged, social-bookmarked:
    • Shared content (links, videos…) with contacts.
    • Found and discussed shared items.
    • Used my “status update” to keep contacts updated on recent developments in my life (something I rarely do in my blogposts).
  • Managed something of a public persona.
    • Maintained a semi-public profile.
    • Gained some social capital.
  • Found an alternative to Linkup/Upcoming/MeetUp/GCal?
    • Kept track of several events.
    • Organized a few events.
  • Had some aimless fun:
    • Teased people through their walls.
    • Answered a few quizzes.
    • Played a few games.
    • Discovered bands through contacts who “became fans” of them (I don’t use iLike).

I Want It All: The Ultimate Handheld Device?

In a way, this is a short version of a couple of posts I’ve been planning. RERO‘s better than keeping drafts.

So, what do I want in the ultimate handheld device? Basically, everything. More specifically, I’ve been thinking about the advantages of merging technologies.

At first, I was mostly thinking about “wireless” in general. Something which could bring together WiFi (802.11), WiMAX, and (3G) cellular networks. The idea being that you can get the advantages from all of these so that the device can be online pretty much all the time. It’s a pipedream, of course, but it’s a fun dream to have.
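
To make that dream a tiny bit more concrete, here’s a toy Python sketch of the basic behaviour I have in mind: the device simply grabs whichever radio is currently usable, in some order of preference. The radio names and the preference order are my own assumptions, not any real device’s API.

```python
# Hypothetical sketch: pick the "best" currently-available radio so the
# device stays online as much as possible. Names and preference order
# are assumptions, not a real device API.

PREFERRED_ORDER = ["wifi", "wimax", "cellular_3g"]  # cheapest/fastest first

def pick_connection(available):
    """Return the most preferred radio among those currently available."""
    for radio in PREFERRED_ORDER:
        if radio in available:
            return radio
    return None  # offline, for now

print(pick_connection({"cellular_3g"}))          # -> 'cellular_3g'
print(pick_connection({"wifi", "cellular_3g"}))  # -> 'wifi'
print(pick_connection(set()))                    # -> None
```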

And then, the release of location services on the iPhone and iPod touch made me think about some kind of hybrid positioning system, using GPS, Google’s cellphone-based positioning, and Skyhook‘s Wi-Fi Positioning System (WPS).

A recent article in USA Today explains Skyhook’s strategy:

Jobs, iPhone have Skyhook pointed in right direction – USATODAY.com

And the Skyhook site itself has some interesting scenarios for WPS use in navigation, social networking, content management, location-specific marketing, gaming, and tracking. It seems rather clear to me that positioning systems in general have a rather bright future. I also don’t really see a reason for one positioning system to exclude the others (apart from technological and financial issues).
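
As a rough illustration of how such systems could complement rather than exclude each other, here’s a minimal Python sketch that fuses whatever position fixes are available, giving more weight to the more accurate ones. The inverse-variance weighting and the sample numbers are assumptions for illustration, not how Skyhook or Google actually do it.

```python
# Hypothetical sketch: fuse whatever position fixes are available
# (GPS, cell-tower, Wi-Fi positioning) into one estimate, weighting
# more accurate fixes more heavily. The weighting scheme is an
# assumption for illustration only.

def fuse_fixes(fixes):
    """fixes: list of (lat, lon, accuracy_in_metres). Returns (lat, lon)."""
    if not fixes:
        return None
    weights = [1.0 / (acc ** 2) for _, _, acc in fixes]
    total = sum(weights)
    lat = sum(w * f[0] for w, f in zip(weights, fixes)) / total
    lon = sum(w * f[1] for w, f in zip(weights, fixes)) / total
    return lat, lon

fixes = [
    (45.5088, -73.5617, 10.0),   # GPS: precise but slow to get a lock
    (45.5100, -73.5650, 500.0),  # cell-tower: coarse but always on
    (45.5090, -73.5620, 40.0),   # Wi-Fi positioning: in-between
]
print(fuse_fixes(fixes))
```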

Positioning will be especially useful if it ever becomes really commonplace. Part network effect, part glocalization.

Of course, there are still several issues to solve, including privacy and safety concerns. But a good system would make it possible for the user to control her/his positioning information (when and where the user’s coordinates are made available, and how precise they are allowed to be). Even without positioning systems, many of us have been using online mapping services (including Google Maps) to reveal some details about our movements. Typically, we’re fine with even perfect strangers knowing that we’ve been through a public space in the past, yet we may only provide precise and up-to-date location details to people we trust. There’s no reason a positioning system on a handheld device should only work in one of these situations.
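
Here’s a toy Python sketch of what that kind of user control might look like: coordinates get rounded more or less aggressively depending on how much the requester is trusted. The trust levels and rounding rules are invented for the example.

```python
# Hypothetical sketch: reveal positions at a precision that depends on
# who is asking. Trust levels and rounding choices are assumptions.

PRECISION_BY_TRUST = {
    "stranger": 0,  # whole degrees: roughly "which city/region"
    "contact": 2,   # about a kilometre: "which neighbourhood"
    "trusted": 4,   # about ten metres: close to the actual spot
}

def position_for(requester_trust, lat, lon):
    digits = PRECISION_BY_TRUST.get(requester_trust)
    if digits is None:
        return None  # unknown requesters get nothing
    return round(lat, digits), round(lon, digits)

print(position_for("stranger", 45.508888, -73.561668))  # (46.0, -74.0)
print(position_for("trusted", 45.508888, -73.561668))   # (45.5089, -73.5617)
```

Whether that rounding happens on the device itself or on some server is a separate design choice, of course.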

Now, I’m not saying that positioning is the “ultimate handheld device’s killer app.” But positioning is the kind of feature which opens up all sorts of possibilities.

And, actually, I’ve been thinking about GPS devices for quite a while. Unfortunately, most of them are either quite expensive or meant almost exclusively for car navigation or for outdoor activities. As a non-wealthy compulsive pedestrian who hasn’t been doing much outdoors in recent years, a dedicated GPS device never seemed that reasonable a purchase.

But as a semi-nomadic ethnographer, I often wished I had an easy way to record where I was. In fact, a positioning-enabled handheld device could be quite useful in ethnographic fieldwork. Several things could be made easier if we were able to geotag field material (including fieldnotes, still pictures, and audio recordings). And, of course, colleagues in archeology have been using GPS and GIS for quite a while.
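
As a small illustration (not a real fieldwork tool, just a sketch of the idea), geotagged fieldnotes could be as simple as notes that carry coordinates and a timestamp, which later makes them easy to sort, filter, or put on a map:

```python
# Hypothetical sketch: attach position and time to field material so it
# can later be sorted, mapped, or cross-referenced. Not an existing tool.

from datetime import datetime, timezone

def geotag(note, lat, lon, kind="fieldnote"):
    return {
        "kind": kind,  # fieldnote, photo, audio...
        "text": note,
        "lat": lat,
        "lon": lon,
        "when": datetime.now(timezone.utc).isoformat(),
    }

notes = [
    geotag("Interview with market vendor about pricing.", 45.5360, -73.6130),
    geotag("Ambient recording, street musicians.", 45.5170, -73.5673, kind="audio"),
]

# Later: everything recorded north of a given latitude, say.
print([n["text"] for n in notes if n["lat"] > 45.52])
```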

Of course, any smartphone with a positioning system could help. Apple’s iPhone is one and we already know that smartphones compatible with Google’s Android will be able to have location-based functionalities. Given Google’s lead in terms of maps and cellphone-based positioning, those Android devices do sound rather close to the ultimate handheld device.

Less Than 30 Minutes

Nice!

At 20:27 (EST) on Saturday, November 17, 2007, I post a blog entry on the archaic/rare French term «queruleuse» (one equivalent of “querulous”). At 20:54 (EST) the same day, Google is already linking my main blog page as the first result containing the term “queruleuse” and as the fourth result containing the term “querulente.” At that point in time, the only other result for “queruleuse” was a Google Books page. Interestingly enough, a search in Google Books directly lists other books containing that term, including different versions of the same passage. These other books do not currently show up in the main Google search for that term. And blogs containing links to this blog are now (over two hours after my «queruleuse» post) showing above the Google Books result in search results.

Now, there’s nothing very extraordinary, here. The term «queruleuse» is probably not the proper version of the term. In fact, «querulente» seems a bit more common. Also, “querulous” and “querulent” both exist in English, and their definitions seem fairly similar to the concept to which «queruleuse» was supposed to refer. So, no magic, here.

But I do find it very interesting that it takes Google less than half an hour to update its database and show my main page as the first result for a term which exists in its own Google Books database.

I guess the reason I find it so interesting is that I have thought a bit about SEO, Search Engine Optimization. I usually don’t care about such issues but a couple of things made me think about Google’s PageRank specifically.

One was that someone recently left a comment on this very blog (my main blog, among several), asking how long it took me to get a PageRank of 5. I don’t know the answer but it seems to me that my PageRank hasn’t varied since pretty much the beginning. I don’t use the Google Toolbar in my main browser so I don’t really know. But when I did look at the PR indicator on this blog, it seemed to be pretty much always at the midway point and I assumed it was just normal. What’s funny is that, at a couple of Yulblog meetings more than a year ago, someone mentioned my PageRank, trying to figure out why it was so high. I checked that Yulblogger’s blog recently and it has a PR of 6, IIRC. Maybe even 7. (Pretty much an A-List blogger, IMHO.)

The other thing which made me think about PageRank is a discussion about it on a recent episode of the This Week in Tech (TWiT) “netcast” (or “podcast,” as everybody else would call it). On that episode, Chaos Manor author Jerry Pournelle mused about PageRank and its inability to provide a true measure of just about anything. Though most people would agree that PageRank is a less than ideal measure of popularity, influence, or even relevance, Pournelle made the point more strongly than the consensus opinion among bloggers would. I tend to agree with Pournelle. 😉

Of course, some people probably think that I’m a sore loser and that the reason I make claims about the irrelevance of PageRank is that I’d like to rank higher in the blogosphere’s hierarchy. But, honestly, I had no idea that PR5 might be a decent rank until this commenter asked me about it. Even when the aforementioned Yulblogger talked about it, I didn’t understand that it was supposed to be a rather significant number. I just thought this blogger was teasing (despite not being a teaser).

Answering the commenter’s question as to when my PR reached 5, I talked about the rarity of my name. Basically, I can always rely on my name being available on almost any service. Things might change if a distant cousin gets really famous really soon, of course… ;-) In fact, I’m wondering if talking about this on my blog might push someone to use my name for some service just to tease/annoy me. I guess there could even be more serious consequences. But, in the meantime, I’m having fun with my name’s rarity. And I’m assuming this rarity is a factor in my PageRank.

Problem is, this isn’t my only blog with my name in the domain. One of the others is on Google’s very own Blogger platform. So I’m guessing other factors contribute to this (my main) blog’s PageRank.

One factor is likely to be my absurdly long list of categories. The reason for this long list is that I was originally using categories as tags, linked to Technorati tags. Actually, I recently shortened this list significantly by transforming many categories into tags. It’s funny that the PageRank-interested commenter replied to this very same post about categories and tags, since I was then positing that the modification to my categories list would decrease the number of visits to this blog. Though it’s hard for me to assess an actual causal link, I do get significantly fewer visits since that time. And I probably do get a few more comments than before (which is exactly what I wanted). AFAICT, WordPress.com tags still work as Technorati tags so I have no idea how the change could have had an impact. Come to think of it, the impact is probably spurious.

A related factor is my absurdly long blogroll. I don’t “do it on purpose,” I just add pretty much any blog I come across. In fact, I’ve been adding most blogs authored by MyBlogLog visitors to this blog (those you see on the right, here). Kind of as a courtesy to them for having visited my blog. And I do the same thing with blogs managed by people who comment on this blog. I even do it with blogs by pretty much any Yulblogger I’ve come across, somehow. All of this is meant as a way to collect links to a wide diversity of blogs, using arbitrary selection criteria. Just because I can.

Actually, early on (before I grokked the concept of what a blogroll was really supposed to be), I started using the “Link This” bookmarklet to collect links whether they were to actual blogs or simply main pages. I wasn’t really using any Social Networking Service (SNS) at that point in time (though I had used some SNS several years prior) and I was thinking of these lists of people pretty much the same way many now conceive of SNS. Nowadays, I use Facebook as my main SNS (though I have accounts on other SNS, including MySpace). So this use of links/blogrolls has been superseded by actual SNS.

What has not been superseded, and may in fact be another factor in my PageRank, is the fact that I tend to keep links to much of the stuff I read. After looking at a wide variety of “social bookmarking systems,” I recently settled on Spurl (my Spurl RSS). And it’s not really that Spurl is my “favourite social bookmarking system evah.” But Spurl is the one system which best fits into (or least disrupts) my workflow right now. In fact, I keep thinking about “social bookmarking systems” and I have lots of ideas about the ideal one. I know I’ll be posting some of these ideas someday, but many of these ideas are a bit hard to describe in writing.

At any rate, my tendency to keep links to just about anything I read might contribute to my PageRank, since outgoing links are part of how PageRank flows through the link graph (even though a page’s own rank depends mainly on incoming links). On the other hand, the fact that I put my Spurl feed on my main page probably doesn’t have much of an impact on my PageRank since I started doing this a while after I started this blog and I’m pretty sure my PageRank remained the same. (I’m pretty sure Google search only looks at the actual blog entries, not the complete blog site. But you never know…)

Now, another tendency I have may also be a factor. I tend to link to my own blog entries. Yeah, I know, many bloggers see this as self-serving and lame. But I do it as a matter of convenience and “thought management.” It helps me situate some of my “streams of thought” and I like the idea of backtracking my blog entries. Actually, it’s all part of a series of habits I picked up after I started blogging, 2.5 years ago. And since I basically blog for fun, I don’t really care if people think my habits are lame.

Sheesh! All this for a silly integer about which I tend not to think. But I do enjoy thinking about what brings people to specific blogs. I don’t see blog statistics on any of my other blogs and I get few enough comments or trackbacks to not get much data on other factors. So it’s not like I can use my blogs as a basis for a quantitative study of “blog influence” or “search engine relevance.”

One dimension which would be interesting to explore, in relation to PageRank, is the network of citations in academic texts. We all know that Brin and Page got their PageRank idea from the academic world, and the academic world is currently looking at PageRank-like measures of “citation impact” (“CitationRank” would be a cool name). I tend to care very little about the quantitative evaluation of even “citation impact” in academia, but I really am intrigued by the network analysis of citations between academic references. One fun thing is that there seems to be a high clustering coefficient among academic papers in some research fields. In some cases, the coefficient itself could reveal something interesting, but the very concept of “academic small worlds” may be important to consider. Especially since these “worlds” might solidify into apparently coherent (and consistent) worldviews.
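
For the curious, here’s a minimal sketch of that kind of analysis in Python, using the networkx library on a tiny invented citation graph: a PageRank-style score for each paper, plus the clustering coefficient that hints at those “academic small worlds.”

```python
# Minimal sketch of a PageRank-style "citation impact" score and of the
# clustering coefficient, on a tiny invented citation graph (networkx).

import networkx as nx

G = nx.DiGraph()
# An edge A -> B means "paper A cites paper B".
G.add_edges_from([
    ("paper_A", "paper_B"), ("paper_A", "paper_C"),
    ("paper_B", "paper_C"), ("paper_D", "paper_C"),
    ("paper_D", "paper_B"), ("paper_E", "paper_A"),
])

ranks = nx.pagerank(G)  # PageRank-like score per paper
clustering = nx.average_clustering(G.to_undirected())  # small-world hint

for paper, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")
print("average clustering coefficient:", round(clustering, 3))
```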

Groupthink, anyone? 😉

Android "Sales Pitch" and "Drift-Off"

(Google’s Android is an open software platform to be put on cellphones next year.)

There’s something to this video. Something similar to Steve Jobs’s alleged “Reality Distortion Field,” but possibly less connected to presentation skills or perceived charisma. Though Mike seems to be a more experienced presenter than those we see in other videos about Android, and though the presentation format is much slicker than in those other videos, there’s something special about this video, to me.

For one thing, the content of the three “Androidology” videos is easy to understand, even for a non-developer/non-coder. Sure, you need to know some basic terms. But the broad concepts are easy to grasp, at least for those who have been observing the field of technology. One interesting thing about this is that these “Androidology” videos are explicitly meant for software programmers. The “you” in this context specifically refers to would-be developers of Android applications. At the same time, these videos do a better job, IMHO, of “selling Android to tech gurus” than other Android-related videos published by Google.

Now, I do find this specific video quite interesting, and my interest has to do with a specific meaning of “sales pitch.”

I keep going back to a Wired article about the “drift-off moment” during sales pitches (or demos):

When Mann gives a demo, what he’s waiting for is what salespeople call “the drift-off moment.” The client’s eyes get gooey, and they’re staring into space. They’re not bored – they’re imagining what they could do with SurveyBuilder. All tech salespeople mention this – they’ve succeeded not when they rivet the client’s attention, but when they lose it.

I apply this to teaching when I can and I specifically talked about this during a presentation about online tools for teaching.

This video on four of Android’s APIs had this effect on me. Despite not being a developer myself, I started imagining what people could do with Android. It was just a few brief moments. But very effective.

The four APIs discussed in this video are (in presentation order):

  1. Location Manager
  2. XMPP Service
  3. Notification Manager
  4. View System (including MapView and WebView)

Mike’s concise (!) explanations of all of these are quite straightforward (though I was still unclear on XMPP and on details of the three other APIs after watching the video). Yet something “clicked” in my mind while watching. Sure, it might just be serendipitous. But there’s something about these APIs, or about the way they are described, which makes me daydream.

Which is exactly what the “drift-off moment” is all about.


[youtube=http://www.youtube.com/watch?v=MPukbH6D-lY&feature=PlayList&p=D7C64411AF40DEA5&index=2]

How Can Google Beat Facebook?

It might not be so hard:

As I see it, the biggest shortcoming of social-networking sites is their inability to play well with others. Between MySpace, Facebook, LinkedIn, Tribe, Pownce, and the numerous also-rans, it seems as if maintaining an active presence at all of these sites could erode into becoming a full-time job. If Google can somehow create a means for all of these services to work together, and seamlessly interact with the Google family, then perhaps this is the killer app that people don’t even realize they’ve been waiting for.

Google gives social networking another go | Media Sphere – Josh Wolf blogs about the new information age – CNET Blogs

Some might take issue with Wolf’s presumption. Many of us had realised, back in 1997, that the “killer app” for social networking services is for them to work together. But the point is incredibly important and needs to be made again and again.

Social Networking Services work when people connect through them. The most intricate “network effect” you can think of. For connections to work, existing and potential social relationships need to be represented in the SNS as easily as possible. What’s more, the effort and time invested in building one’s network relate quite directly to the prospective life of an SNS. Faced with the eventuality of losing all connections in a snap because everybody has gone to “the next thing,” the typical SNS user is wary. Given the impression that SNS links can survive the jump to “the next one” (say, via a simple “import” function), the typical SNS user is likely to use the SNS to its fullest potential. This is probably one of several reasons for the success of Facebook. And Google can certainly put something together which benefits from this principle.
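
To make the “import” idea concrete, here’s a small Python sketch of what carrying one’s connections from one SNS to another could look like. The export format, the matching rule, and the function name are all hypothetical.

```python
# Hypothetical sketch: carry a contact list from one SNS to the next.
# The export format and the matching rule are assumptions for illustration.

old_sns_contacts = [
    {"name": "Alice Tremblay", "email": "alice@example.org"},
    {"name": "Bob Nguyen", "email": "bob@example.net"},
]

new_sns_members = {"alice@example.org": "user_1017"}  # email -> account id

def import_contacts(contacts, members):
    """Link contacts already on the new service; invite the rest."""
    linked, to_invite = [], []
    for c in contacts:
        if c["email"] in members:
            linked.append((c["name"], members[c["email"]]))
        else:
            to_invite.append(c["email"])
    return linked, to_invite

linked, to_invite = import_contacts(old_sns_contacts, new_sns_members)
print("reconnected:", linked)
print("invitations to send:", to_invite)
```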

Yeah, yeah, Wolf was referring more specifically to the “synchronisation” of activities on different SNS or SNS-like systems. That’s an important aspect of the overall “SNS interoperability” issue. Especially if SNS are important parts of people’s lives. But I prefer to think about the whole picture.

Another thing which has been mentioned is the connection Google could make between SNS and its other tools. One approach would be to build more “social networking features” (beyond sharing) into its existing services. The other could be to integrate Google tools into SNS (say, top-notch Facebook applications). Taken together, these two approaches would greatly benefit both Google and the field of social networking in general.

All in all, what I could easily see would be a way for me to bring all my SNS “content” to a Google SNS, including existing links. From a Google SNS, I would be able to use different “social-enabled” tools from Google like the new Gmail, an improved version of Google Documents, and the Blogger blogging platform. Eventually, most of my online activities would be facilitated by Google but I would still be able to use non-Google tools as I wish.

There are a few tools I’m already thinking about which could make sense in this “Google-enabled social platform.” For one, the “ultimate social bookmarking tool” for which I’ve been building feature wishlists. Then, there’s the obvious need for diverse applications which can use a centralised online storage system. Or the browser integration possible with something like, I don’t know, the Google toolbar… 😉

Given my interest in educational technology, I can’t help but think about online systems for course management (like Moodle and Sakai). Probably too specific, but Google could do a wonderful job at it.

Many people are certainly thinking about advertisement, revenue-sharing, p2p for media files, and other Google-friendly concepts. These aren’t that important for me.

I can’t say that I have a very clear image of what Google’s involvement in the “social networking sphere” will look like. But I can easily start listing Google products and features which are desperately calling for integration in a social context: Scholar, Web History, Docs, Reader, Browser Sync, Gcal, Gmail, Notebook, News, Mobile, YouTube, Ride Finder, Blog Comments, Music Trends, University Search, MeasureMap, Groups, Alerts, Bookmarks…

Sometimes, I really wonder why a company like Google can’t “get its act together” in making everything it does fit in a simple platform. They have the experts, the money, the users. They just need to make it happen.

Ah, well…