Category Archives: researchers

Transparency and Secrecy

[Started working on this post on December 1st, based on something which happened a few days prior. Since then, several things happened which also connected to this post. Thought the timing was right to revisit the entry and finally publish it. Especially since a friend just teased me for not blogging in a while.]

I’m such a strong advocate of transparency that I have a real problem with secrecy.

I know, transparency is not exactly the mirror opposite of secrecy. But I think my transparency-radical perspective causes some problems in terms of secrecy management.

“Haven’t you been working with a secret society in Mali?” you ask. Well, yes, I have. And secrecy hasn’t been a problem in that context because it’s codified. Instead of a notion of “absolute secrecy,” the Malian donsow I’ve been working with have a subtle, nuanced, complex, layered, contextually realistic, elaborate, and fascinating perspective on how knowledge is processed, “transmitted,” and managed. In fact, my dissertation research had a lot to do with this form of knowledge management. The term “knowledge people” (“karamoko,” from kalan+mogo=learning+people) truly applies to members of hunters’ associations in Mali as well as to other local experts. These people draw a clear distinction between knowledge and information. And I can readily relate to their approach. Maybe I’ve “gone native,” but it’s more likely that I was already in that mode before I ever went to Mali (almost 11 years ago).

Of course, a high value for transparency is a hallmark of academia. The notion that “information wants to be free” makes more sense from an academic perspective than from one focused on a currency-based economy. Even when people are clear that “free” stands for “freedom”/«libre» and not for “gratis”/«gratuit» (i.e. “free as in speech, not free as in beer”), there persists a notion that “free comes at a cost” among those people who are so focused on growth and profit. IMHO, most of the issues with the switch to “immaterial economies” (“information economy,” “attention economy,” “digital economy”) have to do with this clash between the value of knowledge and a strict sense of “property value.”

But I digress.

Or, do I…?

The phrase “radical transparency” has been used in business circles related to “information and communication technology,” a context in which the “information wants to be free” stance is almost the basis of a movement.

I’m probably more naïve than most people I have met in Mali. While there, a friend told me that he thought that people from the United States were naïve. While he wasn’t referring to me, I can easily acknowledge that the naïveté he described is probably characteristic of my own attitude. I’m North American enough to accept this.

My dedication to transparency was tested by an apparently banal set of circumstances, a few days before I drafted this post. I was given, in public, information which could potentially be harmful if revealed to a certain person. The harm which could be done is relatively small. The person who gave me that information wasn’t overstating it. The effects of my sharing this information wouldn’t be tragic. But I was torn between my radical transparency stance and my desire to do as little harm as humanly possible. So I refrained from sharing this information and decided to write this post instead.

And this post has been sitting in my “draft box” for a while. I wrote a good number of entries in the meantime but I still had this one at the back of my mind. On the backburner. This is where social media becomes something more of a way of life than an activity. Even when I don’t do anything on this blog, I think about it quite a bit.

As mentioned in the preamble, a number of things have happened since I drafted this post which also relate to transparency and secrecy. Including both professional and personal occurrences. Some of these comfort me in my radical transparency position while others help me manage secrecy in a thoughtful way.

On the professional front, first. I’ve recently signed a freelance ethnography contract with Toronto-based consultancy firm Idea Couture. The contract included a non-disclosure agreement (NDA). Even before signing the contract/NDA, I was asking fellow ethnographer and blogger Morgan Gerard about disclosure. Thanks to him, I now know that I can already disclose several things about this contract and that, once the results are public, I’ll be able to talk about this freely. Which all comforts me on a very deep level. This is precisely the kind of information and knowledge management I can relate to. The level of secrecy is easily understandable (inopportune disclosure could be detrimental to the client). My commitment to transparency is unwavering. If all contracts are like this, I’ll be quite happy to be a freelance ethnographer. It may not be my only job (I already know that I’ll be teaching online, again). But it already fits in my personal approach to information, knowledge, insight.

I’ll surely blog about private-sector ethnography. At this point, I’ve mostly been preparing through reading material in the field and discussing things with friends or colleagues. I was probably even more careful than I needed to be, but I was still able to exchange ideas about market research ethnography with people in diverse fields. I sincerely think that these exchanges not only add value to my current work for Idea Couture but position me quite well for the future. I really am preparing for freelance ethnography. I’m already thinking like a freelance ethnographer.

There’s a surprising degree of “cohesiveness” in my life, these days. Or, at least, I perceive my life as “making sense.”

And different things have made me say that 2009 would be my year. I get additional evidence of this on a regular basis.

Which brings me to personal issues, still about transparency and secrecy.

Something has happened in my personal life, recently, that I’m currently unable to share. It’s a happy circumstance and I’ll be sharing it later, but it’s semi-secret for now.

Thing is, though, transparency was involved in that my dedication to radical transparency has already been paying off in these personal respects. More specifically, my being transparent has been valued rather highly and there’s something about this type of validation which touches me deeply.

As can probably be noticed, I’m also becoming more public about some emotional dimensions of my life. As an artist and a humanist, I’ve always been a sensitive person, in tune with his emotions. Especially positive ones. I now feel accepted as a sensitive person, even if several people in my life tend to push sensitivity to the side. In other words, I’ve grown a lot in the past several months and I now want to share my growth with others. Despite reluctance toward the “touchy-feely,” especially in geek and other male-centric circles, I’ve decided to “let it all loose.” I fully respect those who dislike this. But I need to be myself.

Quest for Expertise

Will at Work Learning: People remember 10%, 20%…Oh Really?

This post was mentioned on the mailing-list for the Society for Teaching and Learning in Higher Education (STLHE-L).

In that post, Will Thalheimer traces back a well-known claim about learning to shoddy citations. While it doesn’t invalidate the base claim (that people tend to retain more information through certain cognitive processes), Thalheimer does a good job of showing how a graph which has frequently been seen in educational fields was based on faulty interpretation of work by prominent scholars, mixed with some results from other sources.

Quite interesting. IMHO, demystification and critical thinking are among the most important things we can do in academia. In fact, through training in folkloristics, I have become quite accustomed to this specific type of debunking.

I have in mind a somewhat similar claim that I’m currently trying to trace. Preliminary searches seem to imply that citations of original statements have a similar hyperbolic effect on the status of this claim.

The claim is a type of “rule of thumb” in cognitive science. A generic version could be stated in the following way:

It takes ten years or 10,000 hours to become an expert in any field.
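As a quick back-of-the-envelope check (my own arithmetic, not taken from any of the sources I cite), the two numbers in that statement are roughly consistent with one another:

```python
# How much regular practice does "10,000 hours over ten years" imply?
hours = 10_000
years = 10

hours_per_week = hours / (years * 52)   # about 19.2 hours a week
hours_per_day = hours / (years * 365)   # about 2.7 hours a day

print(round(hours_per_week, 1))
print(round(hours_per_day, 1))
```

In other words, the two formulations of the claim agree if one assumes roughly three hours of practice a day, every single day, for a decade.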

The claim is a rather famous one in cognitive science. I’ve heard it uttered by colleagues with a background in the field. In 2006, I first heard about such a claim from Philip E. Ross, on an episode of Scientific American’s Science Talk podcast discussing his article on expertise. I later read a similar claim in Daniel Levitin’s 2006 This Is Your Brain On Music. The clearest statement I could find in Levitin’s book is the following (p. 193):

The emerging picture from such studies is that ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert – in anything.

More recently, during a keynote speech he was giving as part of his latest book tour, I heard a similar claim from presenter extraordinaire Malcolm Gladwell. AFAICT, this claim is at the centre of Gladwell’s recent book: Outliers: The Story of Success. In fact, it seems that Gladwell uses the same quote from Levitin, on page 40 of Outliers (I just found that out).

I would like to pinpoint the origin for the claim. Contrary to Thalheimer’s debunking, I don’t expect that my search will show that the claim is inaccurate. But I do suspect that the “rule of thumb” versions may be a bit misleading. I already notice that most people who set up such claims are doing so without direct reference to the primary literature. This latter comment isn’t damning: in informal contexts, constant referral to primary sources can be extremely cumbersome. But it could still be useful to clear up the issue. Who made this original claim?

I’ve tried a few things already but it’s not working so well. I’m collecting a lot of references, to both online and printed material. Apart from Levitin’s book and a few online comments, I haven’t yet read the material. Eventually, I’d probably like to find a good reference on the cognitive basis for expertise which puts this “rule of thumb” in context and provides more elaborate data on different things which can be done during that extensive “time on task” (including possible skill transfer).

But I should proceed somewhat methodically. This blogpost is but a preliminary step in this process.

Since Philip E. Ross is the first person on record I heard talk about this claim, a logical first step for me is to look through this SciAm article. Doing some text searches on the printable version of his piece, I find a few interesting things including the following (on page 4 of the standard version):

Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field.

Apart from the ten thousand (10,000) hours part of the claim, this is about as clear a statement as I’m looking for. The “Simon” in question is Herbert A. Simon, who did research on chess at the Department of Psychology at Carnegie-Mellon University with colleague William G. Chase. So I dig for diverse combinations of “Herbert Simon,” “ten(10)-year rule,” “William Chase,” “expert(ise),” and/or “chess.” I eventually find two primary texts by those two authors, both from 1973: (Chase and Simon, 1973a) and (Chase and Simon, 1973b).

The first (1973a) is an article from Cognitive Psychology 4(1): 55-81, available for download on ScienceDirect (toll access). Through text searches for obvious words like “hour*,” “year*,” “time,” or even “ten,” it seems that this article doesn’t include any specific statement about the amount of time required to become an expert. The quote which appears to be the most relevant is the following:

Behind this perceptual analysis, as with all skills (cf., Fitts & Posner, 1967), lies an extensive cognitive apparatus amassed through years of constant practice.

While it does relate to the notion that there’s a cognitive basis to practice, the statement is generic enough to be far from the “rule of thumb.”

The second Chase and Simon reference (1973b) is a chapter entitled “The Mind’s Eye in Chess” (pp. 215-281) in the proceedings of the Eighth Carnegie Symposium on Cognition as edited by William Chase and published by Academic Press under the title Visual Information Processing. I borrowed a copy of those proceedings from Concordia and have been scanning that chapter visually for some statements about the “time on task.” Though that symposium occurred in 1972 (before the first Chase and Simon reference was published), the proceedings were apparently published after the issue of Cognitive Psychology since the authors mention that article for background information.

I do find some interesting quotes, but nothing that specific:

By a rough estimate, the amount of time each player has spent playing chess, studying chess, and otherwise staring at chess positions is perhaps 10,000 to 50,000 hours for the Master; 1,000 to 5,000 hours for the Class A player; and less than 100 hours for the beginner. (Chase and Simon 1973b: 219)

or:

The organization of the Master’s elaborate repertoire of information takes thousands of hours to build up, and the same is true of any skilled task (e.g., football, music). That is why practice is the major independent variable in the acquisition of skill. (Chase and Simon 1973b: 279, emphasis in the original, last sentences in the text)

Maybe I haven’t scanned these texts properly, but the quotes I do find seem to imply that Simon hadn’t really devised his “10-year rule” in a clear, numeric version.

I could probably dig for more Herbert Simon wisdom. Before looking (however cursorily) at those 1973 texts, I was using Herbert Simon as a key figure in the origin of that “rule of thumb.” To back up those statements, I should probably dig deeper in the Herbert Simon archives. But that might require more work than is necessary and it might be useful to dig through other sources.

In my personal case, the other main written source for this “rule of thumb” is Dan Levitin. So, using online versions of his book, I look for comments about expertise. (I do own a copy of the book and I’m assuming the Index contains page numbers for references on expertise. But online searches are more efficient and possibly more thorough on specific keywords.) That’s how I found the statement, quoted above. I’m sure it’s the one which was sticking in my head and, as I found out tonight, it’s the one Gladwell used in his first statement on expertise in Outliers.

So, where did Levitin get this? I could possibly ask him (we’ve been in touch and he happens to be local) but looking for those references might require work on his part. A preliminary step would be to look through Levitin’s published references for Your Brain On Music.

Though Levitin is a McGill professor, Your Brain On Music doesn’t follow the typical practice in English-speaking academia of ladling copious citations onto any claim, even the most truistic statements. Nothing strange in this difference in citation practice. After all, as Levitin explains in his Bibliographic Notes:

This book was written for the non-specialist and not for my colleagues, and so I have tried to simplify topics without oversimplifying them.

In this context, academic-style citation-fests would make the book too heavy. Levitin does, however, provide those “Bibliographic Notes” at the end of his book and on the website for the same book. In the Bibliographic Notes of that site, Levitin adds a statement I find quite interesting in my quest for “sources of claims”:

Because I wrote this book for the general reader, I want to emphasize that there are no new ideas presented in this book, no ideas that have not already been presented in scientific and scholarly journals as listed below.

So, it sounds like going through those references is a good strategy to locate at least solid references on that specific “10,000 hour” claim. Among relevant references on the cognitive basis of expertise (in Chapter 7), I notice the following texts which might include specific statements about the “time on task” to become an expert. (An advantage of the Web version of these bibliographic notes is that Levitin provides some comments on most references; I put Levitin’s comments in parentheses.)

  • Chi, Michelene T.H., Robert Glaser, and Marshall J. Farr, eds. 1988. The Nature of Expertise. Hillsdale, New Jersey: Lawrence Erlbaum Associates. (Psychological studies of expertise, including chess players)
  • Ericsson, K. A., and J. Smith, eds. 1991. Toward a General Theory of Expertise: prospects and limits. New York: Cambridge University Press. (Psychological studies of expertise, including chess players)
  • Hayes, J. R. 1985. Three problems in teaching general skills. In Thinking and Learning Skills: Research and Open Questions, edited by S. F. Chipman, J. W. Segal and R. Glaser. Hillsdale, NJ: Erlbaum. (Source for the study of Mozart’s early works not being highly regarded, and refutation that Mozart didn’t need 10,000 hours like everyone else to become an expert.)
  • Howe, M. J. A., J. W. Davidson, and J. A. Sloboda. 1998. Innate talents: Reality or myth? Behavioral & Brain Sciences 21 (3):399-442. (One of my favorite articles, although I don’t agree with everything in it; an overview of the “talent is a myth” viewpoint.)
  • Sloboda, J. A. 1991. Musical expertise. In Toward a general theory of expertise, edited by K. A. Ericcson (sic) and J. Smith. New York: Cambridge University Press. (Overview of issues and findings in musical expertise literature)

I have yet to read any of those references. I did borrow Ericsson and Smith when I first heard about Levitin’s approach to talent and expertise (probably through a radio and/or podcast appearance). But I had put the issue of expertise on the back-burner. It was always at the back of my mind and I did blog about it, back then. But it took Gladwell’s talk to wake me up. What’s funny, though, is that the “time on task” statements in (Ericsson and Smith, 1991) seem to lead back to (Chase and Simon, 1973b).

At this point, I get the impression that the “it takes a decade and/or 10,000 hours to become an expert” claim:

  • was originally proposed as a vague hypothesis a while ago (the year 1899 comes up);
  • became an object of some consideration by cognitive psychologists at the end of the 1960s;
  • became more widely accepted in the 1970s;
  • was tested by Benjamin Bloom and others in the 1980s;
  • was made more precise by Ericsson and others in the late 1980s;
  • gained general popularity in the mid-2000s;
  • is being further popularized by Malcolm Gladwell in late 2008.

Of course, I’ll have to do a fair bit of digging and reading to verify any of this, but it sounds like the broad timeline makes some sense. One thing, though, is that it doesn’t really seem that anybody had the intention of spelling it out as a “rule” or “law” in such a format as is being carried around. If I’m wrong, I’m especially surprised that a clear formulation isn’t easier to find.

As an aside, of sorts… Some people seem to associate the claim with Gladwell, at this point. Not very surprising, given the popularity of his books, the effectiveness of his public presentations, the current context of his book tour, and the reluctance of the general public to dig any deeper than the latest source.

The problem, though, is that it doesn’t seem that Gladwell himself has done anything to “set the record straight.” He does quote Levitin in Outliers, but I heard him reply to questions and comments as if the research behind the “ten years or ten thousand hours” claim had some association with him. From a popular author like Gladwell, it’s not that awkward. But these situations are perfect opportunities for popularizers like Gladwell to get a broader public interested in academia. As Gladwell allegedly cares about “educational success” (as measured on a linear scale), I would have expected more transparency.

Ah, well…

So, I have some work to do on all of this. It will have to wait but this placeholder might be helpful. In fact, I’ll use it to collect some links.


Some relevant blogposts of mine on talent, expertise, effort, and Levitin.

And a whole bunch of weblinks to help me in my future searches (I have yet to really delve in any of this).

Ethnographic Disciplines

Just because this might be useful in the future…
I perceive a number of academic disciplines to be “ethnographic” in the sense that they use the conceptual and epistemic apparatus of “ethnography.” (“Ethnography” taken here as an epistemological position in human research, not as “the description of a people” in either literary or methodological uses.)

I don’t mean by this that practitioners are all expected to undertake ethnographic field research or that their methods are exclusively ethnographic. I specifically wish to point out that ethnography is not an “exclusive prerogative” of anthropology. And I perceive important connections between these disciplines.

In no particular order:

  • Ethnohistory
  • Ethnolinguistics (partly associated with Linguistic Anthropology)
  • Folkloristics
  • Ethnomusicology
  • Ethnocinematography (partly associated with Visual Anthropology)
  • Ethnology (Cultural Anthropology)

The following disciplines (the “micros”), while not ethnographic per se, often have ethnographic components at the present time.

  • Microhistory
  • Microsociology
  • Microeconomics

Health research and market research also make frequent use of ethnographic methods, these days (especially through “qualitative data analysis” software). But I’m not clear on how dedicated these researchers are to the epistemological bases for ethnography.

It may all sound very idiosyncratic. But I still think it works, as a way to provide working definitions for disciplines and approaches.

Thoughts, comments, suggestions, questions?

The Issue Is Respect

As a creative generalist, I don’t tend to emphasize expert status too much, but I do see advantages in complementarity between people who act in different spheres of social life. As we say in French, «à chacun son métier et les vaches seront bien gardées» (“to each their own profession and cows will be well-kept”).

The diversity of skills, expertise, and interest is especially useful when people of different “walks of life” can collaborate with one another. Tolerance, collegiality, dialogue. When people share ideas, the potential is much greater if their ideas are in fact different. Very simple principle, which runs through anthropology as the study of human diversity (through language, time, biology, and culture).

The problem, though, is that people from different “fields” tend not to respect one another’s work. For instance, a life scientist and a social scientist often have a hard time understanding one another because they simply don’t respect their interlocutor’s discipline. They may respect each other as human beings but they share a distrust as to the very usefulness of the other person’s field.

Case in point: entomologist Paul R. Ehrlich, who spoke at the Seminar About Long Term Thinking (SALT) a few weeks ago.

The Long Now Blog » Blog Archive » Paul Ehrlich, “The Dominant Animal: Human Evolution and the Environment”

Ehrlich seems to have a high degree of expertise in population studies and, in that SALT talk, was able to make fairly interesting (though rather commonplace) statements about human beings. For instance, he explicitly addressed the tendency, in mainstream media, to perceive genetic determinism where it has no place. Similarly, his discussion about the origins and significance of human language was thoughtful enough that it could lead other life scientists to at least take a look at language.

What’s even more interesting is that Ehrlich realizes that social sciences can be extremely useful in solving the environmental issues which concern him the most. As we learn during the question period after this talk, Ehrlich is currently talking with some economists. And, unlike business professors, economists participate very directly in the broad field of social sciences.

All of this shows quite a bit of promise, IMVHAWISHIMVVVHO. But the problem has to do with respect, it seems.

Now, it might well be that Ehrlich esteems and respects his economist colleagues. Their methods may be sufficiently compatible with his that he actually “hears what they’re saying.” But he doesn’t seem to “extend this courtesy” to my own highly esteemed colleagues in ethnographic disciplines. Ehrlich simply doesn’t grok the very studies which he states could be the most useful for him.

There’s a very specific example during the talk but my point is broader. When that specific issue was revealed, I had already been noticing an interdisciplinary problem. And part of that problem was my own.

Ehrlich’s talk was fairly entertaining, although rather unsurprising in the typical “doom and gloom” exposé to which science and tech shows have accustomed us. Of course, it was fairly superficial, even on the points about which Ehrlich probably has the most expertise. But that’s expected of this kind of popularizing talk. Still, I started reacting quite negatively to several of his points when he began to make the kinds of statements which make any warm-blooded ethnographer cringe. No, not the fact that his concept of “culture” is so unsophisticated that it could prevent a student of his from getting a passing grade in an introductory course in cultural anthropology. Rather, all sorts of comments which clearly showed that his perspective on human diversity is severely restricted. Though he challenges some ideas about genetic determinism, Ehrlich still holds to a form of reductionism which social scientists would associate with scholars who died before Ehrlich was born.

So, my level of respect for Ehrlich started to fade with each of those half-baked pronouncements about cultural diversity and change.

Sad, I know. Especially since I respect every human being equally. But it doesn’t mean that I respect all statements equally. As is certainly the case for many other people, my respect for a person’s pronouncements may diminish greatly if those words demonstrate a lack of understanding of something in which I have a relatively high degree of expertise. In other words, a heart surgeon could potentially listen to a journalist talk about “cultural evolution” without blinking an eye but would likely lose “intellectual patience” if, in the same piece, the journalist starts to talk about heart diseases. And this impatience may retroactively carry over to the discussion about “cultural evolution.” As we tend to say in the ethnography of communication, context is the thing.

And this is where I have to catch myself. The fact that Ehrlich made statements about culture which made him appear clueless doesn’t mean that what he said about the connections between population and environment is also clueless. In fact, I didn’t perceive his points about ecology as misguided, for the very simple reason that we have been saying the same things in ethnographic disciplines. But that’s dangerous: selectively accepting statements because they reinforce what you already know. That’s not what academic work is supposed to be about.

In fact, there was something endearing about Ehrlich. He may not understand the study of culture and he doesn’t seem to have any training in the study of society, but at least he was trying to understand. There was even a point in his talk when he said something which would be so obvious to any social scientist that I could have gained a new kind of personal respect for Ehrlich’s openness, if it hadn’t been for his inappropriate statements about culture.

The saddest part is about dialogue. If a social scientist is to work with Ehrlich and she reacts the same way I did, dialogue probably won’t be established. And if Ehrlich’s attitude toward epistemological approaches different from his own is represented by the statements he made about ethnography, chances are that he will only respect those of my social science colleagues who share his own reductionist perspective.

It should be obvious that there’s an academic issue, here, in terms of inter-disciplinarity. But there’s also a personal issue. In my own life, I don’t want to restrict myself to conversations with people who think the same way I do.

“To Be Verified”: Trivia and Critical Thinking

A friend posted a link to the following list of factoids on his Facebook profile: Useless facts, Weird Information, humor. It contains intriguing statements about biology, language, inventions, etc.

Similar lists abound, often containing the same tidbits.

Several neat pieces of trivial information. Not exactly “useless.” But gratuitous and irrelevant. The type of thing you may wish to plug in a conversation. Especially at the proverbial “cocktail party.” This is, after all, an appropriate context for the attention economy. But these lists are also useful as preparation for game shows and barroom competitions. The stuff of erudition.

One of my first reflexes, when I see such lists of trivia online, is to look for ways to evaluate their accuracy. This is partly due to my training in folkloristics, as “netlore” is a prolific medium for verbal folklore (folk beliefs, rumors, urban legends, myths, and jokes). My reflex is also, I think, a common reaction among academics. After all, the detective work of critical thinking is pretty much our “bread and butter.” Sure, we can become bothersome with this. “Don’t be a bore, it’s just trivia.” But many of us may react from a fear of such “trivial” thinking preventing more careful consideration.

An obvious place to start verifying these tidbits is Snopes. In fact, they do debunk several of the statements made in those lists. For instance, the one about an alleged Donald Duck “ban” in Finland found in the list my friend shared through Facebook. Unfortunately, however, many factoids are absent from Snopes, despite that site’s extensive database.

These specific trivia lists are quite interesting. They include some statements which are easy to verify. For instance, the product of two numbers. (However, many calculators are insufficiently precise for the specific example used in those factoid lists.) The ease with which one can verify the accuracy of some statements brings an air of legitimacy to the list in which those easily verified statements are included. The apparent truth-value of those statements is such that a complete list can be perceived as being on unshakable foundations. For full effectiveness, the easily verified statements should not be common knowledge. “Did you know? Two plus two equals four.”
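That precision issue is easy to demonstrate. The example below is my own choice (the lists don’t all use the same product), but the principle holds: arbitrary-precision integers give the exact result, while double-precision floating point, much like a pocket calculator with a fixed number of digits, quietly rounds away the last digits.

```python
# Exact integer arithmetic versus limited-precision arithmetic.
a = 111_111_111                     # illustrative factor, my assumption
exact = a * a                       # Python ints: arbitrary precision
approx = int(float(a) * float(a))   # 64-bit floats: rounded result

print(exact)            # 12345678987654321
print(exact == approx)  # False: the float version lost precision
```

So the “easily verified” statement is only easily verified with a tool precise enough for the job; an ordinary calculator can actually get it wrong.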

Other statements appear to be based on hypotheses. The plausibility of such statements may be relatively difficult to assess for anyone not familiar with research in that specific field. For instance, the statement about typical life expectancy of currently living humans compared to individual longevity. At first sight, it does seem plausible that today’s extreme longevity would only benefit extremely few individuals in the future. Yet my guess is that those who do research on aging may rebut the statement that “Only one person in two billion will live to be 116 or older.” Because such statements require special training, their effect is a weaker version of the legitimizing effect of easily verifiable statements.
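Scale is part of what makes such claims hard to assess intuitively. A quick calculation (the world population figure is my own assumption, roughly accurate for 2008) shows what the claim actually implies:

```python
# What does "only one person in two billion" mean in absolute terms?
world_population = 6_700_000_000    # assumed figure, circa 2008
rate = 1 / 2_000_000_000            # the claimed longevity rate

implied_count = world_population * rate
print(round(implied_count, 2))      # roughly three people alive today
```

Whether three supercentenarians of that age in a world of nearly seven billion is plausible is precisely the kind of question that requires familiarity with aging research rather than arithmetic.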

Some of the most difficult statements to assess are the ones which contain quantifiers, especially those for uniqueness. There may, in fact, be “only one” fish which can blink with both eyes. And it seems possible that the English language may include only one word ending in “-mt” (or, to avoid pedantic disclaimers, “only one common word”). To verify these claims, one would need to have access to an exhaustive catalog of fish species or English words. While the dream of “the Web as encyclopedia” may hinge on such claims of exhaustivity, there is a type of “black swan effect” related to the common fallacy about lack of evidence being considered sufficient evidence of lack.
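Verifying the uniqueness claim about “-mt” would indeed require an exhaustive catalog, but the mechanics of the check are simple. The sketch below runs on a tiny sample of my own choosing; a real attempt would feed it a full dictionary file (such as /usr/share/dict/words on many Unix systems, an assumption about the available word list):

```python
def words_ending_in(suffix, words):
    """Return, sorted, every word in the iterable that ends with the suffix."""
    return sorted(w for w in words if w.endswith(suffix))

# Tiny illustrative sample; the claim can only be settled against an
# exhaustive word list, which this is emphatically not.
sample = ["dreamt", "dream", "attempt", "undreamt", "exempt"]
print(words_ending_in("mt", sample))  # ['dreamt', 'undreamt']
```

Even then, the “black swan” problem remains: a word list’s silence about a second “-mt” word is evidence of absence only to the extent that the list is truly exhaustive.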

I just noticed, while writing this post, a Google Answers page which not only evaluates the accuracy of several statements found in those trivia lists but also mentions ease of verifiability as a matter of interest. Critical thinking is active in many parts of the online world.

An obvious feature of those factoid lists, found online or in dead-tree print, is the lack of context. Even when those lists are concerned with a single topic (say, snails or sleep), they provide inadequate context for the information they contain. I’m using the term “context” rather loosely, as it covers both the text’s internal relationships (the “immediate context,” if you will) and its broader references to the world at large. Without going into the details of philosophy of language, I’ll simply say that these approaches to context clearly inform my perspective.

A typical academic, especially an English-speaking one, might put the context issue this way: “citation needed.” After all, the Wikipedia approach to truth is close to current academic practice (especially in English-speaking North America) with peer-review replacing audits. Even journalists are trained to cite sources, though they rarely help others apply critical thinking to those sources. In some ways, sources are conceived as the most efficient way to assess accuracy.

My own approach isn’t that far from the citation-happy one. Like most other academics, I’ve learned the value of an appropriate citation. Where I “beg to differ” is on the perceived “weight” of a citation as support. Through an awkward quirk of academic writing, some citation practices amount to a fallacious appeal to authority. I’m probably overreacting about this, but I’ve heard enough academics make statements equating citations with evidence that I tend to be wary of what I perceive to be excessive referencing. In fact, some of my most link-laden posts could be perceived as attempts to poke fun at citation-happy writing styles. One may even notice my extensive use of Wikipedia links. These are sometimes meant as inside jokes (to my own sorry self). Same thing with many of my blogging tags/categories, actually. Yes, blogging can be playful.

The broad concept is that, regardless of a source’s authority, critical thinking should be applied as much as possible. No more, no less.

Beer Stats in Canada

This is interesting. Was just looking for the latest figures on sales of alcoholic beverages in Canada and it turns out they were published yesterday.
Unfortunately, this report doesn’t break down the figures by beer type (regional, craft…). Another publication, including figures for domestic and imported beer sales, should come out shortly.
A couple of quotes:
As usual, beer was by far the most popular beverage. In terms of dollar value, beer captured 50.4% of sales. However, wine accounted for 25.2% of sales compared with 24.3% for spirits, the first time wine has jumped into second place.

From 1994/1995 to 2004/2005, sales of imported beer increased at an annual average rate of 18.6%, nearly six times the rate of growth of only 3.2% for sales of domestic brands.

Of all imported beer in Canada, 23.4% came from the United States, 20.5% from Mexico and 19.3% from the Netherlands.

(So, even import beer is mostly from large breweries…) 
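It’s worth pausing on what those annual rates compound to over the 1994/1995–2004/2005 decade quoted above. A quick back-of-the-envelope sketch (my own arithmetic, not Statistics Canada’s):

```python
# Compounding the quoted average annual growth rates over ten years.
years = 10
import_growth = 1.186 ** years    # 18.6% per year for imported beer
domestic_growth = 1.032 ** years  # 3.2% per year for domestic brands

print(round(import_growth, 1))    # 5.5  -> imports roughly 5.5x over the decade
print(round(domestic_growth, 2))  # 1.37 -> domestic sales up about 37%
```

So the “nearly six times” gap in annual rates translates into imported-beer sales multiplying about five and a half times while domestic sales grew by roughly a third.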

A few quick observations.

  • Quebec is the only province with a loss in “net income of provincial and territorial liquor authorities and revenue from the sale of alcoholic beverages” between 2k4 and 2k5. (Because of the SAQ strike.)
  • The only places where beer accounts for less than 50% of total sales of alcoholic beverages are Manitoba (46%), Alberta (47%), British Columbia (44%), and the Northwest Territories (49%).
  • Quebec is the province with the lowest percentage of spirits sales (11% of the total sales of alcoholic beverages).
  • These proportions are quite similar for 2k4.

Aren’t beer statistics cool?


Beer Comments by a Wine Expert: Redux

CBC’s Home Run did the second part of their “crash course” on beer, with their “wine expert.” For some obscure reason, they used a wit and a lambic as the main examples for ales. Comments made during the show had more to do with personal experiences of enjoying non-wine alcohol and getting drunk than with actual qualities of fermented beverages made with grain. We have a long way to go.

See my previous blog post (on the first part of that show’s “crash course,” talking about lagers). Here are my comments about this week’s discussion of ales:

This installment of Bélanger’s beer “crash course” is somewhat more appropriate than the previous one (although lambics are usually not considered ales, as S. cerevisiae isn’t necessarily their main fermentation agent). You might still consider getting help from one of several beer writers in Montreal. Some of them write in the local beer publications mentioned in my previous message, which has been reproduced here.

Not to be flip but, in Quebec, asking a wine expert to talk about beer is like asking a rugby expert to explain hockey. In Quebec, beer is more than a simple summer beverage, and the craft beer industry across the globe is taking an interest in Quebec’s beer people. It would make sense to dedicate a short segment of your show to quality beers in Quebec, especially if you bring in one of the numerous beer writers in Montreal.