Tag Archives: scholarship

Academics and Their Publics

Misunderstood by Raffi Asdourian

Academics are misunderstood.

Almost by definition.

Pretty much any academic eventually feels that s/he is misunderstood. Misunderstandings about core notions in just about any academic field are behind some of the most common pet peeves among academics.

In other words, there’s nothing as transdisciplinary as misunderstanding.

It can happen in the close proximity of a given department (“colleagues in my department misunderstand my work”). It can happen through disciplinary boundaries (“people in that field have always misunderstood our field”). And, it can happen generally: “Nobody gets us.”

It’s not paranoia and it’s probably not self-victimization. But there almost seems to be a form of “one-downmanship” at stake, with academics from different disciplines claiming that they’re more misunderstood than others. In fact, I personally get the feeling that ethnographers are among the most misunderstood people around, but even short discussions with friends in other fields (including mathematics) have helped me get the idea that, basically, we’re all misunderstood at the same “level”; there are simply variations in the ways we’re misunderstood. For instance, anthropologists in general are mistaken for what they aren’t, based on partial understanding by the general population.

An example from my own experience, related to my decision to call myself an “informal ethnographer.” When you tell people you’re an anthropologist, they form an image in their minds which is very likely to be inaccurate. But they do typically have an image in their minds. On the other hand, very few people have any idea about what “ethnography” means, so they’re less likely to form an opinion of what you do from prior knowledge. They may puzzle over the term and try to take a guess as to what “ethnographer” might mean but, in my experience, calling myself an “ethnographer” has been a more efficient way to be understood than calling myself an “anthropologist.”

This may all sound like nitpicking but, from the inside, it’s quite impactful. Linguists are frequently asked about the number of languages they speak. Mathematicians are taken to be number freaks. Psychologists are perceived through the filters of “pop psych.” There are many stereotypes associated with engineers. Etc.

These misunderstandings have an impact on anyone’s work. Not only can they be demoralizing and affect one’s sense of self-worth, but they can also influence funding decisions as well as the use of research results. These misunderstandings can undermine learning across disciplines. In survey courses, basic misunderstandings can make things very difficult for everyone. At a rather basic level, academics fight misunderstandings more than they fight ignorance.

The main reason I’m discussing this is that I’ve been given several occasions to think about the interface between the Ivory Tower and the rest of the world. It’s been a major theme in my blogposts about intellectuals, especially the ones in French. Two years ago, for instance, I wrote a post in French about popularizers. A bit more recently, I’ve been blogging about specific instances of misunderstandings associated with popularizers, including Malcolm Gladwell’s approach to expertise. Last year, I did a podcast episode about ethnography and the Ivory Tower. And, just within the past few weeks, I’ve been reading a few things which all seem to me to connect with this same issue: common misunderstandings about academic work. The connections are my own, and may not be so obvious to anyone else. But they’re part of my motivations to blog about this important issue.

In no particular order:

But, of course, I think about many other things. Including (again, in no particular order):

One discussion I remember, which seems to fit, included comments about Germaine Dieterlen by a friend who also did research in West Africa. I can’t remember the specifics, but the gist of my friend’s comment was that “you get to respect work by the likes of Germaine Dieterlen once you start doing field research in the region.” In my academic background, appreciation of Germaine Dieterlen’s work may not be unconditional, but it doesn’t necessarily rely on extensive work in the field. In other words, while some parts of Dieterlen’s work may be controversial and it’s extremely likely that she “got a lot of things wrong,” her work seems to be taken seriously by several French-speaking africanists I’ve met. And not only do I respect everyone but I would likely praise someone who was able to work in the field for so long. She’s not my heroine (I don’t really have heroes) or my role-model, but it wouldn’t have occurred to me that respect for her wasn’t widespread. If it had seemed that Dieterlen’s work had been misunderstood, my reflex would possibly have been to rehabilitate her.

In fact, there’s a strong academic tradition of rehabilitating deceased scholars. The first example which comes to mind is a series of articles (PDF, in French) and book chapters by UWO linguistic anthropologist Regna Darnell about “Benjamin Lee Whorf as a key figure in linguistic anthropology.” Of course, saying that these texts by Darnell constitute a rehabilitation of Whorf reveals a type of evaluation of her work. But that evaluation comes from a third person, not from me. The likely reason for this case coming to my mind is that the so-called “Sapir-Whorf Hypothesis” is among the most misunderstood notions from linguistic anthropology. Moreover, both Whorf and Sapir are frequently misunderstood, which can make matters difficult for many linguistic anthropologists talking with people outside the discipline.

The opposite process is also common: the “slaughtering” of “sacred cows.” (I first heard about sacred cows through an article by ethnomusicologist Marcia Herndon.) In some significant ways, any scholar (alive or not) can be the object not only of critiques and criticisms but of a kind of off-handed dismissal. Though this often happens within an academic context, the effects are especially lasting outside of academia. In other words, any scholar’s name is likely to be “sullied,” at one point or another. Typically, there seems to be a correlation between the popularity of a scholar and the likelihood of her/his reputation being significantly tarnished at some point in time. While there may still be people who treat Darwin, Freud, Nietzsche, Socrates, Einstein, or Rousseau as near divinities, there are people who will avoid any discussion of anything they’ve done or said. One way to put it is that they’re all misunderstood. Another way to put it is that their main insights have seeped into “common knowledge” while their individual reputations have declined.

Perhaps the most difficult case to discuss is that of Marx (Karl, not Harpo). Textbooks in introductory sociology typically present him as a key figure in the discipline and it seems clear that his insights on social issues were fundamental to the social sciences. But, outside of some key academic contexts, his name is associated with a long series of social events about which people tend to have rather negative reactions. Even more so than for Paul de Man or Martin Heidegger, Marx’s work is entangled in public opinion about his ideas. I haven’t checked for examples but I’m quite sure that Marx’s work is banned in a number of academic contexts. However, even some of Marx’s most ardent opponents are likely to agree with several aspects of his work and it’s sometimes funny how Marxian some anti-Marxists may be.

But I digress…

Typically, the “slaughtering of sacred cows” relates to disciplinary boundaries instead of social ones. At least, there’s a significant difference between your discipline’s own “sacred cows” and what you perceive another discipline’s “sacred cows” to be. Within a discipline, the process of dismissing a prior scholar’s work is almost œdipean (speaking of Freud). But dismissal of another discipline’s key figures is tantamount to a rejection of that other discipline. It’s one thing for a physicist to show that Newton was an alchemist. It’d be another thing entirely for a social scientist to deconstruct James Watson’s comments about race or for a theologian to argue with Darwin. Though discussions may have to do with individuals, the effects of the latter can widen gaps between scholarly disciplines.

And speaking of disciplinarity, there’s a whole set of issues having to do with discussions “outside of someone’s area of expertise.” On one side, comments made by academics about issues outside of their individual areas of expertise can be very tricky and can occasionally contribute to core misunderstandings. The fear of “talking through one’s hat” is quite significant, in no small part because a scholar’s prestige and esteem may greatly decrease as a result of some blatantly inaccurate statements (although some award-winning scholars seem not to be overly impacted by such issues).

On the other side, scholars who have to impart expert knowledge to people outside of their discipline often have to “water down” or “boil down” their ideas, in effect oversimplifying these issues and concepts. Partly because of status (prestige and esteem), lowering standards is also very tricky. In some ways, this second situation may be more interesting. And it seems unavoidable.

How can you prevent misunderstandings when people may not have the necessary background to understand what you’re saying?

This question may reveal a rather specific attitude: “it’s their fault if they don’t understand.” Such an attitude may even be widespread. It seems to me that it’s not rare to hear someone gloating about other people “getting it wrong,” with the suggestion that “we got it right.” As part of negotiations surrounding expert status, such an attitude could even be a pretty rational approach. If you’re trying to position yourself as an expert and don’t suffer from an “impostor syndrome,” you can easily get the impression that non-specialists have it all wrong and that only experts like you can get to the truth. Yes, I’m being somewhat sarcastic and caricatural, here. Academics aren’t frequently that dismissive of other people’s difficulties understanding what seem like simple concepts. But, in the gap between academics and the general population, a special type of intellectual snobbery can sometimes be found.

Obviously, I have a lot more to say about misunderstood academics. For instance, I wanted to address specific issues related to each of the links above. I also had pet peeves about widespread use of concepts and issues like “communities” and “Eskimo words for snow” about which I sometimes need to vent. And I originally wanted this post to be about “cultural awareness,” which ends up being a core aspect of my work. I even had what I might consider a “neat” bit about public opinion. Not to mention my whole discussion of academic obfuscation (remind me about “we-ness and distinction”).

But this is probably long enough and the timing is right for me to do something else.

I’ll end with an unverified anecdote that I like. This anecdote speaks to snobbery toward academics.

[It’s one of those anecdotes which was mentioned in a course I took a long time ago. Even if it’s completely apocryphal, it’s still inspiring, like a tale, cautionary or otherwise.]

As the story goes (at least, what I remember of it), some ethnographers had been doing fieldwork in an Australian cultural context and were focusing their research on a complex kinship system documented in that context. Through collaboration with “key informants,” the ethnographers eventually succeeded in understanding some key aspects of this kinship system.

As should be expected, these kinship-focused ethnographers wrote accounts of this kinship system at the end of their field research and became known as specialists of this system.

After a while, the fieldworkers went back to the field and met with the same people who had described this kinship system during the initial field trip. Through these discussions with their “key informants,” the ethnographers ended up hearing about a radically different kinship system from the one about which they had learnt, written, and taught.

The local informants then told the ethnographers: “We would have told you earlier about this but we didn’t think you were able to understand it.”

Answers on Expertise

As a follow-up on my previous post…

Quest for Expertise « Disparate.

(I was looking for the origin of the “10 years or 10,000 hours to be an expert” claim.)

Interestingly enough, that post is getting a bit of blog attention.

I’m so grateful for this attention that it made me tweet the following:

Trackbacks, pings, and blog comments are blogger gifts.

I also posted a question about this on Mahalo Answers (after the first comment, by Alejna, appeared on my blog, but before other comments and trackbacks appeared). I selected glaspell’s answer as the best answer (glaspell also commented on my blog entry).

At this point, my impression is that what is taken as a “rule” on expertise is a simplification of results from a larger body of research, with an emphasis on work by K. Anders Ericsson but with little attention paid to primary sources.

The whole process is quite satisfying, to me. Not just because we might all gain a better understanding of how this “claim” became so generalized, but because the process as a whole shows both the powers and the limitations of the Internet. I tend to claim (publicly) that the ‘Net favours critical thinking (because we eventually take all claims with a grain of salt). But it also seems that, even with well-known research done in English, it can be rather difficult to follow all the connections across the literature. If you think about more obscure work in non-dominant languages, it’s easy to realize that Google’s dream of organizing the world’s information isn’t yet a reality.

By the by, I do realize that my quest was based on a somewhat arbitrary assumption: that this “rule of thumb” is now understood as a solid rule. But what I’ve noticed in popular media since 2006 leads me to believe that the claim is indeed taken as a hard and fast rule.

I’m not blaming anyone, in this case. I don’t think that anyone’s involvement in the “chain of transmission” was voluntarily misleading and I don’t even think that it was that essential. As with many other ideas, what “sticks” is what seems to make sense in context. Actually, this strong tendency for “convenient” ideas to be more widely believed relates to a set of tricky issues with which academics have to deal on a daily basis. Sagan’s well-known “baloney detector” is useful, here. But it isn’t in very wide use.

One thing which should also be clear: I’m not saying that Ericsson and other researchers have done anything shoddy or inappropriate. Their work is being used outside of its original context, which is often an issue.

Mass media coverage of academic research was the basis of a series of entries on the original Language Log, including one of my favourite blogposts, Mark Liberman’s Language Log: Raising standards — by lowering them. The main point, I think, is that secluded academics in the Ivory Tower do little to alleviate this problem.

But I digress.
And I should probably reply to the other comments on the entry itself.

Quest for Expertise

Will at Work Learning: People remember 10%, 20%…Oh Really?.

This post was mentioned on the mailing-list for the Society for Teaching and Learning in Higher Education (STLHE-L).

In that post, Will Thalheimer traces back a well-known claim about learning to shoddy citations. While it doesn’t invalidate the base claim (that people tend to retain more information through certain cognitive processes), Thalheimer does a good job of showing how a graph which has frequently been seen in educational fields was based on faulty interpretation of work by prominent scholars, mixed with some results from other sources.

Quite interesting. IMHO, demystification and critical thinking are among the most important things we can do in academia. In fact, through training in folkloristics, I have become quite accustomed to this specific type of debunking.

I have in mind a somewhat similar claim that I’m currently trying to trace. Preliminary searches seem to imply that citations of original statements have a similar hyperbolic effect on the status of this claim.

The claim is a type of “rule of thumb” in cognitive science. A generic version could be stated in the following way:

It takes ten years or 10,000 hours to become an expert in any field.

The claim is a rather famous one from cognitive science. I’ve heard it uttered by colleagues with a background in cognitive science. In 2006, I first heard about such a claim from Philip E. Ross, on an episode of Scientific American‘s Science Talk podcast in which he discussed his article on expertise. I later read a similar claim in Daniel Levitin’s 2006 This Is Your Brain On Music. The clearest statement I could find in Levitin’s book is the following (p. 193):

The emerging picture from such studies is that ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert – in anything.

More recently, during a keynote speech he was giving as part of his latest book tour, I heard a similar claim from presenter extraordinaire Malcolm Gladwell. AFAICT, this claim is at the centre of Gladwell’s recent book, Outliers: The Story of Success. In fact, it seems that Gladwell uses the same quote from Levitin, on page 40 of Outliers (I just found that out).
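
As an aside, it may help to spell out why the “ten years” and “10,000 hours” figures tend to travel together. This is nothing more than my own back-of-the-envelope arithmetic, not something taken from any of the sources discussed here:

```python
# Rough equivalence between the "ten years" and "10,000 hours" versions
# of the claim (purely illustrative, my own arithmetic).
total_hours = 10_000
years = 10

hours_per_year = total_hours / years   # 1,000 hours a year
hours_per_week = hours_per_year / 50   # ~20 hours a week, over 50 weeks
hours_per_day = hours_per_year / 365   # ~2.7 hours a day, every single day

print(hours_per_year, hours_per_week, round(hours_per_day, 1))
# 1000.0 20.0 2.7
```

In other words, the two versions describe roughly the same régime: a few hours of sustained practice a day, kept up for about a decade.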

I would like to pinpoint the origin of the claim. Contrary to Thalheimer’s debunking, I don’t expect that my search will show that the claim is inaccurate. But I do suspect that the “rule of thumb” versions may be a bit misleading. I already notice that most people who put forward such claims are doing so without direct reference to the primary literature. This latter comment isn’t damning: in informal contexts, constant referral to primary sources can be extremely cumbersome. But it could still be useful to clear up the issue. Who made the original claim?

I’ve tried a few things already but it’s not working so well. I’m collecting a lot of references, to both online and printed material. Apart from Levitin’s book and a few online comments, I haven’t yet read the material. Eventually, I’d probably like to find a good reference on the cognitive basis for expertise which puts this “rule of thumb” in context and provides more elaborate data on different things which can be done during that extensive “time on task” (including possible skill transfer).

But I should proceed somewhat methodically. This blogpost is but a preliminary step in this process.

Since Philip E. Ross is the first person on record I heard talk about this claim, a logical first step for me is to look through this SciAm article. Doing some text searches on the printable version of his piece, I find a few interesting things including the following (on page 4 of the standard version):

Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field.

Apart from the ten thousand (10,000) hours part of the claim, this is about as clear a statement as I’m looking for. The “Simon” in question is Herbert A. Simon, who did research on chess at the Department of Psychology at Carnegie-Mellon University with colleague William G. Chase.  So I dig for diverse combinations of “Herbert Simon,” “ten(10)-year rule,” “William Chase,” “expert(ise),” and/or “chess.” I eventually find two primary texts by those two authors, both from 1973: (Chase and Simon, 1973a) and (Chase and Simon, 1973b).

The first (1973a) is an article from Cognitive Psychology 4(1): 55-81, available for download on ScienceDirect (toll access). Through text searches for obvious words like “hour*,” “year*,” “time,” or even “ten,” it seems that this article doesn’t include any specific statement about the amount of time required to become an expert. The quote which appears to be the most relevant is the following:

Behind this perceptual analysis, as with all skills (cf., Fitts & Posner, 1967), lies an extensive cognitive apparatus amassed through years of constant practice.

While it does relate to the notion that there’s a cognitive basis to practise, the statement is generic enough to be far from the “rule of thumb.”

The second Chase and Simon reference (1973b) is a chapter entitled “The Mind’s Eye in Chess” (pp. 215-281) in the proceedings of the Eighth Carnegie Symposium on Cognition as edited by William Chase and published by Academic Press under the title Visual Information Processing. I borrowed a copy of those proceedings from Concordia and have been scanning that chapter visually for some statements about the “time on task.” Though that symposium occurred in 1972 (before the first Chase and Simon reference was published), the proceedings were apparently published after the issue of Cognitive Psychology since the authors mention that article for background information.

I do find some interesting quotes, but nothing that specific:

By a rough estimate, the amount of time each player has spent playing chess, studying chess, and otherwise staring at chess positions is perhaps 10,000 to 50,000 hours for the Master; 1,000 to 5,000 hours for the Class A player; and less than 100 hours for the beginner. (Chase and Simon 1973b: 219)

or:

The organization of the Master’s elaborate repertoire of information takes thousands of hours to build up, and the same is true of any skilled task (e.g., football, music). That is why practice is the major independent variable in the acquisition of skill. (Chase and Simon 1973b: 279, emphasis in the original, last sentences in the text)

Maybe I haven’t scanned these texts properly, but the quotes I did find seem to imply that Simon hadn’t really devised his “10-year rule” in a clear, numeric version.

I could probably dig for more Herbert Simon wisdom. Before looking (however cursorily) at those 1973 texts, I was treating Herbert Simon as a key figure in the origin of that “rule of thumb.” To back up those statements, I should probably dig deeper into the Herbert Simon archives. But that might require more work than is necessary and it might be useful to dig through other sources.

In my personal case, the other main written source for this “rule of thumb” is Dan Levitin. So, using online versions of his book, I look for comments about expertise. (I do own a copy of the book and I’m assuming the Index contains page numbers for references on expertise. But online searches are more efficient and possibly more thorough on specific keywords.) That’s how I found the statement, quoted above. I’m sure it’s the one which was sticking in my head and, as I found out tonight, it’s the one Gladwell used in his first statement on expertise in Outliers.

So, where did Levitin get this? I could possibly ask him (we’ve been in touch and he happens to be local) but looking for those references might require work on his part. A preliminary step would be to look through Levitin’s published references for Your Brain On Music.

Though Levitin is a McGill professor, Your Brain On Music doesn’t follow the typical practise in English-speaking academia of ladling copious citations onto any claim, even the most truistic statements. Nothing strange in this difference in citation practise.  After all, as Levitin explains in his Bibliographic Notes:

This book was written for the non-specialist and not for my colleagues, and so I have tried to simplify topics without oversimplifying them.

In this context, academic-style citation-fests would make the book too heavy. Levitin does, however, provide those “Bibliographic Notes” at the end of his book and on the website for the same book. In the Bibliographic Notes of that site, Levitin adds a statement I find quite interesting in my quest for “sources of claims”:

Because I wrote this book for the general reader, I want to emphasize that there are no new ideas presented in this book, no ideas that have not already been presented in scientific and scholarly journals as listed below.

So, it sounds like going through those references is a good strategy for locating solid references on that specific “10,000 hour” claim. Among relevant references on the cognitive basis of expertise (in Chapter 7), I notice the following texts which might include specific statements about the “time on task” needed to become an expert. (An advantage of the Web version of these bibliographic notes is that Levitin provides some comments on most references; I put Levitin’s comments in parentheses.)

  • Chi, Michelene T.H., Robert Glaser, and Marshall J. Farr, eds. 1988. The Nature of Expertise. Hillsdale, New Jersey: Lawrence Erlbaum Associates. (Psychological studies of expertise, including chess players)
  • Ericsson, K. A., and J. Smith, eds. 1991. Toward a General Theory of Expertise: prospects and limits. New York: Cambridge University Press. (Psychological studies of expertise, including chess players)
  • Hayes, J. R. 1985. Three problems in teaching general skills. In Thinking and Learning Skills: Research and Open Questions, edited by S. F. Chipman, J. W. Segal and R. Glaser. Hillsdale, NJ: Erlbaum. (Source for the study of Mozart’s early works not being highly regarded, and refutation that Mozart didn’t need 10,000 hours like everyone else to become an expert.)
  • Howe, M. J. A., J. W. Davidson, and J. A. Sloboda. 1998. Innate talents: Reality or myth? Behavioral & Brain Sciences 21 (3):399-442. (One of my favorite articles, although I don’t agree with everything in it; an overview of the “talent is a myth” viewpoint.)
  • Sloboda, J. A. 1991. Musical expertise. In Toward a general theory of expertise, edited by K. A. Ericcson (sic) and J. Smith. New York: Cambridge University Press. (Overview of issues and findings in musical expertise literature)

I have yet to read any of those references. I did borrow Ericsson and Smith when I first heard about Levitin’s approach to talent and expertise (probably through a radio and/or podcast appearance). But I had put the issue of expertise on the back-burner. It was always at the back of my mind and I did blog about it, back then. But it took Gladwell’s talk to wake me up. What’s funny, though, is that the “time on task” statements in (Ericsson and Smith,  1991) seem to lead back to (Chase and Simon, 1973b).

At this point, I get the impression that the “it takes a decade and/or 10,000 hours to become an expert” claim:

  • was originally proposed as a vague hypothesis a while ago (the year 1899 comes up);
  • became an object of some consideration by cognitive psychologists at the end of the 1960s;
  • became more widely accepted in the 1970s;
  • was tested by Benjamin Bloom and others in the 1980s;
  • was made more precise by Ericsson and others in the late 1980s;
  • gained general popularity in the mid-2000s;
  • is being further popularized by Malcolm Gladwell in late 2008.

Of course, I’ll have to do a fair bit of digging and reading to verify any of this, but it sounds like the broad timeline makes some sense. One thing, though, is that it doesn’t really seem that anybody had the intention of spelling it out as a “rule” or “law” in such a format as is being carried around. If I’m wrong, I’m especially surprised that a clear formulation isn’t easier to find.

As an aside, of sorts… Some people seem to associate the claim with Gladwell, at this point. Not very surprising, given the popularity of his books, the effectiveness of his public presentations, the current context of his book tour, and the reluctance of the general public to dig any deeper than the latest source.

The problem, though, is that it doesn’t seem that Gladwell himself has done anything to “set the record straight.” He does quote Levitin in Outliers, but I heard him reply to questions and comments as if the research behind the “ten years or ten thousand hours” claim had some association with him. From a popular author like Gladwell, it’s not that awkward. But these situations are perfect opportunities for popularizers like Gladwell to get a broader public interested in academia. As Gladwell allegedly cares about “educational success” (as measured on a linear scale), I would have expected more transparency.

Ah, well…

So, I have some work to do on all of this. It will have to wait but this placeholder might be helpful. In fact, I’ll use it to collect some links.

 

Some relevant blogposts of mine on talent, expertise, effort, and Levitin.

And a whole bunch of weblinks to help me in my future searches (I have yet to really delve into any of this).

Blogging Academe

LibriVox founder and Montreal geek Hugh McGuire recently posted a blog entry in which he gave a series of nine arguments for academics to blog:

Why Academics Should Blog

Hugh’s post reminded me of one of my favourite blogposts by an academic, a pointed defence of blogging by Mark Liberman, of Language Log fame.
Raising standards — by lowering them

While I do agree with Hugh’s points, I would like to reframe and rephrase them.

Clearly, I’m enthusiastic about blogging. Not that I think every academic should, needs to, ought to blog. But I do see clear benefits of blogging in academic contexts.

Academics do a number of different things, from search committees to academic advising. Here, I focus on three main dimensions of an academic’s life: research, teaching, and community outreach. Other items in a professor’s job description may benefit from blogging but these three main components tend to be rather prominent in terms of PTR (promotion, tenure, reappointment). What’s more, blogging can help integrate these dimensions of academic life in a single set of activities.

Impact

In relation to scholarship, the term “impact” often refers to the measurable effects of a scholar’s publications within a specific field. “Citation impact,” for instance, refers to the number of times a given journal article has been cited by other scholars. This kind of measurement is directly linked to Google’s PageRank algorithm, which is used to assess the relevance of search results. The very concept of “citation impact” relates very directly to the “publish or perish” system which, I would argue, does more to increase stress levels among full-time academics than to enhance scholarship. As such, it may need some rethinking. What does “citation impact” really measure? Is the most frequently cited text on a given subject necessarily the most relevant? Isn’t there a clustering effect, with some small groups of well-known scholars citing one another without paying attention to whatever else may happen in their field, especially in other languages?
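
To make the analogy a bit more concrete, here is a minimal sketch of a PageRank-style score computed over a tiny, made-up citation graph. The paper names and numbers are invented for illustration; this is a toy version of the idea, not how any actual citation index is calculated.

```python
# Toy PageRank over a made-up citation graph: papers are nodes,
# citations are directed edges (paper_a cites paper_b and paper_c, etc.).
citations = {
    "paper_a": ["paper_b", "paper_c"],
    "paper_b": ["paper_c"],
    "paper_c": ["paper_a"],
    "paper_d": ["paper_c"],
}

def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for citing, cited in graph.items():
            if cited:
                share = rank[citing] / len(cited)
                for target in cited:
                    new_rank[target] += damping * share
        rank = new_rank
    return rank

# Papers cited by many (and by well-cited) papers float to the top.
for paper, score in sorted(pagerank(citations).items(), key=lambda kv: -kv[1]):
    print(paper, round(score, 3))
```

Even in this toy graph, the clustering worry shows up: a small set of papers citing one another keeps most of the score within the cluster, whatever happens elsewhere in the field.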

An advantage of blogging is that this type of impact is easy to monitor. Most blogging platforms have specific features for “statistics,” which let bloggers see which of their posts have been visited (“hit”) most frequently. More sophisticated analysis is available on some blogging platforms, especially on paid ones. These are meant to help bloggers monetize their blogs through advertising. But the same features can be quite useful to an academic who wants to see which blog entries seem to attract the most traffic.
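
For bloggers who want to go beyond the built-in dashboards, a rough version of the same per-post tally can be computed from an ordinary web-server access log. A minimal sketch, assuming a log in the common/combined format (the “access.log” path is a placeholder):

```python
# Tally hits per path from a web-server access log (common/combined format).
# "access.log" is a placeholder; point it at an actual log file.
import re
from collections import Counter

request_pattern = re.compile(r'"GET (\S+) HTTP')
hits = Counter()

with open("access.log") as log:
    for line in log:
        match = request_pattern.search(line)
        if match:
            hits[match.group(1)] += 1

# The ten most visited paths, most "hit" first.
for path, count in hits.most_common(10):
    print(count, path)
```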

Closer to “citation impact” is the fact that links to a given post are visible within that post through the ping and trackback systems. If another blogger links to this very blogpost, a link to that second blogger’s post will appear under mine. In other words, a blogpost can embed future references.
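
For the technically curious, the mechanics behind this are simple enough to sketch. In the Pingback variant, the linking blog calls a single XML-RPC method, pingback.ping, on the linked blog’s endpoint. A rough sketch in Python (the URLs are placeholders; in practice the endpoint is discovered through the target page’s X-Pingback header or a pingback link element rather than hard-coded):

```python
# Minimal sketch of sending a Pingback notification over XML-RPC.
# URLs below are placeholders, not real blogs.
import xmlrpc.client

source_uri = "https://example.org/2009/01/my-post-that-links-to-you/"
target_uri = "https://example.net/2008/12/the-post-i-linked-to/"

# Many blog platforms expose their XML-RPC endpoint at a path like /xmlrpc.php;
# the Pingback spec says to discover it from the target page itself.
endpoint = xmlrpc.client.ServerProxy("https://example.net/xmlrpc.php")
reply = endpoint.pingback.ping(source_uri, target_uri)
print(reply)  # a short confirmation string if the ping is accepted
```

Trackbacks work similarly but over a plain HTTP POST; either way, the end result is that the receiving post “knows” about texts that cite it after the fact.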

In terms of teaching, thinking about impact through blogging can also have interesting effects. If students are blogging, they can cite and link to diverse items and these connections can serve as a representation of the constructive character of learning. But even if students don’t blog, a teacher blogging course-related material can increase the visibility of that course. In some cases, this visibility may lead to inter-institutional collaboration or increased enrollment.

Transparency

While secrecy may be essential in some academic projects, most academics tend to adopt a favourable attitude toward transparency. Academia is about sharing information and spreading knowledge, not about protecting information or about limiting knowledge to a select few.

Bloggers typically value transparency.

There are several ethical issues which relate to transparency. Some ethical principles prevent transparency (for instance, most research projects involving “human subjects” require anonymity). But academic ethics typically go with increased transparency on the part of the researcher. For instance, informed consent by a “human subject” requires complete disclosure of how the data will be used and protected. There are usually requirements for the primary investigator to be reachable during the research project.

Transparency is also valuable in teaching. While some things should probably remain secret (say, answers to exam questions), easy access to a number of documents makes a lot of sense in learning contexts.

Public Intellectuals

It seems that the term “intellectual” gained currency as a label for individuals engaged in public debates. While public engagement has taken on a different significance over the years, the responsibility of intellectuals to communicate publicly is still a matter of interest.

Through blogging, anyone can engage in public debate, discourse, or dialogue.

Reciprocity

Scholars working with “human subjects” often think about reciprocity. While remuneration may be the primary form of compensation for participation in a research project, a broader concept of reciprocity is often at stake. Those who participated in the project usually have a “right to know” about the results of that study. Even when that isn’t the case and the results of the study remain secret, the asymmetry of human subjects revealing something about themselves to scholars who reveal nothing seems to clash with fundamental principles in contemporary academia.

Reciprocity in teaching can lead directly to some important constructivist principles. The roles of learners and teachers, while not completely interchangeable, are reciprocal. A teacher may learn and a learner may teach.

Playing with Concepts

Blogging makes it easy to try concepts out. More than “thinking out loud,” the type of blogging activity I’m thinking about can serve as a way to “put ideas on paper” (without actual paper) and eventually get feedback on those ideas.

In my experience, microblogging (Identi.ca, Twitter…) has been more efficient than extended blogging in terms of getting conceptual feedback. In fact, social networks (Facebook, more specifically) have been even more conducive to hashing out concepts.

Many academics do hash concepts out with students, especially with graduate students. The advantage is that students are likely to understand concepts quickly, as they already share some of the same references as the academic who is playing with those concepts. There’s already a context for mutual understanding. The disadvantage is that a classroom context is too narrow to really try out the implications of a concept.

A method I like to use is to coin fairly catchy phrases and leave the concepts fairly raw, at first. I then try the same concept in diverse contexts, on my blogs or off.

The main example I have in mind is the “social butterfly effect.” It may sound silly at first but I find it can be a basis for discussion, especially if it spreads a bit.

A subpoint, here, is that this method allows for “gauging interest” in new concepts and it can often lead one in completely new directions. By blogging about concepts, an academic can tell whether a concept has a chance to stick in a broad frame (outside the Ivory Tower) and may gain insight from outside disciplines.

Playing with Writing

This one probably applies more to “junior academics” (including students) but it can also work with established academics who enjoy diversifying their writing styles. Simply put: blogwriting is writing practise.

A common idea, in cognitive research on expertise, is that it takes about ten thousand hours to become an expert. For better or worse, academics are experts at writing. And we gain that expertise through practise. In this context, it’s easy to see blogging as a “writing exercise.” At least, that would be a perspective to which I can relate.

My impression is that writing skills are most efficiently acquired through practise. The type of practise I have in mind is “low-stakes,” in the sense that the outcomes of a writing exercise are relatively inconsequential. The basis for this perspective is that self-consciousness, inhibition, and self-censorship tend to get in the way of fluid writing. High-stakes writing (such as graded assignments) can make a lot of sense at several stages in the learning process, but overemphasis on evaluating someone’s writing skills will likely stress out the writer more than make her/him motivated to write.

This impression is to a large extent personal. I readily notice that when I get too self-conscious about my own writing (self-unconscious, even), my writing becomes much less fluid. In fact, because writing about writing tends to make one self-conscious, my writing this post is much less efficient than my usual writing sessions.

In my mind, there’s a cognitive basis to this form of low-stakes, casual writing. As with language acquisition, learning occurs whether or not we’re corrected. According to most research in language acquisition, children acquire their native languages through exposure, not through a formal learning process. My guess is that the same applies to writing.

In some ways, this is a defence of drafts. “Draft out your ideas without overthinking what might be wrong about your writing.” Useful advice, at least in my experience. The further point is to do something with those drafts, the basis for the RERO principle: “release your text in the wild, even if it may not correspond to your standards.” Every text is a work in progress. Especially in a context where you’re likely to get feedback (i.e., blogging). Trial and error, with a feedback mechanism. In my experience, feedback on writing tends to be given in a thoughtful and subtle fashion while feedback on ideas can be quite harsh.

The notion of writing styles is relevant, here. Some of Hugh’s arguments about the need for blogging in academia revolve around the notion that “academics are bad writers.” My position is that academics are expert writers but that academic writing is a very specific beast. Hugh’s writing standards might clash with typical writing habits among academics (which often include neologisms and convoluted metaphors). Are Hugh’s standards appropriate in terms of academic writing? Possibly, but why then do academic texts rate so low on writing standards after having been reviewed by peers and heavily edited? The relativist’s answer is, to me, much more convincing: academic texts are typically judged by standards which are context-specific. Judging academic writing with outside standards is like judging French writing with English standards (or judging prose by the standards of classic poetry).

Still, there’s something to be said about readability. Especially when these texts are to be used outside academia. Much academic writing is meant to remain within the walls of the Ivory Tower yet most academic disciplines benefit from some interaction with “the general public.” Though it may not be taught in universities and colleges, the skill of writing for a broader public is quite valuable. In fact, it may easily be transferable to teaching, especially if students come from other disciplines. Furthermore, writing outside one’s discipline is required in any type of interdisciplinary context, including project proposals for funding agencies.

No specific writing style is implied in blogging. A blogger can use whatever style she/he chooses for her/his posts. At the same time, blogging tends to encourage writing which is broadly readable and makes regular use of hyperlinks to connect to further information. In my opinion, this type of writing is a quite appropriate one in which academics can extend their skills.

“Public Review”

Much of the preceding connects with peer review, which was the basis of Mark Liberman’s post.

In academia’s recent history, “peer reviewed publications” have become the hallmark of scholarly writing. Yet, as Steve McIntyre claims, the current state of academic peer review may not be as efficient at ensuring scholarly quality as its proponents claim it to be. As opposed to financial auditing, for instance, peer review implies very limited assessment based on data. And I would add that the very notion of “peer” could be assessed more carefully in such a context.

Overall, peer review seems to be relatively inefficient as a “reality check.” This might sound like a bold claim and I should provide data to support it. But I mostly want to provoke some thought as to what the peer review process really implies. This is not about reinventing the wheel but it is about making sure we question assumptions about the process.

Blogging implies public scrutiny. This directly relates to transparency, discussed above. But there is also the notion of giving the public the chance to engage with the outcomes of academic research. Sure, the general public sounds like a dangerous place to propose some ideas (especially if they have to do with health or national security). But we may give some thought to Linus’s law and think about the value of “crowdsourcing” academic falsification.

Food for Thought

There’s a lot more I want to add but I should heed my call to RERO. Otherwise, this post will remain in my draft posts for an indefinite period of time, gathering dust and not allowing any timely discussion. Perhaps more than at any other point, I would be grateful for any thoughtful comment about academic blogging.

In fact, I will post this blog entry “as is,” without careful proofreading. Hopefully, it will be the start of a discussion.

I will “send you off” with a few links related to blogging in academic contexts, followed by Hugh’s list of arguments.

Links on Academic Blogging

(With an Anthropological emphasis)

Hugh’s List

  1. You need to improve your writing
  2. Some of your ideas are dumb
  3. The point of academia is to expand knowledge
  4. Blogging expands your readership
  5. Blogging protects and promotes your ideas
  6. Blogging is Reputation
  7. Linking is better than footnotes
  8. Journals and blogs can (and should) coexist
  9. What have journals done for you lately?

The H-Bomb in Open Access to Scholarship

As confirmed in the Chronicle of Higher Education, Harvard University’s Faculty of Arts and Sciences has just adopted a groundbreaking mandate “that requires faculty members to allow the university to make their scholarly articles available free online.”

Some coverage elsewhere:

Peter Suber’s blog is a comprehensive source for Open Access news. Some of his posts covering the Harvard mandate:

Why is this news so important? Well, it’s the first such university mandate in the United States, so it does set a precedent in and of itself. (The UC system might be the second one.) But, of course, Harvard’s prestige is an important factor. Hence the “H-bomb” title: just mentioning “Harvard” has a very strong effect, so much so that some Harvard graduates refrain from mentioning their alma mater. As Suber assesses, Harvard’s support for Open Access makes it unlikely that publishers would refuse articles from Harvard faculty. Personally, I would even go so far as to say that the FUD spewed by “academic” publishers might become much less effective than it has previously been.

In other words, this is a big victory for scholarship. Too bad publishers see it as a defeat. Maybe like “content” lobby groups RIAA and MPAA, publishers will finally be hit by the “cluestick” and will begin to understand that it is, in fact, in their best interest to embrace openness.

Yes, call me naïve.

Schools, Research, Relevance

The following was sent to the Moodle Lounge.

Business schools and research | Practically irrelevant? | Economist.com

My own reaction to this piece…
Well, well…
The title and the tone are, IMHO, rather inflammatory. For those who follow tech news, this could sound like a column by John C. Dvorak. The goal is probably to spark conversation about the goals of business schools. Only a cynic (rarely found in academia 😛 ) would say that they’re trying to increase readership. 😎

The article does raise important issues, although many of those have been tackled in the past. For instance, the tendency for educational institutions to look at the short-term gains of their “employees’ work” for their own programs instead of looking at the broader picture in terms of social and human gains. Simple rankings decreasing the diversity of programmes. Professors who care more about their careers than about their impact on the world. The search for “metrics” in scholarship (citation impact, patents-count, practical impact…). The quest for prestige. Reluctance to change. Etc.

This point could lead to something interesting:

AACSB justifies its stance by saying that it wants schools and faculty to play to their strengths, whether they be in pedagogy, in the research of practical applications, or in scholarly endeavour.

IMHO, it seems to lead to a view of educational institutions which does favour diversity. We need some schools which are really good at basic research. We need other schools (or other people at the same schools) to be really good at creating learning environments. And some people should be able to do the typical goal-oriented “R&D” for very practical purposes, with business partners in mind. It takes all kinds. And because some people forget the necessity of diverse environments, it’s an important point to reassess.
The problem is, though, that the knee-jerk reaction apparently runs counter to the “diversity” argument. Possibly because of the AACSB’s own recommendations, or maybe because of a difference of opinion, academics (and the anonymous Economist journalist) seem to understand the AACSB’s stance as meaning that all programs should be evaluated with the exact same criteria, which give less room to basic research. Similar things have been done in the past and, AFAICT, basic research eventually makes a comeback, one way or the other. A move toward “practical outcomes” is often a stopgap measure in a “bearish” context.

To jump on the soapbox for a second. I personally do think that there should be more variety in academic careers, including in business schools. Those who do undertake basic research are as important as the others. But it might be ill-advised to require every faculty member at every school to have an impressive research résumé every single year. Those people whose “calling” it is to actually teach should have some space and should probably not be judged using the same criteria as those who perceive teaching as an obstacle in their research careers. This is not to say that teachers should do no research. But it does mean that requiring proof of excellence in research of everyone involved is a very efficient way to get both shoddy research and dispassionate teaching. In terms of practical implications for the world outside the Ivory Tower, often subsumed under the category of “Service,” there are more elements which should “count” than direct gain from a given project with a powerful business partner. (After all, there is more volatility in this context than in most academic endeavours.) IMHO, some people are doing more for their institutions by going “in the world” and getting people interested in learning than by working for a private sponsor. Not that private sponsors are unimportant. But one strength of academic institutions is that they can be neutral enough to withstand changes in the “market.”

Phew! 😉

Couldn’t help but notice that the article opens the door to qualitative and inductive research. Given the current trend toward ethnography, this kind of attitude could make it easier to “sell” ethnography to businesses.
What made me laugh in a discussion of video-based ethnographic observation is that they keep contrasting “ethnography” (at least, the method they use at EverydayLives) with “research.” 😀

The advantage of this distinction, though, in the context of this Economist piece, is that marketeers and other business-minded people might then see ethnography as an alternative to what is perceived as “practically irrelevant” research. 💡