
Blogging Academe

LibriVox founder and Montreal geek Hugh McGuire recently posted a blog entry in which he gave a series of nine arguments for academics to blog:

Why Academics Should Blog

Hugh’s post reminded me of one of my favourite blogposts by an academic, a pointed defence of blogging by Mark Liberman, of Language Log fame.
Raising standards – by lowering them

While I do agree with Hugh’s points, I would like to reframe and rephrase them.

Clearly, I’m enthusiastic about blogging. Not that I think every academic should, needs to, ought to blog. But I do see clear benefits of blogging in academic contexts.

Academics do a number of different things, from search committees to academic advising. Here, I focus on three main dimensions of an academic’s life: research, teaching, and community outreach. Other items in a professor’s job description may benefit from blogging but these three main components tend to be rather prominent in terms of PTR (promotion, tenure, reappointment). What’s more, blogging can help integrate these dimensions of academic life in a single set of activities.

Impact

In relation to scholarship, the term “impact” often refers to the measurable effects of a scholar’s publication within a specific field. “Citation impact,” for instance, refers to the number of times a given journal article has been cited by other scholars. This kind of measurement is closely related to Google’s PageRank algorithm, which assesses the relevance of search results by treating links much as citation analysis treats references. The very concept of “citation impact” relates very directly to the “publish or perish” system which, I would argue, does more to increase stress levels among full-time academics than to enhance scholarship. As such, it may need some rethinking. What does “citation impact” really measure? Is the most frequently cited text on a given subject necessarily the most relevant? Isn’t there a clustering effect, with small groups of well-known scholars citing one another without paying attention to whatever else may happen in their field, especially in other languages?
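
To make the link-counting analogy concrete, here is a minimal sketch (in Python, over a made-up toy citation graph, so every name in it is hypothetical) of the power-iteration idea behind PageRank-style scoring: a paper’s score depends on the scores of the papers citing it, not merely on its raw citation count.

```python
# Minimal power-iteration sketch of a PageRank-style score over a toy
# citation graph. The graph and its paper names are invented for illustration.
citations = {
    "paper_a": ["paper_b", "paper_c"],  # paper_a cites b and c
    "paper_b": ["paper_c"],
    "paper_c": ["paper_a"],
}

def pagerank(graph, damping=0.85, iterations=50):
    papers = list(graph)
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iterations):
        # Every paper keeps a small baseline score...
        new_rank = {p: (1.0 - damping) / len(papers) for p in papers}
        # ...and each citing paper passes a share of its own score along.
        for citing, cited_list in graph.items():
            if cited_list:
                share = rank[citing] / len(cited_list)
                for cited in cited_list:
                    new_rank[cited] += damping * share
        rank = new_rank
    return rank

print(pagerank(citations))
```

Whether such recursive scores measure “relevance” any better than raw citation counts is, of course, part of the question raised above.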

An advantage of blogging is that this type of impact is easy to monitor. Most blogging platforms have specific features for “statistics,” which let bloggers see which of their posts have been visited (“hit”) most frequently. More sophisticated analysis is available on some blogging platforms, especially on paid ones. These are meant to help bloggers monetize their blogs through advertising. But the same features can be quite useful to an academic who wants to see which blog entries seem to attract the most traffic.

Closer to “citation impact” is the fact that links to a given post are visible within that post through the ping and trackback systems. If another blogger links to this very blogpost, a link back to that second blogger’s post will appear under mine. In other words, a blogpost can embed future references.
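
For the technically curious, the pingback part of this mechanism boils down to a single XML-RPC call from the linking blog to the linked blog. Here is a hedged sketch in Python; the endpoint and both URLs are hypothetical placeholders, and trackback (a plain HTTP POST) works slightly differently.

```python
# Hedged sketch of a Pingback 1.0 notification, one of the linking
# mechanisms alluded to above. All URLs below are hypothetical.
import xmlrpc.client

source = "https://example.org/my-post-linking-to-you"    # the post that links
target = "https://example.net/2009/06/blogging-academe"  # the post being linked to

# The target blog advertises its pingback endpoint via an HTTP header or a
# <link rel="pingback"> tag; here we pretend we already discovered it.
endpoint = "https://example.net/xmlrpc.php"

server = xmlrpc.client.ServerProxy(endpoint)
try:
    reply = server.pingback.ping(source, target)
    print("Pingback accepted:", reply)
except xmlrpc.client.Fault as fault:
    print("Pingback rejected:", fault.faultString)
```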

In terms of teaching, thinking about impact through blogging can also have interesting effects. If students are blogging, they can cite and link to diverse items and these connections can serve as a representation of the constructive character of learning. But even if students don’t blog, a teacher blogging course-related material can increase the visibility of that course. In some cases, this visibility may lead to inter-institutional collaboration or increased enrollment.

Transparency

While secrecy may be essential in some academic projects, most academics tend to adopt a favourable attitude toward transparency. Academia is about sharing information and spreading knowledge, not about protecting information or about limiting knowledge to a select few.

Bloggers typically value transparency.

There are several ethical issues which relate to transparency. Some ethical principles prevent transparency (for instance, most research projects involving “human subjects” require anonymity). But academic ethics typically go with increased transparency on the part of the researcher. For instance, informed consent by a “human subject” requires complete disclosure of how the data will be used and protected. There are usually requirements for the primary investigator to be reachable during the research project.

Transparency is also valuable in teaching. While some things should probably remain secret (say, answers to exam questions), easy access to a number of documents makes a lot of sense in learning contexts.

Public Intellectuals

It seems that the term “intellectual” gained currency as a label for individuals engaged in public debates. While public engagement has taken on a different significance over the years, the responsibility of intellectuals to communicate publicly is still a matter of interest.

Through blogging, anyone can engage in public debate, discourse, or dialogue.

Reciprocity

Scholars working with “human subjects” often think about reciprocity. While remuneration may be the primary mode of compensation for participation in a research project, a broader concept of reciprocity is often at stake. Those who participated in the project usually have a “right to know” about the results of that study. Even when that isn’t the case and the results of the study remain secret, the asymmetry of human subjects revealing something about themselves to scholars who reveal nothing seems to clash with fundamental principles in contemporary academia.

Reciprocity in teaching can lead directly to some important constructivist principles. The roles of learners and teachers, while not completely interchangeable, are reciprocal. A teacher may learn and a learner may teach.

Playing with Concepts

Blogging makes it easy to try concepts out. More than “thinking out loud,” the type of blogging activity I’m thinking about can serve as a way to “put ideas on paper” (without actual paper) and eventually get feedback on those ideas.

In my experience, microblogging (Identi.ca, Twitter…) has been more efficient than extended blogging in terms of getting conceptual feedback. In fact, social networks (Facebook, more specifically) have been even more conducive to hashing out concepts.

Many academics do hash concepts out with students, especially with graduate students. The advantage is that students are likely to understand concepts quickly as they already share some of the same references as the academic who is playing with those concepts. There’s already a context for mutual understanding. The disadvantage is that a classroom is a fairly narrow context in which to really try out the implications of a concept.

A method I like is to coin fairly catchy phrases and leave the concepts fairly raw, at first. I then try the same concept in diverse contexts, on my blogs or off.

The main example I have in mind is the “social butterfly effect.” It may sound silly at first but I find it can be a basis for discussion, especially if it spreads a bit.

A subpoint, here, is that this method allows for “gauging interest” in new concepts and it can often lead one in completely new directions. By blogging about concepts, an academic can tell whether a concept has a chance to stick in a broad frame (outside the Ivory Tower) and may gain insights from outside disciplines.

Playing with Writing

This one probably applies more to “junior academics” (including students) but it can also work with established academics who enjoy diversifying their writing styles. Simply put: blogwriting is writing practice.

A common idea, in cognitive research on expertise, is that it takes about ten thousand hours to become an expert. For better or worse, academics are experts at writing. And we gain that expertise through practice. In this context, it’s easy to see blogging as a “writing exercise.” At least, that would be a perspective to which I can relate.

My impression is that writing skills are most efficiently acquired through practice. The type of practice I have in mind is “low-stakes,” in the sense that the outcomes of a writing exercise are relatively inconsequential. The basis for this perspective is that self-consciousness, inhibition, and self-censorship tend to get in the way of fluid writing. High-stakes writing (such as graded assignments) can make a lot of sense at several stages in the learning process, but overemphasis on evaluating someone’s writing skills will likely stress out the writer more than motivate her/him to write.

This impression is to a large extent personal. I readily notice that when I get too self-conscious about my own writing (self-unconscious, even), my writing becomes much less fluid. In fact, because writing about writing tends to make one self-conscious, my writing this post is much less efficient than my usual writing sessions.

In my mind, there’s a cognitive basis to this form of low-stakes, casual writing. As with language acquisition, learning occurs whether or not we’re corrected. According to most research in language acquisition, children acquire their native languages through exposure, not through a formal learning process. My guess is that the same applies to writing.

In some ways, this is a defence of drafts. “Draft out your ideas without overthinking what might be wrong about your writing.” Useful advice, at least in my experience. The further point is to do something with those drafts, the basis for the RERO (release early, release often) principle: “release your text in the wild, even if it may not correspond to your standards.” Every text is a work in progress. Especially in a context where you’re likely to get feedback (i.e., blogging). Trial and error, with a feedback mechanism. In my experience, feedback on writing tends to be given in a thoughtful and subtle fashion while feedback on ideas can be quite harsh.

The notion of writing styles is relevant, here. Some of Hugh’s arguments about the need for blogging in academia revolve around the notion that “academics are bad writers.” My position is that academics are expert writers but that academic writing is a very specific beast. Hugh’s writing standards might clash with typical writing habits among academics (which often include neologisms and convoluted metaphors). Are Hugh’s standards appropriate in terms of academic writing? Possibly, but why then are academic texts rated so low on writing standards after having been reviewed by peers and heavily edited? The relativist’s answer is, to me, much more convincing: academic texts are typically judged through standards which are context-specific. Judging academic writing with outside standards is like judging French writing with English standards (or judging prose through the standards of classic poetry).

Still, there’s something to be said about readability. Especially when these texts are to be used outside academia. Much academic writing is meant to remain within the walls of the Ivory Tower yet most academic disciplines benefit from some interaction with “the general public.” Though it may not be taught in universities and colleges, the skill of writing for a broader public is quite valuable. In fact, it may easily be transferable to teaching, especially if students come from other disciplines. Furthermore, writing outside one’s discipline is required in any type of interdisciplinary context, including project proposals for funding agencies.

No specific writing style is implied in blogging. A blogger can use whatever style she/he chooses for her/his posts. At the same time, blogging tends to encourage writing which is broadly readable and makes regular use of hyperlinks to connect to further information. In my opinion, this type of writing is quite an appropriate one in which academics can extend their skills.

“Public Review”

Much of the preceding connects with peer review, which was the basis of Mark Liberman’s post.

In academia’s recent history, “peer reviewed publications” have become the hallmark of scholarly writing. Yet, as Steve McIntyre claims, the current state of academic peer review may not be as efficient at ensuring scholarly quality as its proponents claim it to be. Unlike financial auditing, for instance, peer review involves very limited assessment based on data. And I would add that the very notion of “peer” could be assessed more carefully in such a context.

Overall, peer review seems to be relatively inefficient as a “reality check.” This might sound like a bold claim and I should provide data to support it. But I mostly want to provoke some thought as to what the peer review process really implies. This is not about reinventing the wheel but it is about making sure we question assumptions about the process.

Blogging implies public scrutiny. This directly relates to transparency, discussed above. But there is also the notion of giving the public the chance to engage with the outcomes of academic research. Sure, the general public sounds like a dangerous place to propose some ideas (especially if they have to do with health or national security). But we may give some thought to Linus’s law and think about the value of “crowdsourcing” academic falsification.

Food for Thought

There’s a lot more I want to add but I should heed my call to RERO. Otherwise, this post will remain in my draft posts for an indefinite period of time, gathering dust and not allowing any timely discussion. Perhaps more than at any other point, I would be grateful for any thoughtful comment about academic blogging.

In fact, I will post this blog entry “as is,” without careful proofreading. Hopefully, it will be the start of a discussion.

I will “send you off” with a few links related to blogging in academic contexts, followed by Hugh’s list of arguments.

Links on Academic Blogging

(With an Anthropological emphasis)

Hugh’s List

  1. You need to improve your writing
  2. Some of your ideas are dumb
  3. The point of academia is to expand knowledge
  4. Blogging expands your readership
  5. Blogging protects and promotes your ideas
  6. Blogging is Reputation
  7. Linking is better than footnotes
  8. Journals and blogs can (and should) coexist
  9. What have journals done for you lately?

Actively Reading Open Access

Open Access

I’ve been enthusiastic about OA (open access to academic texts) for a number of years. I don’t tend to be extremely active in the OA milieu but I do use every opportunity I can to talk about OA, both in formal academic contexts and in more casual and informal conversation.

My own view about Open Access is that it should be plain common sense, for both scholars and “the public.” Not that OA is an ultimate principle, but it seems so obvious to me that OA can be beneficial in a large range of contexts. In fact, I tend to conceive of academia in terms of Open Access. In my mind, a concept related to OA runs at the very core of the academic enterprise and helps distinguish it from other types of endeavours. Simply put, academia is the type of “knowledge work” which is oriented toward openness in access and use.

Historically, this connection between academic work and openness has allegedly been the source of the so-called “Open Source movement” with all its consequences in computing, the Internet, and geek culture.

Quite frequently, OA advocates focus (at least in public) on specific issues related to Open Access. One OA advocate put it in a way that made me think this narrow focus might be a precaution, used by OA advocates and activists to avoid scaring off potential OA enthusiasts. Since I didn’t involve myself as a “fighter” in OA-related discussions, I rarely found a need for such precautions.

I now see signs that the Open Access movement is finally strong enough that some of these precautions might not even be needed. Not that OA advocates “throw caution to the wind.” But I really sense that it’s now possible to openly discuss broader issues related to Open Access because “critical mass has been achieved.”

Suber’s Newsletter

Case in point for this sense of a “wind of change”: the latest issue of Peter Suber’s SPARC Open Access Newsletter.

Suber’s newsletter is frequently a useful source of information about Open Access and I often get inspired by it. But because my involvement in the OA movement is rather limited, I tend to skim those newsletter issues, more than I really read them. I kind of feel bad about this but “we all need to choose our battles,” in terms of information management.

But today’s issue “caught my eye.” Actually, it stimulated a lot of thoughts in me. It provided me with (tasty) intellectual nourishment. Simply put: it made me happy.

It’s all because Suber elaborated an argument about Open Access that I find particularly compelling: the epistemological dimension of Open Access. Because of my perspective, I respond much more favourably to this epistemological argument than I do to most practical and ethical arguments. Maybe that’s just me. But it still works.

So I read Suber’s newsletter with much more attention than usual. I savoured it. And I used this new method of actively reading online texts which is based on the Diigo.com social bookmarking service.

Active Reading

What follows is a slightly edited version of my Diigo annotations on Suber’s text.

Peter Suber, SPARC Open Access Newsletter, 6/2/08

Annotated

June 2008 issue of Peter Suber’s newsletter on open access to academic texts (“Open Access,” or “OA”).

tags: toblog, Suber, Open Access, academia, publishing, wisdom of crowds, crowdsourcing, critical thinking

General comments

  • Suber’s newsletters are always on the lengthy side of things but this one seems especially long. I see this as a good sign.
  • For several reasons, I find this issue of Suber’s newsletter particularly stimulating. Part of my personal anthology of literature about Open Access.

Quote-based annotations and highlights.

Items in italics are Suber’s, those in roman are my annotations.

  • Open access and the self-correction of knowledge

    • This might be one of my favourite arguments for OA. Yes, it’s close to ESR’s description of the “eyeball” principle. But it works especially well for academia.
  • Nor is it very subtle or complicated
    • Agreed. So, why is it so rarely discussed or grokked?
  • John Stuart Mill in 1859
    • Nice way to tie the argument to something which may provoke thought among scholars in the Humanities and Social Sciences.
  • OA facilitates the testing and validation of knowledge claims
    • Neat, clean, simple, straightforward… convincing. Framing it as hypothesis works well, in context.
  • science is self-correcting
    • Almost like “talking to scientists’ emotions.” In an efficient way.
  • reliability of inquiry
    • Almost lingo-like but resonates well with academic terminology.
  • Science is special because it’s self-correcting.
    • Don’t we all wish this were more widely understood?
  • scientists eventually correct the errors of other scientists
    • There’s an important social concept, here. Related to humility as a function of human interaction.
  • persuade their colleagues
  • new professional consensus
  • benefit from the perspectives of others
    • Tying humility, intellectual honesty, critical thinking, ego-lessness, and even relativist ways of knowing.
  • freedom of expression is essential to truth-seeking
  • opening discussion as widely as possible
    • Perhaps my favourite argument ever for not only OA but for changes in academia generally.
  • when the human mind is capable of receiving it
    • Possible tie-in with the social level of cognition. Or the usual “shoulders of giants.”
  • public scrutiny
    • Emphasis on “public”!
  • protect the freedom of expression
    • The problem I have with the way this concept is applied is that people rely on pre-established institutions for this protection and seem to assume that, if the institution is maintained, so is the protection. Dangerous!
  • If the only people free to speak their minds are people like the author, or people with a shared belief in current orthodoxy, then we’d rarely hear from people in a position to recognize deficiencies in need of correction.
    • This, I associate with “groupthink” in the “highest spheres” (sphere height being given through social negotiation of prestige).
  • But we do have to make our claims available to everyone who might care to read and comment on them.
    • Can’t help but think that *some* of those who oppose or forget this mainly fear the social risks associated with our positions being questioned or invalidated.
  • For the purposes of scientific progress, a society in which access to research is limited, because it’s written in Latin, because authors are secretive, or because access requires travel or wealth, is like a society in which freedom of expression is limited.
  • scientists who are free to speak their minds but lack access to the literature have no advantage over scientists without the freedom to speak their minds
  • many-eyeballs theory
  • many voices from many perspectives
  • exactly what scientists must do to inch asymptotically toward certainty
  • devil’s advocate
  • enlisting as much help
  • validate knowledge claims in public
  • OA works best of all
    • My guess is that those who want to argue against this hypothesis are reacting in a knee-jerk fashion, perhaps based on personal motives. Nothing inherently wrong there, but it remains as a potential bias.
  • longevity in a free society
    • Interesting way to put it.
  • delay
  • the friction in a non-OA system
    • The academic equivalent of cute.
  • For scientific self-correction, OA is lubricant, not a precondition.
    • Catalyst?
  • much of the scientific progress in the 16th and 17th centuries was due to the spread of print itself and the wider access it allowed for new results
    • Neat way to frame it.
  • Limits on access (like limits on liberty) are not deal-breakers, just friction in the system
    • “See? We’re not opposed to you. We just think there’s a more efficient way to do things.”
  • OA can affect knowledge itself, or the process by which knowledge claims become knowledge
  • pragmatic arguments
    • Pretty convincing ones.
  • The Millian argument for OA is not the “wisdom of crowds”
    • Not exclusively, but it does integrate the diversity of viewpoints made obvious through crowdsourcing.
  • without attempting to synthesize them
    • If “wisdom of crowds” really is about synthesis, then it’s nothing more than groupthink.
  • peer review and the kind of empirical content that underlies what Karl Popper called falsifiability
    • I personally hope that a conversation about these will occur soon. What OA makes possible, in a way, is to avoid the dangers which come from the social dimension of “peerness.” This was addressed earlier, and I see a clear connection with “avoiding groupthink.” But the assumption that peer-review, in its current form, has reached some ultimate and eternal value as a validation system can be questioned in the context of OA.
  • watchdogs
  • Such online watchdogs were among those who first identified problems with images and other data in a cloning paper published in Science by Woo Suk Hwang, a South Korean researcher. The research was eventually found to be fraudulent, and the journal retracted the paper….
    • Not only is it fun as a “success story” (CHE’s journalistic bent), but it may help some people understand that there is satisfaction to be found in fact-checking. In fact, verification can be self-rewarding, in an appropriate context. Seems obvious enough to many academics but it sounds counterintuitive to those who think of academia as waged labour.

Round-up

Really impressive round-up of recent news related to Open Access. What I tend to call a “linkfest.”

What follows is my personal selection, based on diverse interests.

"To Be Verified": Trivia and Critical Thinking

A friend posted a link to the following list of factoids on his Facebook profile: Useless facts, Weird Information, humor. It contains all sorts of intriguing statements about biology, language, inventions, etc.

Similar lists abound, often containing the same tidbits.

Several neat pieces of trivial information. Not exactly “useless.” But gratuitous and irrelevant. The type of thing you may wish to plug in a conversation. Especially at the proverbial “cocktail party.” This is, after all, an appropriate context for the attention economy. But these lists are also useful as preparation for game shows and barroom competitions. The stuff of erudition.

One of my first reflexes, when I see such lists of trivia online, is to look for ways to evaluate their accuracy. This is partly due to my training in folkloristics, as “netlore” is a prolific medium for verbal folklore (folk beliefs, rumors, urban legends, myths, and jokes). My reflex is also, I think, a common reaction among academics. After all, the detective work of critical thinking is pretty much our “bread and butter.” Sure, we can become bothersome with this. “Don’t be a bore, it’s just trivia.” But many of us may react out of a fear that such “trivial” thinking prevents more careful consideration.

An obvious place to start verifying these tidbits is Snopes. In fact, they do debunk several of the statements made in those lists. For instance, the one about an alleged Donald Duck “ban” in Finland found in the list my friend shared through Facebook. Unfortunately, however, many factoids are absent from Snopes, despite that site’s extensive database.

These specific trivia lists are quite interesting. They include some statements which are easy to verify. For instance, the product of two numbers. (However, many calculators are insufficiently precise for the specific example used in those factoid lists.) The ease with which one can verify the accuracy of some statements brings an air of legitimacy to the list in which those easily verified statements are included. The apparent truth-value of those statements is such that a complete list can be perceived as being on unshakable foundations. For full effectiveness, the easily verified statements should not be common knowledge. “Did you know? Two plus two equals four.”
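
As an aside, the precision issue is easy to demonstrate. The numbers below are my own illustration of the kind of large multiplication those lists favour, not necessarily the exact ones in the list: Python’s arbitrary-precision integers verify the product exactly, while ordinary floating-point arithmetic, which is roughly what a pocket calculator does, loses the final digit.

```python
# Exact integer arithmetic versus floating point, for the kind of large
# multiplication that shows up in trivia lists. The numbers are only an
# illustrative example, not necessarily the ones from the list itself.
a = 111_111_111
exact = a * a                  # Python integers have arbitrary precision
approx = float(a) * float(a)   # roughly what a typical calculator does

print(exact)                   # 12345678987654321
print(f"{approx:.0f}")         # typically 12345678987654320 -- last digit lost
```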

Other statements appear to be based on hypothesis. The plausibility of such statements may be relatively difficult to assess for anyone not familiar with research in that specific field. For instance, the statement about typical life expectancy of currently living humans compared to individual longevity. At first sight, it does seem plausible that today’s extreme longevity would only benefit extremely few individuals in the future. Yet my guess is that those who do research on aging may rebut the statement that “Only one person in two billion will live to be 116 or older.” Because assessing such statements requires special training, their effect is a weaker version of the legitimizing effect of easily verifiable statements.

Some of the most difficult statements to assess are the ones which contain quantifiers, especially those for uniqueness. There may, in fact, be “only one” fish which can blink with both eyes. And it seems possible that the English language may include only one word ending in “-mt” (or, to avoid pedantic disclaimers, “only one common word”). To verify these claims, one would need to have access to an exhaustive catalog of fish species or English words. While the dream of “the Web as encyclopedia” may hinge on such claims of exhaustivity, there is a type of “black swan effect” related to the common fallacy about lack of evidence being considered sufficient evidence of lack.
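
On that note, checking the “-mt” claim against one candidate catalog is trivial, as long as one keeps in mind that the catalog itself is incomplete. The sketch below assumes the conventional Unix word list, which may not exist or may differ on every system.

```python
# Scan a system word list for words ending in "-mt". The path is the
# conventional Unix location; it may be absent or different on some systems,
# and no single word list is truly exhaustive.
with open("/usr/share/dict/words") as wordlist:
    matches = sorted({line.strip().lower() for line in wordlist
                      if line.strip().lower().endswith("mt")})

print(matches)  # e.g. ['dreamt'] on many systems, plus compounds like 'undreamt'
```

Finding only “dreamt” in one word list still would not prove uniqueness, which is precisely the black-swan point.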

I just noticed, while writing this post, a Google Answers page which not only evaluates the accuracy of several statements found in those trivia lists but also mentions ease of verifiability as a matter of interest. Critical thinking is active in many parts of the online world.

An obvious feature of those factoid lists, found online or in dead-tree print, is the lack of context. Even when those lists are concerned with a single topic (say, snails or sleep), they provide inadequate context for the information they contain. I’m using the term “context” rather loosely as it covers both the text’s internal relationships (the “immediate context,” if you will) and the broader references to the world at large. Without going into details about the philosophy of language, such approaches to context clearly inform my perspective.

A typical academic, especially an English-speaking one, might put the context issue this way: “citation needed.” After all, the Wikipedia approach to truth is close to current academic practice (especially in English-speaking North America) with peer-review replacing audits. Even journalists are trained to cite sources, though they rarely help others apply critical thinking to those sources. In some ways, sources are conceived as the most efficient way to assess accuracy.

My own approach isn’t that far from the citation-happy one. Like most other academics, I’ve learned the value of an appropriate citation. Where I “beg to differ” is on the perceived “weight” of a citation as support. Through an awkward quirk of academic writing, some citation practices amount to fallacious appeal to authority. I’m probably overreacting about this but I’ve heard enough academics make statements equating citations with evidence that I tend to be wary of what I perceive to be excessive referencing. In fact, some of my most link-laden posts could be perceived as attempts to poke fun at citation-happy writing styles. One may even notice my extensive use of Wikipedia links. These are sometimes meant as inside jokes (to my own sorry self). Same thing with many of my blogging tags/categories, actually. Yes, blogging can be playful.

The broad concept is that, regardless of a source’s authority, critical thinking should be applied as much as possible. No more, no less.

Rethinking Peer-Review and Journalism

Can’t find more information just now, but on the latest episode of ScienceTalk (SciAm‘s weekly podcast):

Scientific American editor-in-chief John Rennie discusse[d] peer review of scientific literature, the subject of a panel he recently served on at the World Conference of Science Journalists

I hope there will be more openly available information about this panel and other discussions of the peer-review process.

Though I do consider the peer-review process extremely important for academia in general, I personally think that it could serve us better if it were rethought. Learning that such a panel was held and hearing about some of the comments made there is providing me with some satisfaction. In fact, I’m quite glad that the discussion is, apparently, thoughtful and respectful instead of causing the kind of knee-jerk reaction which makes many a discussion inefficient (including in academic contexts).