Tag Archives: citation count

"To Be Verified": Trivia and Critical Thinking

A friend posted a link to the following list of factoids on his Facebook profile: Useless facts, Weird Information, humor. It contains intriguing statements about biology, language, inventions, and so on.

Similar lists abound, often containing the same tidbits:

Several neat pieces of trivial information. Not exactly “useless.” But gratuitous and irrelevant. The type of thing you may wish to plug into a conversation. Especially at the proverbial “cocktail party.” This is, after all, an appropriate context for the attention economy. But these lists are also useful as preparation for game shows and barroom competitions. The stuff of erudition.

One of my first reflexes, when I see such lists of trivia online, is to look for ways to evaluate their accuracy. This is partly due to my training in folkloristics, as “netlore” is a prolific medium for verbal folklore (folk beliefs, rumors, urban legends, myths, and jokes). My reflex is also, I think, a common reaction among academics. After all, the detective work of critical thinking is pretty much our “bread and butter.” Sure, we can become bothersome with this. “Don’t be a bore, it’s just trivia.” But many of us react out of a fear that such “trivial” thinking might crowd out more careful consideration.

An obvious place to start verifying these tidbits is Snopes. In fact, they do debunk several of the statements made in those lists. For instance, the one about an alleged Donald Duck “ban” in Finland, found in the list my friend shared on Facebook. Unfortunately, however, many factoids are absent from Snopes, despite that site’s extensive database.

These specific trivia lists are quite interesting. They include some statements which are easy to verify. For instance, the product of two numbers. (However, many calculators are insufficiently precise for the specific example used in those factoid lists.) The ease with which one can verify some statements lends an air of legitimacy to the entire list containing them. The apparent truth-value of those statements is such that the complete list can be perceived as resting on unshakable foundations. For full effectiveness, the easily verified statements should not be common knowledge. “Did you know? Two plus two equals four.”
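To make this concrete, here’s a quick sketch of that kind of verification. The repunit multiplication below is my assumed example, since the lists vary; the point is that arbitrary-precision arithmetic, as in Python, sidesteps the calculator problem entirely.

```python
# Hedged illustration: the factoid checked here is one commonly found in
# such lists, assumed as an example. An eight-digit pocket calculator
# overflows long before this result, but Python integers have arbitrary
# precision, so the verification is trivial.
print(111_111_111 * 111_111_111 == 12_345_678_987_654_321)  # True
```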

Other statements appear to be based on conjecture. Their plausibility may be relatively difficult to assess for anyone unfamiliar with research in the relevant field. For instance, the statement comparing the typical life expectancy of currently living humans with individual longevity. At first sight, it does seem plausible that such extreme longevity will be reached by extremely few individuals. Yet my guess is that those who do research on aging could rebut the statement that “Only one person in two billion will live to be 116 or older.” Because assessing such statements requires special training, their legitimizing effect is a weaker version of that of easily verifiable statements.
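For what it’s worth, here is a back-of-envelope reading of the claim, taken at face value. The population figure is my assumption, as the factoid provides no baseline.

```python
# Taking "only one person in two billion" at face value, against an
# assumed world population of roughly seven billion (my figure, not the
# list's). This is a sanity check, not a demographic model.
world_population = 7_000_000_000
rate = 1 / 2_000_000_000
print(world_population * rate)  # 3.5: the claim implies only a handful
# of people alive today would ever reach 116, something specialists could
# compare against validated supercentenarian records.
```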

Some of the most difficult statements to assess are the ones which contain quantifiers, especially quantifiers of uniqueness. There may, in fact, be “only one” fish which can blink with both eyes. And it seems possible that the English language includes only one word ending in “-mt” (or, to avoid pedantic disclaimers, “only one common word”). To verify these claims, one would need access to an exhaustive catalog of fish species or of English words. While the dream of “the Web as encyclopedia” may hinge on such claims of exhaustivity, there is a type of “black swan effect” here, related to the common fallacy of treating absence of evidence as evidence of absence.
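As a small aside, the word claim shows how any check is only as good as the catalog behind it. A minimal sketch, assuming the conventional Unix word list (the path is my assumption; any dictionary file would do):

```python
# Checking the "-mt" claim against a single word list. Results depend
# entirely on which catalog one uses: "only one" here means "only one in
# THIS file," which is precisely the evidence-of-absence problem.
with open("/usr/share/dict/words") as f:
    matches = sorted({w for w in (line.strip() for line in f) if w.endswith("mt")})
print(matches)  # perhaps ['dreamt'], depending on the list
```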

I just noticed, while writing this post, a Google Answers page which not only evaluates the accuracy of several statements found in those trivia lists but also treats ease of verifiability as a matter of interest. Critical thinking is alive and well in many parts of the online world.

An obvious feature of those factoid lists, found online or in dead-tree print, is the lack of context. Even when those lists are concerned with a single topic (say, snails or sleep), they provide inadequate context for the information they contain. I’m using the term “context” rather loosely, as it covers both the text’s internal relationships (the “immediate context,” if you will) and its broader references to the world at large. Without going into the details of philosophy of language, I’ll simply say that these approaches clearly inform my perspective.

A typical academic, especially an English-speaking one, might put the context issue this way: “citation needed.” After all, the Wikipedia approach to truth is close to current academic practice (especially in English-speaking North America), with peer review replacing audits. Even journalists are trained to cite sources, though they rarely help others apply critical thinking to those sources. In some ways, sources are conceived as the most efficient way to assess accuracy.

My own approach isn’t that far from the citation-happy one. Like most other academics, I’ve learned the value of an appropriate citation. Where I “beg to differ” is on the perceived “weight” of a citation as support. Through an awkward quirk of academic writing, some citation practices amount to a fallacious appeal to authority. I’m probably overreacting about this, but I’ve heard enough academics equate citations with evidence that I tend to be wary of what I perceive to be excessive referencing. In fact, some of my most link-laden posts could be perceived as attempts to poke fun at citation-happy writing styles. One may even notice my extensive use of Wikipedia links. These are sometimes meant as inside jokes (to my own sorry self). Same thing with many of my blogging tags/categories, actually. Yes, blogging can be playful.

The broad concept is that, regardless of a source’s authority, critical thinking should be applied as much as possible. No more, no less.

Schools, Research, Relevance

The following was sent to the Moodle Lounge.

Business schools and research | Practically irrelevant? | Economist.com

My own reaction to this piece…
Well, well…
The title and the tone are, IMHO, rather inflammatory. For those who follow tech news, this could sound like a column by John C. Dvorak. The goal is probably to spark conversation about the purpose of business schools. Only a cynic (rarely found in academia 😛) would say that they’re trying to increase readership. 😎

The article does raise important issues, although many of them have been tackled in the past. For instance, the tendency of educational institutions to look at the short-term gains of their “employees’ work” for their own programs instead of looking at the broader picture in terms of social and human gains. Simplistic rankings decreasing the diversity of programmes. Professors who care more about their careers than about their impact on the world. The search for “metrics” in scholarship (citation impact, patent counts, practical impact…). The quest for prestige. Reluctance to change. Etc.

This point could lead to something interesting:

AACSB justifies its stance by saying that it wants schools and faculty to play to their strengths, whether they be in pedagogy, in the research of practical applications, or in scholarly endeavour.

IMHO, it seems to lead to a view of educational institutions which does favour diversity. We need some schools which are really good at basic research. We need other schools (or other people at the same schools) to be really good at creating learning environments. And some people should be able to do the typical goal-oriented “R&D” for very practical purposes, with business partners in mind. It takes all kinds. And because some people forget the necessity of diverse environments, it’s an important point to reassert.
The problem is, though, that the knee-jerk reaction apparently runs counter to the “diversity” argument. Possibly because of the AACSB’s own recommendations, or maybe because of a difference of opinion, academics (and the anonymous Economist journalist) seem to understand the AACSB’s stance as meaning that all programs should be evaluated with the exact same criteria, criteria which leave less room for basic research. Similar things have been done in the past and, AFAICT, basic research eventually makes a comeback, one way or the other. A move toward “practical outcomes” is often a stopgap measure in a “bearish” context.

To jump on the soapbox for a second: I personally do think that there should be more variety in academic careers, including in business schools. Those who undertake basic research are as important as the others. But it might be ill-advised to require every faculty member at every school to produce an impressive research résumé every single year. Those whose “calling” is to actually teach should have some space, and should probably not be judged by the same criteria as those who perceive teaching as an obstacle to their research careers. This is not to say that teachers should do no research. But it does mean that requiring proof of research excellence from everyone involved is a very efficient way to get both shoddy research and dispassionate teaching. In terms of practical implications for the world outside the Ivory Tower, often subsumed under the category of “Service,” there are more elements which should “count” than direct gain from a given project with a powerful business partner. (After all, there is more volatility in this context than in most academic endeavours.) IMHO, some people do more for their institutions by going “into the world” and getting people interested in learning than by working for a private sponsor. Not that private sponsors are unimportant. But one strength of academic institutions is that they can remain neutral enough to withstand changes in the “market.”

Phew! 😉

Couldn’t help but notice that the article opens the door to qualitative and inductive research. Given the current interest in and movement toward ethnography, this kind of attitude could make it easier to “sell” ethnography to businesses.
What made me laugh in a discussion of video-based ethnographic observation is that they keep contrasting “ethnography” (at least, the method they use at EverydayLives) with “research.” 😀

The advantage of this distinction, though, in the context of this Economist piece, is that marketeers and other business-minded people might then see ethnography as an alternative to what is perceived as “practically irrelevant” research. 💡