Tag Archives: software development

Development and Quality: Reply to Agile Diary

[youtube=http://www.youtube.com/watch?v=iry_CKAlI3g]

Former WiZiQ product manager Vikrama Dhiman responded to one of my tweets with a full-blown blogpost, thereby giving support to Matt Mullenweg‘s point that microblogging goes hand-in-hand with “macroblogging.”

My tweet:

enjoys draft æsthetics yet wishes more developers would release stable products. / adopts some products too quickly.

Vikrama’s post:

Good Enough Software Does Not Mean Bad Software « Agile Diary, Agile Introduction, Agile Implementation.

My reply:

“To an engineer, good enough means perfect. With an artist, there’s no such thing as perfect.” (Alexander Calder)

Thanks a lot for your kind comments. I’m very happy that my tweet (and status update) triggered this.

A bit of context for my tweet (actually, a post from Ping.fm, meant as a status update, thereby giving support in favour of conscious duplication, no offence meant to the partisans of action against duplication).

I’ve been thinking about what I call the “draft æsthetics.” In fact, I did a podcast episode about it. My description of that episode was:

Sometimes, there is such a thing as “Good Enough.”

Though I didn’t emphasize the “sometimes” part in that podcast episode, it was an important part of what I wanted to say. In fact, my intention wasn’t to defend draft æsthetics but to note that there seems to be a tendency toward this æsthetic mode. I do situate myself within that mode in many things I do, but it really doesn’t mean that this mode should be the exclusive one in every context.

The aforequoted tweet was thus a response to my podcast episode on draft æsthetics: “Yes, ‘good enough’ may work, sometimes. But it need not be applied in all cases.”

As I often get into convoluted discussions with people who seem to think that I condone or defend a position because I take it for myself, the main thing I’d say there is that I’m not only a relativist but I cherish nuance. In other words, my tweet was a way to qualify the core statement I was talking about in my podcast episode (that “good enough” exists, at times). And that statement isn’t necessarily my own. I notice a pattern by which this statement seems to be held as accurate by people. I share that opinion, but it’s not a strongly held belief of mine.

Of course, I digress…

So, the tweet which motivated Vikrama had to do with my approach to “good enough.” In this case, I tend to think about writing, though with Eric S. Raymond’s approach to “Release Early, Release Often” (RERO) in view. So there is a connection to software development and geek culture. But I think of “good enough” in a broader sense.

Disclaimer: I am not a coder.

The Calder quote remained in my head, after it was mentioned by a colleague who had read it in a local newspaper. One reason it struck me is that I spend some time thinking about artists and engineers, especially in social terms. I spend some time hanging out with engineers but I tend to be more on the “artist” side of what I perceive to be an axis of attitudes found in some social contexts. I do get a fair deal of flak for some of my comments on this characterization and it should be clear that it isn’t meant to imply any evaluation of individuals. But, as a model, the artist and engineer distinction seems to work, for me. In a way, it seems more useful than the distinction between science and art.

An engineer friend with whom I discussed this kind of distinction was quick to point out that, to him, there’s no such thing as “good enough.” He was also quick to point out that engineers can be creative and so on. But the point isn’t to exclude engineers from artistic endeavours. It’s to describe differences in modes of thought, ways of knowing, approaches to reality. And the way these are perceived socially. We could do a simple exercise with terms like “troubleshooting” and “emotional” to be assigned to the two broad categories of “engineer” and “artist.” Chances are that clear patterns would emerge. Of course, many concepts are as important to both sides (“intelligence,” “innovation”…) and they may also be telling. But dichotomies have heuristic value.

Now, to go back to software development, the focus in Vikrama’s Agile Diary post…

What pushed me to post my status update and tweet is in fact related to software development. Contrary to what Vikrama presumes, it wasn’t about a Web application. And it wasn’t even about a single thing. But it did have to do with firmware development and with software documentation.

The first case is that of my Fonera 2.0n router. Bought it in early November and I wasn’t able to connect to its private signal using my iPod touch. I could connect to the router using the public signal, but that required frequent authentication, as annoying as with ISF. Since my iPod touch is my main WiFi device, this issue made my Fonera 2.0n experience rather frustrating.

Of course, I’ve been contacting Fon‘s tech support. As is often the case, that experience was itself quite frustrating. I was told to reset my touch’s network settings which forced me to reauthenticate my touch on a number of networks I access regularly and only solved the problem temporarily. The same tech support person (or, at least, somebody using the same name) had me repeat the same description several times in the same email message. Perhaps unsurprisingly, I was also told to use third-party software which had nothing to do with my issue. All in all, your typical tech support experience.

But my tweet wasn’t really about tech support. It was about the product. Though I find the overall concept behind the Fonera 2.0n router very interesting, its implementation seems to me to be lacking. In fact, it reminds me of several FLOSS development projects that I’ve been observing and, to an extent, benefitting from.

This is rapidly transforming into a rant I’ve had in my “to blog” list for a while about “thinking outside the geek box.” I’ll try to resist the temptation, for now. But I can mention a blog thread which has been on my mind, in terms of this issue.

Firefox 3 is Still a Memory Hog — The NeoSmart Files.

The blogpost refers to a situation in which, according to at least some users (including the blogpost’s author), Firefox uses up more memory than it should and becomes difficult to use. The thread has several comments supporting claims about the relatively poor performance of Firefox on people’s systems, but it also has “contributions” from an obvious troll, who keeps pinning the problem on the users’ side.

The thing about this is that it’s representative of a tricky issue in the geek world, whereby developers and users are perceived as belonging to two sides of a type of “class struggle.” Within the geek niche, users are often dismissed as “lusers.” Tech support humour includes condescending jokes about “code 6”: “the problem is 6″ from the screen.” The aforementioned Eric S. Raymond wrote a rather popular guide to asking questions in geek circles which seems surprisingly unaware of social and cultural issues, especially from someone with an anthropological background. Following that guide, one should switch their mind to that of a very effective problem-solver (i.e., the engineer frame) to ask questions “the smart way.” Not only is the onus on users, but any failure to comply with these rules may be met with this air of intellectual superiority encoded in that guide. IOW, “Troubleshoot now, ask questions later.”

Of course, many users are “guilty” of all sorts of “crimes” having to do with not reading the documentation which comes with the product or with simply not thinking about the issue with sufficient depth before contacting tech support. And as the majority of the population is on the “user” side, the situation can be described as both a form of marginalization (geek culture comes from “nerd” labels) and a matter of elitism (geek culture as self-absorbed).

This does have something to do with my Fonera 2.0n. With it, I was caught in this dynamic whereby I had to switch to the “engineer frame” in order to solve my problem. I eventually did solve my Fonera authentication problem, using a workaround mentioned in a forum post about another issue (free registration required). Turns out, the “release candidate” version of my Fonera’s firmware does solve the issue. Of course, this new firmware may cause other forms of instability and installing it required a bit of digging. But it eventually worked.

The point is that, as released, the Fonera 2.0n router is a geek toy. It’s unpolished in many ways. It’s full of promise in terms of what it may make possible, but it failed to deliver in terms of what a router should do (route a signal). In this case, I don’t consider it to be a finished product. It’s not necessarily “unstable” in the strict sense that a software engineer might use the term. In fact, I hesitated between different terms to use instead of “stable,” in that tweet, and I’m not that happy with my final choice. The Fonera 2.0n isn’t unstable. But it’s akin to an alpha version released as a finished product. That’s something we see a lot of, these days.

The other main case which prompted me to send that tweet is “CivRev for iPhone,” a game that I’ve been playing on my iPod touch.

I’ve played with different games in the Civ franchise and I even used the FLOSS version on occasion. Not only is “Civilization” a geek classic, but it does connect with some anthropological issues (usually in a problematic view: Civ’s worldview lacks anthro’s insight). And it’s the kind of game that I can easily play while listening to podcasts (I subscribe to a number of those).

What’s wrong with that game? Actually, not much. I can’t even say that it’s unstable, unlike some other items in the App Store. But there are a few things which aren’t optimal in terms of documentation. Not that it’s difficult to figure out how the game works. But the game is complex enough that some documentation is quite useful. Especially since it does change from one version of the game to the next. Unfortunately, the online manual isn’t particularly helpful. Oh, sure, it probably contains all the information required. But it’s not available offline, isn’t optimized for the device it’s supposed to be used with, doesn’t contain proper links between sections, isn’t directly searchable, and isn’t particularly well-written. Not to mention that it seems to only be available in English even though the game itself is available in multiple languages (I play it in French).

Nothing tragic, of course. But coupled with my Fonera experience, it contributed to both a slight sense of frustration and this whole reflection about unfinished products.

Sure, it’s not much. But it’s “good enough” to get me started.

Funded Development for Touch Devices

It’s quite possible that these two projects are more radically innovative than they sound at first blush but they do relate to well-known concepts. I personally have high hopes for location-based services but I wish these services were taken in new directions.

Wheel Reinvention and Geek Culture

In mainstream North American society, “reinventing the wheel” (investing efforts on something which has already been done) is often seen as a net negative.  “Don’t waste your time.” “It’s all been done.” “No good can come out of it.”

In geek culture, the mainstream stigma on wheel reinvention has an influence. But many people do spend time revisiting problems which have already been solved. In this sense, geek culture is close to scientific culture. Not everything you do is completely new. You need to attempt things several times to make sure there isn’t something you missed. Like scientists, geeks (especially engineering-type ones) need to redo what others have done before them so they can “evolve.” Geeks are typically more impatient in their quest for “progress” than most scientists working in basic research, but the connection is there.

Reasons for wheel reinvention abound. The need to practice before you can perform. The burden of supporting a deprecated approach. The restrictions placed on so-called “intellectual property.” The lack of inspiration by some people. The (in)famous NIH (“Not Invented Here”) principle. The fact that, as Larry Wall says, “there is always another way.”

Was thinking about this because of a web forum in which I participate. Although numerous web forum platforms exist as part of “Content Management Systems,” several of them free of charge, the developer behind that site created his own content management system, including forum support.

Overall, it looks like any other web forum. Pretty much the same features. The format tags are somewhat non-standard and the “look-and-feel” is specific, but users probably see it as exactly the same as any other forum they visit. In fact, I doubt that most users think about the forum implementation on a regular basis.

This particular forum was created at a time when free-of-charge Content Management Systems were relatively rare.  The site itself was apparently not meant to become very big. The web developer probably put together the forum platform (platforum?) as an afterthought since he mostly wanted to bring people to his main site.

Thing is, though, the forums on that particular site seem to be the most active part of the site. In the past, the developer has even referred to this situation as a problem. He would rather have his traffic go to the main pages on the site than to the forums. Several “bridges” exist between the forums and the main site but the two seem rather independent of one another. Maybe the traffic issue has been solved in the meantime but the forums remain quite active.

My perception is that the reasons for the forums’ success include some “social” dimensions (the forum readership) and technical dimensions (the “reinvented” forum platform). None of these factors alone could explain the forums’ success but, taken together, they make it easy to understand why the forums are so well-attended.

In social terms, these forums reach something of a niche market which happens to be expanding. The niche itself is rather geeky in the passion for a product category as well as in the troubleshooting approach to life. Forum readers and participants are often looking for answers to specific questions. The signal to noise ratio in most of the site’s forums seems, on average, particularly high. Most moderation happens seamlessly, through the community. While not completely invisible, the site’s staff is rarely seen in most forum threads. Different forums, addressing different categories of issues, attract different groups of people even though some issues cross over from one forum to another. The forum users’ aggregate knowledge on the site’s main topic is so impressive as to make the site look like the “one-stop shop” for any issue related to the site’s topic. At the same time, some approaches to the topic are typically favored by the site’s members and alternative sites have sprung up in part to counterbalance a perceived bias on that specific site. A sense of community has been built among some members of several of the forums and the whole forum section of the site feels like a very congenial place.

None of this seems very surprising for any successful web forum. All of the social dynamics on the site (including the non-forum sections) reinforce the idea that a site’s success “is all about the people.”

But there’s a very simple feature of the site’s forum platform which seems rather significant: thread following through email. Not unique to this site and not that expertly implemented, IMHO. But very efficient, in context.

At the end of every post is a checkbox for email notification. It’s off by default so the email notification is “opt-in,” as people tend to call this. There isn’t an option to “watch” a thread without posting in it (that is, only people who write messages in that specific thread can be notified directly when a new message appears). When a new message appears in a specific thread, everyone who has checked the mail notification checkbox for a message in that thread receives a message at the email address they registered with the site. That email notification includes some information about the new forum post (author’s username, post title, thread title, thread URL, post URL) but not the message’s content. That site never sends any other mail to all users. Private mail is done offsite as users can register public email addresses and/or personal homepages/websites in their profiles.
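
Out of curiosity, here is a minimal sketch of how that opt-in logic might work. This is emphatically not the site’s actual code; all the names (Post, Thread, send_mail) are hypothetical, and a real implementation would deliver mail through SMTP rather than printing it:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    title: str
    notify: bool = False  # the per-post notification checkbox, off by default ("opt-in")

@dataclass
class Thread:
    title: str
    url: str
    posts: list = field(default_factory=list)

    def add_post(self, post, email_of):
        """Append a post, then notify everyone who opted in earlier in the thread."""
        # Only people who already posted *in this thread* and checked the box
        # get notified; there is no way to "watch" a thread without posting.
        subscribers = {p.author for p in self.posts if p.notify}
        subscribers.discard(post.author)  # the new post's author needs no notice
        self.posts.append(post)
        for user in sorted(subscribers):
            send_mail(
                to=email_of[user],
                subject=f"New post in: {self.title}",
                # Metadata only (author, titles, URLs); never the message body.
                body=f"{post.author} posted \"{post.title}\"\n{self.url}",
            )

def send_mail(to, subject, body):
    print(f"To: {to}\nSubject: {subject}\n{body}\n")  # stand-in for smtplib

# Bob checked the box on his post, so only he hears about Alice's follow-up.
emails = {"alice": "alice@example.net", "bob": "bob@example.net"}
thread = Thread("Router won't route", "http://forum.example.net/thread/42")
thread.add_post(Post("alice", "Router won't route"), emails)
thread.add_post(Post("bob", "Same problem here", notify=True), emails)
thread.add_post(Post("alice", "Found a workaround"), emails)
```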

There’s a number of things I don’t particularly enjoy about the way this email notification system works. The point is, though, it works pretty well. If I were to design a mail notification system, I would probably not do it the same way.  But chances are that, as a result, my forums would be less successful than that site’s forums are (from an outsider’s perspective).

Now, what does all this have to do with my original point, you ask? Simple: sometimes reinventing the wheel is the best strategy.

Free As In Beer: The Case for No-Cost Software

To summarize the situation:

  1. Most of the software for which I paid a fee, I don’t really use.
  2. Most of the software I really use, I haven’t paid a dime for.
  3. I really like no-cost software.
  4. You might want to call me “cheap” but, if you’re developing “consumer software,” you may need to pay attention to the way people like me think about software.

No, I’m not talking about piracy. Piracy is wrong on a very practical level (not to mention legal and moral issues). Piracy and anti-piracy protection are in a dynamic that I don’t particularly enjoy. In some ways, forms of piracy are “ruining it for everyone.” So this isn’t about pirated software.

I’m not talking about “Free/Libre/Open Source Software” (FLOSS) either. I tend to relate to some of the views held by advocates of “Free as in Speech” or “Open” developments but I’ve had issues with FLOSS projects, in the past. I will gladly support FLOSS in my own ways but, to be honest, I ended up losing interest in some of the most promising projects out there. Not saying they’re not worth it. After all, I do rely on many of those projects. But in talking about “no-cost software,” I’m not talking about Free, Libre, or Open Source development. At least, not directly.

Basically, I was thinking about the complex equation which, for any computer user, determines the cash value of a software application. Most of the time, this equation is somehow skewed. And I end up frustrated when I pay for software and almost giddy when I find good no-cost software.

An old but representative example of my cost-software frustration: QuickTime Pro. I paid for it a number of years ago, in preparation for a fieldwork trip. It seemed like a reasonable thing to do, especially given the fact that I was going to manipulate media files. When QuickTime was updated, my license stopped working. I was basically never able to use the QuickTime Pro features. And while it’s not a huge amount of money, the frustration of having paid for something I really didn’t need left me surprisingly bitter. It was a bad decision at that time so I’m now less likely to buy software unless I really need it and I really know how I will use it.

There’s an interesting exception to my frustration with cost-software: OmniOutliner (OO). I paid for it and have used it extensively for years. When I was “forced” to switch to Windows XP, OO was possibly the piece of software I missed the most from Mac OS X. And as soon as I was able to come back to the Mac, it’s one of the first applications I installed. But, and this is probably an important indicator, I don’t really use it anymore. Not because it lacks features I found elsewhere. But because I’ve had to adapt my workflow to OO-less conditions. I still wish there were an excellent cross-platform outliner for my needs. And, no, Microsoft OneNote isn’t it.

Now, I may not be a typical user. If the term weren’t so self-aggrandizing, I’d probably call myself a “Power User.” And, as I keep saying, I am not a coder. Therefore, I’m neither the prototypical “end user” nor the stereotypical “code monkey.” I’m just someone spending inordinate amounts of time in front of computers.

One dimension of my computer behavior which probably does put me in a special niche is that I tend to like trying out new things. Even more specifically, I tend to get overly enthusiastic about computer technology to then become disillusioned by said technology. Call me a “dreamer,” if you will. Call me “naïve.” Actually, “you can call me anything you want.” Just don’t call me to sell me things. 😉

Speaking of pressure sales. In a way, if I had truckloads of money, I might be a good target for software sales. But I’d be the most demanding user ever. I’d require things to work exactly like I expect them to work. I’d be exactly what I never am in real life: a dictator.

So I’m better off as a user of no-cost software.

I still end up making feature requests, on occasion. Especially with Open Source and other open development projects. Some developers might think I’m just complaining as I’m not contributing to the code base or offering solutions to a specific usage problem. Eh.

Going back to no-cost software. The advantage isn’t really that we, users, spend less money on the software distribution itself. It’s that we don’t really need to select the perfect software solution. We can just make do with what we have. Which is a huge “value-add proposition” in terms of computer technology, as counter-intuitive as this may sound to some people.

To break down a few no-cost options.

  • Software that came with your computer. With an Eee PC, iPhone, XO, or Mac, it’s actually an important part of the complete computing experience. Sure, there are always ways to expand the software offering. But the included software may become a big part of the deal. After all, the possibilities are already endless. Especially if you have ubiquitous Internet access.
  • Software which comes through a volume license agreement. This often works for Microsoft software, at least at large educational institutions. Even if you don’t like it so much, you end up using Microsoft Office because you have it on your computer for free and it does most of the things you want to do.
  • Software coming with a plan or paid service. Including software given by ISPs. These tend not to be “worth it.” Yet the principle (or “business model,” depending on which end of the deal you’re on) isn’t so silly. You already pay for a plan of some kind, you might as well get everything you need from that plan. Nobody (not even AT&T) has done it yet in such a way that it would be to everyone’s advantage. But it’s worth a thought.
  • “Webware” and other online applications. Call it “cloud computing” if you will (it was a buzzphrase, a few days ago). And it changes a lot of things. Not only does it simplify things like backup and migration, but it often makes for a seamless computer experience. When it works really well, the browser effectively disappears and you just work in a comfortable environment where everything you need (content, tools) is “just there.” This category is growing rather rapidly at this point but many tech enthusiasts were predicting its success a number of years ago. Typical forecasting, I guess.
  • Light/demo versions. These are actually less common than they once were, especially in terms of feature differentiation. Sure, you may still play the first few levels of a game in demo version and some “express” or “lite” versions of software are still distributed for free as teaser versions of more complete software. But, like the shareware model, demo and light software may seem to have become much less prominent a part of the typical computer user’s life than just a few years ago.
  • Software coming from online services. I’m mostly thinking about Skype but it’s a software category which would include any program with a desktop component (a “download”) and an online component, typically involving some kind of individual account (free or paid). Part subscription model, part “Webware companion.” Most of Google’s software would qualify (SketchUp, Google Earth…). If the associated “retail software” were free, I wouldn’t hesitate to put WoW in this category.
  • Actual “freeware.” Much freeware could be included in other categories but there’s still an idea of a “freebie,” in software terms. Sometimes, said freeware is distributed in view of getting people’s attention. Sometimes the freeware is just the result of a developer “scratching her/his own itch.” Sometimes it comes from lapsed shareware or even lapsed commercial software. Sometimes it’s “donationware” disguised as freeware. But, if only because there’s a “freeware” category in most software catalogs, this type of no-cost software needs to be mentioned.
  • “Free/Libre/Open Source Software.” Sure, I said earlier this was not what I was really talking about. But that was then and this is now. 😉 Besides, some of the most useful pieces of software I use do come from Free Software or Open Source. Mozilla Firefox is probably the best example. But there are many other worthy programs out there, including BibDesk, TeXShop, and FreeCiv. Though, to be honest, Firefox and Flock are probably the ones I use the most.
  • Pirated software (aka “warez”). While software piracy can technically let some users avoid the cost of purchasing a piece of software, the concept is directly tied with commercial software licenses. (It’s probably not piracy if the software distribution is meant to be open.) Sure, pirates “subvert” the licensing system for commercial software. But the software category isn’t “no-cost.” To me, there’s even a kind of “transaction cost” involved in the piracy. So even if the legal and ethical issues weren’t enough to exclude pirated software from my list of no-cost software options, the very practicalities of piracy put pirated software in the costly column, not in the “no-cost” one.

With all but the last category, I end up with most (but not all) of the software solutions I need. In fact, there are ways in which I’m better served now with no-cost software than I have ever been with paid software. I should probably make a list of these, at some point, but I don’t feel like it.

I mostly felt like assessing my needs, as a computer user. And though there always are many things I wish I could do but currently can’t, I must admit that I don’t really see the need to pay for much software.

Still… What I feel I need, here, is the “ultimate device.” It could be handheld. But I’m mostly thinking about a way to get ideas into a computer-friendly format. A broad set of issues about a very basic thing.

The spark for this blog entry was a reflection about dictation software. Not only have I been interested in speech technology for quite a while but I still bet that speech (recognition/dictation and “text-to-speech”) can become the killer app. I just think that speech hasn’t “come true.” It’s there, some people use it, the societal acceptance for it is likely (given cellphone penetration most anywhere). But its moment hasn’t yet come.

No-cost “text-to-speech” (TTS) software solutions do exist but are rather impractical. In the mid-1990s, I spent fifteen months doing speech analysis for a TTS research project in Switzerland. One of the best periods in my life. Yet, my enthusiasm for current TTS systems has been dampened. I wish I could be passionate about TTS and other speech technology again. Maybe the reason I’m not is that we don’t have a “voice desktop,” yet. But, for this voice desktop (voicetop?) to happen, we need high quality, continuous speech recognition. IOW, we need a “personal dictation device.” So, my latest 2008 prediction: we will get a voice device (smartphone?) which adapts to our voices and does very efficient and very accurate transcription of our speech. (A correlated prediction: people will complain about speech technology for a while before getting used to the continuous stream of public soliloquy.)

Dictation software is typically quite costly and complicated. Most users don’t see a need for dictation software so they don’t see a need for speech technology in computing. Though I keep thinking that speech could improve my computing life, I’ve never purchased a speech processing package. Like OCR (which is also dominated by Nuance, these days), it seems to be the kind of thing which could be useful to everyone but ends up being limited to “vertical markets.” (As it so happens, I did end up buying an OCR program at some point and kept hoping my life would improve as a result of being able to transform hardcopies into searchable files. But I almost never used OCR, so my frustration with cost-software continues.)

Ah, well…

Android "Sales Pitch" and "Drift-Off"

(Google’s Android is an open software platform to be put on cellphones next year.)

There’s something to this video. Something similar to Steve Jobs’s alleged “Reality Distortion Field,” but possibly less connected to presentation skills or perceived charisma. Though Mike seems to be a more experienced presenter than those we see in other videos about Android, and though the presentation format is much slicker than other videos about Android, there’s something special about this video, to me.

For one thing, the content of the three “Androidology” videos is easy to understand, even for a non-developer/non-coder. Sure, you need to know some basic terms. But the broad concepts are easy to understand, at least for those who have been observing the field of technology. One interesting thing about this is that these “Androidology” videos are explicitly meant for software programmers. The “you” in this context specifically refers to would-be developers of Android applications. At the same time, these videos do a better job, IMHO, of “selling Android to tech gurus” than other Android-related videos published by Google.

Now, I do find this specific video quite interesting, and my interest has to do with a specific meaning of “sales pitch.”

I keep going back to a Wired article about the “drift-off moment” during sales pitches (or demos):

When Mann gives a demo, what he’s waiting for is what salespeople call “the drift-off moment.” The client’s eyes get gooey, and they’re staring into space. They’re not bored – they’re imagining what they could do with SurveyBuilder. All tech salespeople mention this – they’ve succeeded not when they rivet the client’s attention, but when they lose it.

I apply this to teaching when I can and I specifically talked about this during a presentation about online tools for teaching.

This video on four of Android’s APIs had this effect on me. Despite not being a developer myself, I started imagining what people could do with Android. It was just a few brief moments. But very effective.

The four APIs discussed in this video are (in presentation order):

  1. Location Manager
  2. XMPP Service
  3. Notification Manager
  4. View System (including MapView and WebView)

Mike’s concise (!) explanations on all of these are quite straightforward (though I was still unclear on XMPP and on details of the three other APIs after watching the video). Yet something “clicked” in my mind while watching this. Sure, it might just be serendipitous. But there’s something about these APIs, or about the way they are described, which makes me daydream.

Which is exactly what the “drift-off moment” is all about.


[youtube=http://www.youtube.com/watch?v=MPukbH6D-lY&feature=PlayList&p=D7C64411AF40DEA5&index=2]

Legal Sense: CMS Edition

This one is even more exciting than the SecondLife statement.

After the announcement that the USPTO was reexamining Blackboard’s patents in a case against open source course management software, Blackboard Inc. is announcing that it is specifically not going to use its patents to sue open source and other non-commercial providers of course management software.

From a message sent to users of Blackboard’s products and relayed by the Moodle community.

I am writing to share some exciting news about a patent pledge Blackboard is making today to the open source and home-grown course management community.  We are announcing a legally-binding, irrevocable, world-wide pledge not to assert any of our issued or pending patents related to course management systems or transaction systems against the use, development or support of any open source or home-grown course management systems.

This is a major victory. Not only for developers of Moodle, Sakai, ATutor, Elgg, and Bodington course- and content-management solutions, but for anyone involved in the open and free-as-in-speech approach to education, research, technology, and law.

Even more so than in Microsoft’s case, Blackboard is making the most logical decision it could make. Makes perfect business sense: they’re generating goodwill, encouraging the world’s leading eLearning communities, and putting themselves in a Google-like “don’t be evil” position in the general public’s opinion. Also makes perfect legal sense as they’re acknowledging that the law is really there to protect them against misappropriation of their ideas by commercial competitors and not to crush innovation.

A small step for a corporation … a giant step for freedomkind.

Google Feature Overload

You know that feeling when you just realize that something really neat has been hidden in plain sight for a while and that most people had realized it before you did? That's my feeling about the current state of Google's products and features. Wasn't completely out of the loop: did learn about many features through tech podcasts and blog entries (Spreadsheets, Calendar, etc.). But some things just passed me by, like Co-Op and the Notebook browser extension (which does work on Mac OS X!).

One reason for my not noticing those items might have to do with the disparate classification of their products, tools, and features. Some neat things are found in the labs, others in Web Search Features, yet others appear only as content for the personalized homepage or as gadgets/plugins for Google Desktop. What's tagged as "new" is not always so new, while some seemingly new things aren't tagged as "new." And, as is well-known, Google tends to call "beta" products which appear quite stable and to not label some cutting-edge features as beta.

All in all, it's quite overwhelming.

Somewhere out there, there's certainly the perfect blog, podcast, or mailing-list for learning all the important news about Google's new stuff. But finding it implies knowing how active Google really has been, recently. Just amazing, really. And following yet another tech company's products shouldn't be a task in and of itself for the average user.

It must all be because of their policy to have developers work on their own projects a certain proportion of the time. An excellent approach to development, certainly, and the result isn't even a lack of direction. But the task of understanding the Google universe is daunting because the possibilities are endless. Some products are still rather pedestrian but some may imply deep changes in workflow or approach to the online world.

The Google Hacks book should be updated every week… 😉

Microsoft Disinforms on Open-Source and Free-Software

Can Windows and Linux Learn to Play Nice?:

A commercial company has to build intellectual property, while the GPL, by its very nature, does not allow intellectual property to be built, making the two approaches fundamentally incompatible, Muglia said.

Interesting take on “intellectual property.” It would benefit from a bit more of an explanation. Is “IP” the very foundation of any commercial company?

What’s more awkward, though, is that Microsoft veep Bob Muglia talks about the GPL in the context of open-source. As he surely knows, this is exactly where the terms “open-source” and “free software” are not interchangeable. While the two are quite similar, “free software” refers to a movement in favour of free (as in speech) or “libre” development, in direct opposition to the notion of “intellectual property.” “Open-source,” on the other hand, refers to a development process through which source code for software is shared by multiple developers in an open fashion, whether or not that code is meant to be protected as “intellectual property.” In fact, many open-source projects are not only interoperable with commercial software but do in fact have commercial licenses through which they protect their IP.

Whichever model we prefer, free or open, they’re models of very different things, and the two models are quite compatible in practice. They are both used in resistance to Microsoft’s hegemony. But confounding them serves little purpose in the discussion. Confusing the two might not even be a deliberate strategy on Muglia’s part. Interestingly enough, the “free software” vs. “open source” issue wasn’t even the main thrust of the Slashdot thread on the subject, at least in the beginning.


RefWorks, Reference Software

RefWorks
A "Personal Web-based Database and Bibliography Creator"

Apparently, people at IU South Bend asked several users for comments about different tools and ended up with RefWorks. Can see why. In terms of ease-of-use, it's very good. And it has many interesting features, including some that aren't found in the typical dedicated desktop applications.

I must admit, I'm rather impressed with their rate of release. They seem to follow the typical open-source model of "release early, release often."
In fact, although it's proprietary/closed-source/commercially-distributed (through CSA) and not necessarily inexpensive/free-as-in-beer, it's almost open-sourcesque in its approach. At least, much more so than Thomson/ISI products.
Funnily enough, CSA integrates with Endnote (made by ISI) better than ISI products do. 😉

Of course, there are several good bibliography solutions around. A cool open-source one is BibDesk. Originally meant for BibTeX data, it now does much more and serves as a cool solution to autofile PDF versions of academic articles (realising part of the dream of an "iTunes for academic papers"). What's neat about RefWorks is that it can be shared. Not only is it possible to make any number of accounts for specific projects (very cool solution for classes) but it has a specific tool for reference sharing, RefShare. Didn't use it yet but the rest of the program is good enough that RefShare can't be all bad.

Well, this is getting into a pseudo-review, which would be much more difficult to do. One thing that's rather impressive for an online system is that it accepted a submission of tens of thousands of references from an Endnote file without complaining too much (apart from server delays). So they don't seem to have a limit on the number of references.

Which leads us to an interesting point on reference software. [Start rambling…] A given item, say a reference to a journal article, will be present in many people's reference lists. Most of the data should be standardized for all occurrences of that item: author name, publication date, complete title… Some things are added by the user: date accessed, comments, reading notes… In good database design, RefWorks should only keep one copy of that item (with the standardized information) and have links to that item in people's lists. The customized info could probably be streamlined and will probably not amount to a lot of data.
Now, there's an interesting side-effect of this, as common references should in fact be standardized. One of the most nonsensical things with online reference databases is that you might have "Smith-Black, John D.," "Smith, J.," "John Daniel Black-Smith," and "Black, J.D.S." referring to the same person. Many programs have ways to standardize references locally but the power is there to have, once and for all, one standardized author ID with all associated info. Sure, the output might still end up as "Smith, J." in some bib formats. But at least the information would be kept. And there could be author pages with a lot of info, from institutional affiliation to publication lists and professional highlights.
The main advantage of having a centralised system is that changes could be applied globally (as in "across the system") as opposed to customised by each user. Authors could register themselves and add pertinent information. Readers could send comments to authors (if allowed explicitly). Copies of some publications could be linked directly. Comments by many users could be linked to a given publication. Think of the opportunities for collaboration!
And the simple time-saving advantage of having, once and for all, the correct, "official" capitalization of the title.
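
To make the rambling a bit more concrete, here is a sketch of the kind of normalized schema I have in mind. This is purely a guess at good database design, not RefWorks's actual internals; every table and column name below is made up:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
-- One canonical record per author: "Smith, J." and "John Daniel
-- Black-Smith" collapse into a single standardized author ID.
CREATE TABLE authors (
    author_id   INTEGER PRIMARY KEY,
    full_name   TEXT,   -- the "official" form of the name
    affiliation TEXT
);

-- One canonical record per published item, shared by all users.
CREATE TABLE items (
    item_id  INTEGER PRIMARY KEY,
    title    TEXT,      -- with the correct, "official" capitalization
    pub_date TEXT
);

CREATE TABLE item_authors (
    item_id   INTEGER REFERENCES items(item_id),
    author_id INTEGER REFERENCES authors(author_id)
);

-- Per-user customizations live in link rows, not in copies of the item.
CREATE TABLE user_refs (
    user_id       TEXT,
    item_id       INTEGER REFERENCES items(item_id),
    date_accessed TEXT,
    reading_notes TEXT
);
""")

# A fix to the canonical record propagates "across the system" at once,
# instead of being re-entered by every user who cites the item.
db.execute("UPDATE items SET title = ? WHERE item_id = ?",
           ("The Correct, Official Title", 1))
```
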
One important point: reading notes. Bibliographies are great. Even the maximal information needed for a given item in a bibliography would seem quite minimal (author(s), date(s), title(s)…). Presentation/format became an important issue because some publications are quite strict in their opinion that theirs is the "correct" way to display a reference. Yet there's much more that can be done with a database of academic references.
Yes, including reading notes.
Maybe it's just a personal thing but active reading implies some type of note-taking, IMHO. Doesn't need to be very elaborate and a lot of it can be cryptic. But it's truly incredible to see how useful it can be to have a file containing all reading notes (with metadata) from one pass over a given text. With simple search technology, looking for all things you've read that made you think of a specific concept can be unbelievably efficient in bringing ideas together. Nothing really fancy. Just a list of matches for a keyword. Basic database stuff. But, oh so good!
Again, it might be personal. What I tend to do is create a file for a given text I read and write notes with associated page numbers. Sometimes, it's more about a stream of consciousness started by a quote. Sometimes, it's the equivalent of underlining, for future use. And, sometimes, it's just a reminder of what's said in the text. This type of active reading takes incredibly long but the effects can be amazing.
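
As a sketch of how little machinery that "list of matches for a keyword" would take (assuming a hypothetical layout of one plain-text notes file per text read, kept in a notes/ directory):

```python
from pathlib import Path

def search_notes(keyword, notes_dir="notes"):
    """List every note line, across all readings, that mentions a concept."""
    notes = Path(notes_dir)
    if not notes.is_dir():
        return
    for notes_file in sorted(notes.glob("*.txt")):
        for line_no, line in enumerate(notes_file.read_text().splitlines(), 1):
            if keyword.lower() in line.lower():
                # The filename doubles as metadata: which text the notes came from.
                print(f"{notes_file.stem}, line {line_no}: {line.strip()}")

search_notes("reciprocity")  # e.g., every reading that evoked that concept
```
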
Of course, we all use different systems. It'd just be nice to have a way to integrate these practices with reference software. And with PDAs, of course! And with PDFs!
The dream: you read an article in PDF format on your PDA, you "enter" your reading notes directly in the PDF, and they're linked to your reference software. You could even share some of these notes with colleagues along with the PDF file.
Oh, sure, many people prefer to do their readings offline and few people have the inclination to type the notes they scribble in the margins. But for those of us who do most of our reading online, there could/should be ways to make life so much easier. Technologically, it should be quite easy to do.
[…Stop rambling. Well, for now, at least.]