In version 1.0 of her book, The Participatory Museum, Nina K. Simon described the importance of impact assessment:
Evaluation can help you measure the impact of past projects and advocate for future initiatives.
That chapter struck me as having a slightly different aim from other sections of The Participatory Museum. Much of the book is about helping practitioners undertake participatory projects in the museums where they work. Without using the business jargon of “best practices”, the thrust of the book is about following appropriate examples and avoiding pitfalls. That chapter on evaluation, however, sounds like it also aims to provide practitioners with some of the tools they can use to convince diverse people of the value of specific projects geared toward broad participation. In other words, that chapter had something to do with “giving ammunition” for the debates and negotiations leading to culture change.
To be clear, there’s a lot of continuity between that chapter and the rest of the book. That chapter also includes clear guidelines on making projects work and other chapters give neat examples which can help convince third parties. But I still perceived a bit of a shift in tone, between this chapter and the rest of the book, as though it were addressed to a slightly different crowd.
In this post on Museum 2.0, Simon addresses a gap between decision-makers and practitioners, as it relates to assessment. Doing so, Simon may clarify something about her chapter on evaluation.
This post revolves around a study which, as one of its authors confirms, “was written primarily with policymakers, school administrators, and other scholars as the intended audience”. Such a specific audience requires rhetorical devices which relate more to pleading than to constructing knowledge. While the study most likely follows the scientific method, the language used brings it close to judicial proceedings. “Solid evidence” isn’t just about data, and “proving” isn’t the same thing as “providing support for”. Perhaps more importantly, the “skeptical audience” is said to be “influenced by evidence”. So the purpose of such a research project is clear: getting certain decision-makers to change their minds about something we find important.
There’s nothing wrong with such an approach. Much research works like this, in the current context. So do some artistic projects. There’s a clear goal and the process used to achieve this goal sounds appropriate enough. Decision-makers may indeed be swayed by “solid evidence”. They may also be influenced by genuine insight, by people they deem influential, or by unrelated “arguments” (say, bribery). But it’s fine to treat them as rational actors who base their decisions on their own evaluation of the evidence presented to them.
There are some things to keep in mind. It’s one of those many situations in which transparency and honesty are quite important. A project which explicitly aims to convince me that museum tours enhance learning carries the burden of proof and lets me think critically about the issue. The type of assessment Simon was proposing had to be transparent and honest. Not only did it come from project participants, but any answer could be useful, including a negative assessment which could serve as feedback in the rest of the process (reminds me of Norbert Wiener…).
There’s also the broader context to keep in mind. Simon comes from the perspective of involving diverse participants, not giving more weight to those people already established as decision-makers. She’s quite clear on the importance of specific people in hierarchical structures (say, museum administrators). But her goal isn’t to help the hierarchy sustain itself. In a participatory context, there’s a lot to be said for “distributed” decision-making (in parallel to “distributed computing”). In a more collaborative structure, many decisions are made by many people, without waiting for approval or “buy-in” from higher levels of the hierarchy. A lot of cool things can happen through such collaborative structures, and we find many examples in this post-industrial era.
Maybe much of this relates to trust. At least, to the version of trust which underpinned Hanifan’s original concept of “social capital”. When members of a community (or other strong social unit) trust one another, some things are easier to do because there’s no requirement to constantly convince others that what they’re doing is OK. There’s considerable overhead in having to constantly justify everything you do. The current state of academic research illustrates this overhead. Much of it has to do with goal displacement, as research projects end up aimed at justifying their own existence.
Applied research and action research may develop outside of such a logic. While any research project can have a “convincing” document as part of its output, action research can accomplish a lot through the project itself. Simon’s book contains several examples of how that might work, including the simple yet effective idea of getting decision-makers to directly observe what is going on “in the field”. A report may be convincing. A video might be more convincing. Direct experience can generate new insight and make for sound decisions.