Yesterday I took Merlot.org to task, mildly to be sure, but there is no doubt I did it. Merlot has two ways that contributed content gets reviewed: user reviews a la Amazon.com book reviews and, for some of the content, professional reviews by hired faculty, very much in the mode that peer-reviewed journal articles are refereed. The latter mechanism emerged because some of the founders of Merlot wanted the creation of learning objects to count for promotion and tenure in the same way that the writing of journal articles counts. Their process was driven by that consideration.
One might have come up with quite a different process if one took as the primary need identifying content that downstream users would put into their courses. Merlot still doesn’t do a good enough job for them. In fields where there is a lot of content to choose from, how do I as a potential user of the content choose? The Amazon.com approach offers some information, but is it sufficient? Don’t I as a user want to know more about the recommender before I give credence to the review? There are a lot of wackos out there. Why should I rely on their judgment? And, in truth, doesn’t that same criticism apply to the peer-reviewed content? Just who is that reviewer, and why should I trust that person?
Now let’s step away from the particular issue and ask how people find Web content now. The answer, to me, seems to be that one starts with a Google search, or one goes to a known “trusted source,” meaning a place where interesting and valued content has been found previously. Those are launch places. Then from the trusted sources there are links out to other content, and one might follow a link from a link, and so on.
I want to think a bit more about non-Google trusted sources, and for the moment let’s focus on blogs. Go to Bloglines.com, do a search on your favorite topic (I chose edtech), and then for some of the blogs that come up, click on “Subscribers”; you should see the list of those subscribers who made their subscription public. Then click on a few of them and see what they have as their feeds in the topic you searched. Then do a couple more iterations of the same. I think you’ll find that there seem to be a few core blogs that many people read and then a bunch more out there. Those core blogs end up being the trusted sources I mentioned above. And the commentary that those blogs provide, in my opinion, is similar to the type of commentary that movie reviews provide, except that the topic need not be movies. The movie review metaphor, however, is helpful here because many people choose what movie to see based on what Roger Ebert says, or in a bygone era what Pauline Kael said.
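To make the “few core blogs” observation concrete, here is a minimal sketch, in Python, of the tally you are implicitly keeping as you click around: count how often each feed shows up across the public subscription lists you visit. The blog names and subscription lists below are made up for illustration; the sketch assumes only hand-collected lists, not any particular Bloglines API.

```python
from collections import Counter

# Hypothetical stand-ins for the public "Subscribers" feed lists
# one would collect by hand while clicking through Bloglines.
subscription_lists = [
    ["Blog A", "Blog B", "Blog C"],
    ["Blog A", "Blog C", "Blog D"],
    ["Blog A", "Blog B", "Blog E"],
    ["Blog B", "Blog C", "Blog F"],
]

# Count how many subscribers carry each feed.
counts = Counter(feed for feeds in subscription_lists for feed in feeds)

# Feeds appearing in at least half of the lists are the "core" blogs,
# i.e., the trusted sources discussed above.
threshold = len(subscription_lists) / 2
core_blogs = [feed for feed, n in counts.most_common() if n >= threshold]
print(core_blogs)  # e.g., ['Blog A', 'Blog B', 'Blog C']
```

The point of the sketch is simply that the counts come out top-heavy: a handful of feeds recur in most of the lists, and those are the ones that function as trusted sources.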
So how about developing learning object content critics a la movie reviewers? Here’s another area to compare with Merlot. Merlot had reviewers who were subject matter experts (though in some of the science disciplines, I believe, how expert some of the reviewers were was an issue). Ebert and Kael are experts in film, not experts in the subject matter of those films. Couldn’t we have generalists who are expert in learning objects review content across disciplines? We couldn’t if this content creation is to count for promotion and tenure. But in terms of what works for the learner, I think it is more than possible. And especially if the bulk of the learning objects are to focus on the first and second year college experience, then this type of review might be much more valuable than disciplinary review. (Of course, if the content was fundamentally wrong but pleasing pedagogically, that would be a concern. So any single piece of content would have to be reviewed in comparison with other content that is already trusted.)
Does anybody do content reviews now? I think it comes up occasionally en passant in various blogs, but I’m not aware of folks who do that as a regular avocation and go out of their way to find new content to review. So the question I want to pose here is whether we in higher ed who want to promote open content should provide incentives for reviews to take place. And if we did, could we make it something other than cheerleading and more like film criticism? Typically authors don’t know their peer reviewers in the refereeing process. But everyone knows Roger Ebert. For the film review process to work, the reviewer needs the intellectual freedom to say what he thinks.
Consequently, I’m unsure whether a review process of this sort can work. But it seems to me we should explore this approach more before we build yet another repository for online content. In my opinion we need to spend more time considering the social dimensions of the user’s choice and work to improve that. Repositories don’t do anything in that dimension.
Good point about the different purposes of reviews. I'm not so sure that the MERLOT history part is accurate - I think the issue is more that these differences were not evident back when MERLOT began, rather than choosing one goal over others [e.g., a scholarly approach versus a testimonial "this is what it did for me" approach].
Re giving credence to reviews by unknown reviewers: since MERLOT does multiple reviews of each object, which are coalesced by an editor, the chance that one wacky review dominates is reduced. Ultimately, if people find the MERLOT reviews effective, the credibility will accrue to the process rather than the individuals - the same reason academic journals rely on their reputation and keep reviewers anonymous.
Good point about the difference between a content expert as reviewer versus an instructional expert as reviewer. The MERLOT review process at taste.merlot.org/catalog/peer_review/eval_criteria.htm tries to balance these aspects - along with expertise on ease of use - but it may be too much to ask for every reviewer to have expertise across the board. Peter Taylor and others in Australia authored a report which called for reviews by a team of three people, each expert in one of these facets.
At least one of the MERLOT partners is trying to move the peer review process upstream in development, so that during design there is more explicit consideration of reusability by having a variety of contexts as well as expertise represented.
Re your point about browsing versus searching: another direction to ponder is browsing learning objects as sources of learning activities whose design pattern can be reused with other content. That implies quite a different form of inquiry from a content-oriented search or browse, more based on the problems learners commonly encounter in mastering the content.
Tom - thanks for your thoughtful and informative comment. I'll admit to being not fully sure of the history of Merlot. What I recall comes from conversations Carl Berger had with our CIC Learning Technology group some time ago.
I do want to go back to the main point, which is that as a user I want to know who the reviewer is, and I hope the reviewer has a reputation for producing good reviews; the question then is how such a reputation might be built. So, somewhat orthogonal to Merlot's reviewing practice, it would be good to understand how users get to know about the reviewers.