Yesterday I took Merlot.org to task, mildly to be sure, but there is no doubt I did it. Merlot has two ways that contributed content gets reviewed: user reviews a la Amazon.com book reviews, and, for some of the content, professional reviews by hired faculty, very much in the mode that peer-reviewed journal articles are refereed. The latter mechanism emerged because some of the founders of Merlot wanted the creation of learning objects to count for promotion and tenure in the same way that the writing of journal articles counts. Their process was driven by that consideration.
One might have come up with quite a different process if one took as the primary need identifying content that downstream users would put into their courses. Merlot still doesn't do a good enough job for those users. In fields where there is a lot of content to choose from, how do I as a potential user of the content choose? The Amazon.com approach offers some information, but is it sufficient? Don't I as a user want to know more about the recommender before I give credence to the review? There are a lot of wackos out there. Why should I rely on their judgment? And, in truth, doesn't that same criticism apply to the peer-reviewed content? Just who is that reviewer and why should I trust that person?
Now let's step away from the particular issue and ask how people find Web content now. The answer, to me, seems to be that perhaps one starts with a Google search, or one goes to a known "trusted source," meaning a place where interesting and valued content has been found previously. Those are launch places. Then from the trusted sources there are links out to something, and one might follow the link from a link, etc.
I want to think a bit more about non-Google trusted sources. And for the moment let's focus on blogs. So go to Bloglines.com, do a search on your favorite topic (I chose edtech), and then, for some of the blogs that come up, click on "Subscribers" and you should see the list of those subscribers who made their subscription public. Then click on a few of them and see what they have as their feeds in the topic you searched. Then do a couple more iterations of the same. I think you'll find that there seem to be a few core blogs that many people read, and then a bunch more out there. Those core blogs end up being the trusted sources I mentioned above. And the commentary that those blogs provide, in my opinion, is similar to the type of commentary that movie reviews provide, except that the topic need not be movies. The movie review metaphor, however, is helpful here, in my opinion, because many people choose what movie to see based on what Roger Ebert says, or, in a bygone era, what Pauline Kael said.
So how about developing learning object content critics a la movie reviewers? Here's another area to compare with Merlot. Merlot had reviewers who were subject matter experts (though in some of the science disciplines, I believe, how expert some of the reviewers were was an issue). Ebert and Kael are experts in film, not experts in the subject matter of those films. Couldn't we have generalists, expert in learning objects, who review content across disciplines? We couldn't if this content creation is to count for promotion and tenure. But in terms of what works for the learner, I think it is more than possible. And especially if the bulk of the learning objects are to focus on the first- and second-year college experience, then this type of review might be much more valuable than disciplinary review. (Of course, if the content were fundamentally wrong but pleasing pedagogically, that would be a concern. So any single piece of content would have to be reviewed in comparison with other content that is already trusted.)
Does anybody do content reviews now? I think it comes up occasionally, en passant, in various blogs, but I'm not aware of folks who do it as a regular avocation and go out of their way to find new content to review. So the question I want to pose here is whether we in higher ed who want to promote open content should provide incentives for reviews to take place. And if we did, could we make it something other than cheerleading and more like film criticism? Typically authors don't know their peer reviewers in the refereeing process. But everyone knows Roger Ebert. For the film review process to work, the reviewer needs the intellectual freedom to say what he thinks.
Consequently, I’m unsure whether a review process of this sort can work. But it seems to me we should explore this approach more before we build yet another repository for online content. In my opinion we need to spend more time considering the social dimensions of the user’s choice and work to improve that. Repositories don’t do anything in that dimension.