Tuesday, May 24, 2005

What would we want in a really good CMS? Part 3

I'm going to stick with the notion of using the CMS to promote critical thinking and specifically inquiry-based learning. If the word "experiment" is broadly conceived, then many open-ended assignments can be thought of as experiments. Some of these are experiments at the individual level only. The ones I want to focus on are experiments at the group or entire-class level. In such experiments the results from each student must be pooled to give meaning to what is found.

Conceivably the course management system could be viewed as the place to record the observations in the experiment and then to process them to some degree (I will illustrate below). The processed results would then serve as input to the drawing-conclusions and reflection parts of the inquiry process.

Consider this example, which was the first assignment in my econ principles class in spring '04. There were 15 students. I asked each student to identify the top 10 principles textbooks. The rules were that if a student identified a book from the top 10 and nobody else in the class identified the same book, that student would get 5 points. If at least two students identified a book, then nobody in the class would get any points for that book. There were no precise instructions on how they should report their books (by author, by publisher, with or without publishing date), nor were there rules about how many books they could suggest. (More than 10?) I used the assignment tool in WebCT Vista for this, because I needed to give them the points. (It turned out that one student earned 5 points; all the other submissions had duplicates, and collectively the class identified all the textbooks in the top 10.)

I had to compile the results to show what was in their responses. I did this manually and with some intelligence. You can see that compilation here. This looks unexceptional, but note that the headers of the columns are determined by the data; they were not preset. Those headers are the textbook authors. Shouldn't it be possible to go from straight survey data to a representation of the results like this? Would such a representation know to include a response even if there were a typo in the author's name? Or suppose that, in the case of a jointly authored work, the student included only one author. (In some cases earlier editions had only one author, and as the senior author matured he brought on a junior partner.) Should the response be included in that case?
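To make the question concrete, here is a minimal sketch of that compilation step, assuming fuzzy string matching is enough to fold typos into the right column. Everything in it is hypothetical (the `submissions` structure, the 0.8 similarity cutoff); it is not how WebCT Vista works. Notice that the fuzzy match catches the misspelling but not the joint-author case, which is exactly the harder judgment raised above.

```python
from difflib import get_close_matches

# Hypothetical raw submissions: student -> list of free-text book entries.
submissions = {
    "student1": ["Mankiw", "Samuelson", "Krugman"],
    "student2": ["Mankew", "Stiglitz"],           # "Mankew" is a typo
    "student3": ["Samuelson and Nordhaus"],       # junior co-author included
}

# Build columns keyed by the data itself, folding near-duplicates together.
columns = {}  # canonical name -> list of students who named that book
for student, books in submissions.items():
    for entry in books:
        key = entry.strip().title()
        # Fold a near-match (e.g., a typo) into an existing column.
        match = get_close_matches(key, list(columns.keys()), n=1, cutoff=0.8)
        canonical = match[0] if match else key
        columns.setdefault(canonical, []).append(student)

for author, students in columns.items():
    print(author, students)
```

Running this merges "Mankew" into the Mankiw column, but "Samuelson and Nordhaus" stays separate from "Samuelson"; resolving that would take more than string similarity.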

Now envision that, in addition to the representation as I've got it, some other statistics of the data were given, for example the count in each column and the rank statistic. (If you don't know about rank statistics, perhaps you recall the old TV show Family Feud, where the goal was to give responses that were in the top 3 in a previously conducted survey. This is the same idea.) These pieces of information are extremely useful in the reflection part. Where did the students go to get the information? Why did they look there? Did they deliberately try to behave unlike their peers in this respect?
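Continuing the hypothetical compilation sketch above, the count and rank statistics are only a few lines on top of the `columns` structure built there:

```python
# Count how many students named each book, then rank the books the way
# Family Feud ranks survey answers: rank 1 is the most frequent response.
counts = {author: len(students) for author, students in columns.items()}
ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)

for rank, (author, count) in enumerate(ranked, start=1):
    print(f"{rank}. {author}: named by {count} student(s)")
```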

CMS are now reasonably good at collecting (text-based) data from students. But a lot more headway could be made in representing the results and "letting the data speak." This is an obvious area for improvement.

Let me turn to a different way the data might improve things. In the first post on this topic, I mentioned Alfred Hubler, the inventor of CyberProf. One of his original ideas (this was before Amazon.com existed) was to use the history of student responses to a physics quiz question to recommend hints to a student working on the problem. In other words, the hints should be based on the past frequency of mistakes. At the time he was implementing the idea, it failed miserably: the server ran very slowly. Back then my home computer was a 70 Megahertz Power Mac. Now the family has a relatively new Dell at home with a CPU over 3 Gigahertz. That's a more than 40-fold increase in crunching power. (Gotta love Moore's Law.) It's time to take Alfred's idea out of mothballs and try again. And in the process, relying on the crunching power of the student's desktop computer makes sense.
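I don't know how CyberProf actually implemented this, but the core logic is simple enough to sketch: tally past wrong answers per question and, when a student errs, serve the hint attached to the most frequent past mistake resembling theirs. The `mistake_counts`, `hints`, and `recommend_hint` names below are all my invention, purely to illustrate the recommendation idea.

```python
from collections import Counter
from difflib import get_close_matches

# Hypothetical history: wrong answers previously submitted for one
# physics question, tallied by how often each occurred.
mistake_counts = Counter({
    "9.8": 120,    # found the acceleration, question asked for the force
    "-9.8": 45,    # sign error
    "98": 30,      # misplaced decimal
})
hints = {
    "9.8": "You've found the acceleration; the question asks for the force.",
    "-9.8": "Check the sign convention you chose for the downward direction.",
    "98": "Recheck where the decimal point goes in g.",
}

def recommend_hint(student_answer):
    """Serve the hint for the most frequent past mistake resembling this one."""
    candidates = get_close_matches(student_answer, list(mistake_counts.keys()),
                                   n=3, cutoff=0.5)
    if not candidates:
        return "No hint on file yet; this mistake gets logged for next time."
    best = max(candidates, key=lambda ans: mistake_counts[ans])
    return hints[best]

print(recommend_hint("9.8"))  # -> the acceleration-vs-force hint
```

Nothing here is computationally demanding by 2005 standards, which is why the idea seems worth reviving on the student's own machine.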

Let me make one more point where I don't have a specific recommendation about the software but I feel comfortable describing the general issue. A lot of students, for better or worse (mostly worse, in my opinion), treat the professor as an oracle and try to learn at the feet of said professor. They do this by sucking up everything the professor spits out. So the student view may be characterized by the impression that the professor possesses the truth and that their job is to gain it by listening to the professor. This view is antithetical to critical thinking.

The experimental approach that I've advocated means that students learn from observation and then reflection, on their own and with others in the class. This means that their perspective on the issue should go through a transformation, from naive and flat to more nuanced and deep. It would be extremely valuable to mark the stages where the perspective changes or is modified, either to reconcile an observation that wasn't predicted or to account for a conclusion that wasn't anticipated. By reviewing these markings, students should see themselves mature as they progress and become more aware of their own learning.
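I have no specific software design in mind here (more on that below), but just to make the idea concrete, one could imagine something as simple as timestamped reflection entries carrying a flag for "my view changed here," which the system could later play back as a trajectory. The structure below is purely illustrative, sample data and all.

```python
from dataclasses import dataclass
from datetime import date

# Purely illustrative: a journal entry a student or instructor can flag
# as a point where the student's perspective shifted.
@dataclass
class ReflectionEntry:
    when: date
    text: str
    perspective_shift: bool = False  # "my view changed here"

journal = [
    ReflectionEntry(date(2004, 2, 1),
                    "I assumed everyone would pick the same well-known books."),
    ReflectionEntry(date(2004, 2, 15),
                    "The duplicates surprised me; we all searched the same way.",
                    perspective_shift=True),
]

# Play back only the turning points so the student sees the trajectory.
for entry in journal:
    if entry.perspective_shift:
        print(entry.when, "-", entry.text)
```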

As I said, I'm less sure how this should be done. At one point in the late '90s I thought that portfolio assessment of a student's work would help demonstrate that trajectory to the students. But as ePortfolios have played out, they have lost this function and instead have been used to demonstrate student competence (not well, in my opinion). I hope the longitudinal assessment notion can reemerge.

Are there other areas where CMS can be made much better? Sure. So tomorrow I'll wrap up this topic with some other suggestions.
