Sunday, November 06, 2011

Some regrets about learning management systems

A traditional approach to education has learning preceding assessment, with the assessment activity itself distinct from the learning, and typically the assessment doesn't produce additional learning.  Here I'm not referring to the incentive effects the assessment might generate - students do study for an exam.  What I'm talking about is whether, while taking the exam, students might learn something new then and there.  If my experience as a teacher is any indicator, the expectation of the students is that this shouldn't happen.  They want to be tested on what they already know.

Given the stress that high stakes assessment generates, the student reaction is sensible.  They'd like to reduce uncertainty and have confidence in the grade they will earn, based on the preparations they've already made.  This, however, doesn't mean students feel the same way in a low stakes environment, such as doing homework online.  And if you focus on that environment, it is much more natural to have an iterative approach between learning and assessment.  Put a different way, learning is mainly by doing, and in the doing there is assessment at each step, a check on whether what just preceded makes sense and whether the learner is ready to proceed to the next step.

Those with strong learning-to-learn skills develop methods of self-assessment entirely on their own and use those methods to master new material and internalize that material into their own world view.  One big goal of college is to help students develop their learning-to-learn skills when those skills are not so well developed, as is the case for many students.  Homework should be part and parcel of that.  Unfortunately, the mechanism by which students develop learning-to-learn skills remains opaque.  When homework was done on paper, the assessment had to occur after the student turned in the assignment.  The technology of the time enforced the notion that assessment follows learning.  Textbook chapters had problems at the end for students to work.  Within a chapter there might be illustrative examples, but those were fully worked through.  Throughout my years of teaching I've had many students say, "I understand it when you explain it, but I can't work a problem on my own."  They don't realize that they don't understand it.  After reading a worked example, they receive no feedback that tests their own understanding.

With online technology and automated assessment there is the possibility of doing things differently.  Some years ago, at the behest of my friend Steve Acker, I wrote this piece on Dialogic Learning Objects for Campus Technology Magazine.  As an example, I talked about "content surveys" that asked questions at various junctures of the presentation.  The students were expected to provide a written response to each question, after which a suggested response was provided.  The students could back up and rewrite their responses after having seen the suggested response.  The iterative aspect was definitely in the content surveys, but the automated assessment at each juncture was not.  I downloaded the student responses, put them into a spreadsheet, and discussed some of the more interesting and revealing ones in class.
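To make the mechanics concrete, here is a minimal sketch of that dialogic flow in TypeScript.  None of this is the actual content survey tool; the names and structure are mine, invented for illustration.

```typescript
// A dialogic survey item pairs a prompt with a suggested response that is
// revealed only after the learner has committed a first draft.
interface SurveyItem {
  prompt: string;
  suggestedResponse: string;
}

// What gets recorded for the instructor: both drafts, so the revision is visible.
interface ItemRecord {
  firstDraft: string;
  revisedDraft: string;
}

// One dialogic step: commit a draft, see the suggested response, revise.
// `revise` stands in for the learner rereading and rewriting; here it is
// just a callback so the flow can be demonstrated without a real UI.
function dialogicStep(
  item: SurveyItem,
  firstDraft: string,
  revise: (suggested: string, draft: string) => string
): ItemRecord {
  const revisedDraft = revise(item.suggestedResponse, firstDraft);
  return { firstDraft, revisedDraft };
}

// Demo: the learner appends a correction after seeing the suggested response.
const item: SurveyItem = {
  prompt: "Why might a price ceiling create a shortage?",
  suggestedResponse:
    "At a price below equilibrium, quantity demanded exceeds quantity supplied.",
};
const record = dialogicStep(item, "Because the price is too low.", (s, d) =>
  d + " Revised: " + s
);
console.log(record);
```

Recording both drafts matters: the revision itself is where an instructor would look for evidence of learning, which is roughly what I was doing by hand with the spreadsheet.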

I don't know how many other disciplines can be described this way, but economics certainly can be divided into models and their understanding, on the one hand, and storytelling about real-world economic applications, on the other.  It's the storytelling part that I was getting at with the content surveys.  The model part perhaps can be done with automated assessment, in whole or in part.

Here are two examples of model-type dialogic objects done in Excel.  The first is on the elements of supply and demand, developed in a particular way to tie individual behavior to what happens in the market overall.  (You have to download the Excel file to do this.  You can't do it in Google Docs, which here is used simply as an open repository for the content.)  The second develops the buyer-side notion of reservation price and the analogous seller-side notion of opportunity cost, and talks about substitutes and complements in the market.  These examples do the dialogic part quite well and do have automated assessment.  I have recently rewritten them, taking out all macros and ActiveX controls, so they should work on a Mac as well as on a PC, as long as you have a recent version of Excel.

You can't do in the LMS what I've done with Excel.  Indeed, you can't even come close.  The question is why.  Let me give two different reasons.  One is that browsers are more limited than applications.  The second is that there's been a lack of imagination in developing the LMS, so the assessment engines remain unsophisticated, and that limits what can be done in the environment.  I'll illustrate some instances of each.  My further remarks are meant for all LMS, but I don't know all the systems well, so I will refer explicitly only to the ones I know at least a little.

The old WebCT Vista and the current Blackboard Learn have a self-test function.  When writing a self-test, the designer has the option of providing feedback either immediately, as soon as the student has answered the particular question, or deferring all feedback till the student finishes the self-test.  (Moodle 1.9, I believe, doesn't have that function.  You can give practice quizzes worth zero points, but you can't provide immediate feedback in those.)  Feedback immediately after a question is answered is consistent with the dialogic approach.  However, there was no way for the instructor to track whether the students did the self-test.  Such tracking (a participation credit, if you will) is a necessary component of an effective system.  Many students will do the work if they get credit for it, but not otherwise.  Why isn't there a self-test with a tracking option?  I attribute this to lack of imagination, since all the component functionality exists in other forms of assessment in these systems.
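To show that the two pieces combine naturally, here is a hedged sketch in TypeScript of a self-test that gives immediate per-question feedback yet still logs completion for participation credit.  Nothing here corresponds to actual Blackboard or Moodle internals; the point is only that the functionality is not at odds with itself.

```typescript
// One self-test question: a checker plus feedback shown immediately,
// whether the answer is right or wrong.
interface SelfTestQuestion {
  prompt: string;
  check: (answer: string) => boolean;
  feedback: string;
}

// The tracking record an instructor could grade on effort alone.
interface ParticipationLog {
  studentId: string;
  questionsAttempted: number;
  completed: boolean;
}

function runSelfTest(
  studentId: string,
  questions: SelfTestQuestion[],
  answers: string[]
): ParticipationLog {
  let attempted = 0;
  for (let i = 0; i < questions.length && i < answers.length; i++) {
    const correct = questions[i].check(answers[i]);
    // The dialogic part: feedback right away, after each question...
    console.log(correct ? "Correct." : "Not quite. " + questions[i].feedback);
    attempted++;
  }
  // ...and the tracking part: a record of participation, not a score.
  return {
    studentId,
    questionsAttempted: attempted,
    completed: attempted === questions.length,
  };
}
```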

The WebCT and Blackboard systems have a random number question type.  It allows a single answer only.  What I want, however, is to have random numbers for a problem, with a graph, where there are multiple questions pertaining to the same problem and where the realizations of the random numbers stay fixed from one question to the next.  The LMS does allow randomized questions within an assessment, but you can't correlate the realizations of those across questions.  The upshot is that if you want a bunch of questions in the LMS that refer to the same scenario, you have to give up on randomization in the parameter values.  I should note here that Moodle does offer a Cloze question type that allows for multiple questions within a question.  Presumably, you could write multiple versions of the same Cloze question, with each version representing a particular realization of the random values.  It is possible but clunky.  And it doesn't address the immediate feedback issue.  Here too I attribute the limitation to a lack of imagination.
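Here is a sketch of what I'm asking for: draw the scenario's parameters once from a seeded generator, then derive every sub-question from that single realization.  The PRNG (mulberry32, a common small generator) and the linear supply and demand setup are my choices for illustration, not anything the LMS provides.

```typescript
// Small seeded PRNG so a given student's numbers are reproducible.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Linear demand P = a - bQ against linear supply P = c + dQ.
interface Scenario {
  a: number; b: number; c: number; d: number;
}

// Draw the random parameters ONCE per student.
function drawScenario(studentSeed: number): Scenario {
  const rand = mulberry32(studentSeed);
  return {
    a: 80 + Math.floor(rand() * 40), // demand intercept
    b: 1 + Math.floor(rand() * 3),   // demand slope
    c: 10 + Math.floor(rand() * 20), // supply intercept
    d: 1 + Math.floor(rand() * 3),   // supply slope
  };
}

// Several questions, one realization: every answer keys off the SAME scenario.
function answers(s: Scenario) {
  const qStar = (s.a - s.c) / (s.b + s.d); // equilibrium quantity
  const pStar = s.a - s.b * qStar;         // equilibrium price
  return { qStar, pStar };
}

const scenario = drawScenario(20111106); // seed tied to the student
console.log(scenario, answers(scenario));
```

Because the seed is tied to the student, the same numbers reappear in each sub-question, which is exactly the correlation across questions that the current question types lack.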

The LMS allows for third-party hosted assessment done as a SCORM module, so you might have thought the issue could be outsourced to other content tools, such as Adobe Presenter, which does allow quiz questions with immediate feedback to be interspersed in a presentation.  My experience with that is that the questions must be simple (multiple choice or matching) and the responses can't be tied to any random variables.  SCORM modules may be good and useful for other things.  But I don't believe they solve this set of issues.
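To be fair to the standard itself, the reporting side is simple enough.  Here is a rough sketch of how a hand-built module sends a score back through the SCORM 1.2 runtime.  The call names are the real SCORM 1.2 calls, but the API discovery is simplified (real content has to search parent frames for the API object), so treat this as an outline rather than production code.  The bottleneck is the authoring tools, not the protocol.

```typescript
// Assume the SCORM 1.2 API object sits directly on window; real modules
// must walk up through parent frames to find it.
declare const window: {
  API?: {
    LMSInitialize(arg: string): string;
    LMSSetValue(element: string, value: string): string;
    LMSCommit(arg: string): string;
    LMSFinish(arg: string): string;
  };
};

function reportScore(rawScore: number): void {
  const api = window.API;
  if (!api) return; // running outside an LMS; nothing to report to
  api.LMSInitialize("");
  api.LMSSetValue("cmi.core.score.raw", String(rawScore));
  api.LMSSetValue(
    "cmi.core.lesson_status",
    rawScore >= 70 ? "passed" : "failed" // hypothetical cutoff
  );
  api.LMSCommit("");
  api.LMSFinish("");
}
```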

There is a further issue with economics in particular, though it perhaps exists in other disciplines as well.  Since the subject is so graphical in nature, when random numbers are used it would be good for those numbers to appear in the graphs as well.  By 1997, if I recall correctly, Mallard could do that.  The designer had to specify not just the random number but also the (x,y) coordinates of where that random number should appear in the graph.  I believe this was done with JavaScript that produced an overlay on the original image.  The random number was in the overlay.  Generating the right (x,y) coordinates was a pain, but it was possible.  It is a snap to do this in Excel.  Much of it is done automatically.  This one I attribute to limitations in browser functionality.
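Here is my reconstruction of that overlay trick, sketched in TypeScript for a web page rather than Mallard itself: the graph is a static image, and the randomized number is absolutely positioned on top of it at designer-supplied pixel coordinates.  The function name and the coordinates are hypothetical.

```typescript
// Place a randomized number over a static graph image at (x, y).
// `container` wraps the <img> and must have CSS position: relative so the
// absolutely positioned label is measured from the image's corner.
function overlayNumber(
  container: HTMLElement,
  value: number,
  x: number, // pixels from the container's left edge
  y: number  // pixels from the container's top edge
): void {
  const label = document.createElement("span");
  label.textContent = String(value);
  label.style.position = "absolute";
  label.style.left = x + "px";
  label.style.top = y + "px";
  container.appendChild(label);
}

// e.g., put a randomized demand intercept where it belongs on the axis
// (element id and coordinates are made up for the example):
// overlayNumber(document.getElementById("graph")!, 97, 12, 40);
```

The pain is visible in the signature: someone has to work out the pixel coordinates by hand for every number, which is precisely the step Excel handles automatically.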

With these limitations, the assessment engines in the LMS are kind of dull.  As a result, the content that is written for them also tends to be dull.  The real issue here is not any one particular functionality but rather that we've not seen a broad culture develop to produce rich content with interesting assessment.  And we've certainly not seen the hope I articulated in that Campus Technology piece come to pass: that we'd begin to see students producing these sorts of learning modules as part of their coursework, to be redeployed in future offerings of the course, so that over time a suitable volume of rich content would accumulate.

As I've been writing this piece, I read Why Science Majors Change Their Minds.  My son, who is a sophomore in Industrial Engineering here, sent it to my wife.  She forwarded the link to me.  The piece argues strongly that we need project-oriented STEM education.  I agree.  But I suspect there still will be basic courses taught more traditionally.  One might have hoped 15 years ago that the LMS would improve instruction in those basic courses.  Alas, the LMS has not lived up to that potential.
