Saturday, October 09, 2021

Learning Objects and Micro-Lectures Revisited - Introducing Peer Review by Students into Redesign

About two to three weeks ago I completed what might prove to be my last professional obligation.  I refereed a paper for the Online Learning Journal.  I have some history with this journal, which was previously known as the Journal of ALN (Asynchronous Learning Networks).  Indeed, I have the lead article in the very first volume of the journal.  Having published in a journal, one takes on an obligation to referee for it.  I'd like to begin by talking a bit about refereeing and about the obligation to do so.

I cut my teeth on refereeing not with online learning but rather with economic theory, which is what I studied in graduate school.  There I learned to get into the depths of a paper, to really understand what the author was driving at, but also to look for flaws in the development of the model.  Working papers precede papers published in journals, and working papers are what most practicing economists read.  If there are flaws in the former but they are not fatal and can be remedied, then part of the refereeing process is to indicate the path whereby this can be done.  Not all PhDs in economics have the skill to read a paper in such depth.  I recall that in my early years as an assistant professor, one colleague in particular wanted me to read his working papers and give him feedback, even though I had the habit (annoying to him) of stopping once I identified a flaw, not reading further until another version was produced that corrected the error.

Writing a referee report is something more than giving feedback to a colleague.  I learned how to do it over time, by receiving referee reports on my own work, some of which I regarded highly while others I thought superficial.  And as I struggled early on to get my work published, getting a handle on this form of negotiation with the referee and the journal editor became crucial for my academic survival.  It was a trial by fire, quite stressful but also very educative.  My own style in writing referee reports, after I had tenure, was born of that assistant professor experience.  At the time the typical process was single-blind reviewing, meaning the referee's name was hidden from the author, but not vice versa.  Most of the papers I ended up reviewing were written by assistant professors at other universities.  Once or twice, I got a paper from a well-known economist to review.  My recollection of the latter is that the paper was not carefully written, which created its own set of issues.  And there were usually two different referees, a way to ensure that more than one perspective on the paper would be considered.  The editor might then be thought of as a third referee who also plays a different role, as arbitrator, determining the disposition of the paper.  I want to note, for those who haven't ever done this, that the referee typically sends a cover letter to the editor along with the referee report, in which a recommendation for the disposition of the paper is given.

Refereeing at the journals where I participated was not compensated.  The reward for doing a good job in refereeing was to earn the respect of the editor.  In contrast, if a referee did what was evidently only a superficial job, the editor might then develop enmity for the referee, unless it was evident that some life event interfered with doing the refereeing well, in which case it would have been appropriate for the referee to decline reviewing the paper.  Concocting faux life events to get out of refereeing is another way to earn the editor's enmity.  One might then ask whether there is sufficient incentive in the process for most referees to be earnest in their reviews.  Thinking this way, in my opinion, leads down the road to perdition.  It is a sense of obligation to the profession that makes a scholar do a good job in refereeing.  That sense of obligation is a necessary component to make the system function well overall.

But, I think most undergraduates I've taught in the previous decade don't understand this sense of obligation, except perhaps in volunteer work they do.  In their understanding, the main type of work they hope to have upon graduation is all about incentives, embedded mainly in how they will be paid.  And I'm afraid that is how they go about their studies, with such a heavy focus on grades, which reinforces this incentives-only outlook.  I've had a few students whom I mentored after the class ended, or with whom I had an extended email thread during the course that continued afterwards, and they couldn't understand why I would do this since I'm retired.  Why not simply "enjoy life" instead?  Why take an interest in how the student develops, an interest that goes beyond what other instructors apparently show and extends beyond the semester in which I've been paid to teach the class?  They don't see the sense of obligation I have for doing this, which is much the same as the sense of obligation in refereeing.

But being retired does insert an added wrinkle.  For how long does the sense of obligation extend?  Does it end at retirement?  Does it last as long as the person is physically and mentally up to fulfilling it, whether working or retired?  Or is it something else - competence in the subject matter and whether it might have eroded over time - that determines the extent of the obligation?  In my particular case, I am time abundant and can readily afford a departure from "enjoying life" to referee a paper.  Yet for the OLJ review I was afraid in advance that I was not competent, as the paper would invariably be about some experience with online learning during the pandemic, while I had no such experience myself, having last taught in fall 2019.  I asked the editors about this.  They left it up to me.  After the fact, I felt okay about reviewing the particular paper, although the general issue remains, and I asked them not to send me papers in the future.  On a different matter, OLJ now does double-blind reviews.  (The referee is not told the name of the author(s), though might glean clues from the references and other mentions in the paper.)  Would an author want the referee to know who they are?  I couldn't figure that one out.

But I also wondered whether this sense of obligation can be taught - to undergraduates - and, if so, how might that be done?  In what follows, that question should be kept in mind.

* * * * * 

Let's now turn to learning objects and micro-lectures.  I made a slew of these back in spring 2011, the first semester I taught after retiring.  These were for intermediate microeconomics.  Most, if not all, of the learning objects were Excelets made to illustrate the economics.  I then made screen-capture videos, each reviewing one worksheet, demonstrating how to manipulate the controls and how to understand what is being graphed.  These videos were then captioned.  The voice-over, done in my usual style, was made without referring to notes.  The presentation is more casual than you would find in a textbook, but nonetheless comprehensive on the topic under consideration.  At the time I taught this course I was aware of the possibility that I might teach it again in the future in blended format (where some online activity substitutes for face-to-face lecture).  That didn't happen, as we got a new department head in Economics soon thereafter and he had other ideas about how intermediate microeconomics should be taught.  But this explains why I went to the trouble of producing the Excelets and the micro-lectures, which took considerable effort to make.

The students in the class didn't like the micro-lectures, but then they didn't like the class as a whole.  I gather that this was mainly because they found the exams difficult and the videos didn't seem to prepare them for the exams as they expected.  I found this disappointing, though it brought back memories from the early 1980s when I first taught intermediate micro and had similar struggles.  The story would end right here except that a funny thing happened after the course was over.  The videos continued to get hits and the occasional comment that thanked me for making the video. 

For a while it was a mystery as to who was watching these videos.  Though I don't have absolute confirmation on this, I gathered that most of the viewers were students who were taking intermediate micro or some other microeconomics class, with the vast majority taking the class at some other university, quite possibly not in the U.S.  Either their instructor or their textbook was difficult to follow on a particular topic, so they went to YouTube looking for a video from some other instructor that might be easier to understand or more thorough in the explanation provided.  If that was indeed happening, then the micro-lecture presentation content, which I made, was decoupled from any assessment content that the students might experience, which would be provided by their own instructor.  So these students should have a different perspective from the students who took my class, as they could judge the micro-lectures simply by whether they felt they understood what was going on immediately after watching them.

Now we have reached the point where I can explain why I've focused only on my own learning objects and micro-lectures.  YouTube provides analytic information to video creators about viewer access to videos.  I only have such data for my own creations.  It would be extremely interesting to have the analogous data from a wide variety of instructors.  But lacking that, I will maintain this narrower focus.  Below is information for the top 5 videos from the past month, 4 of which were created during that spring 2011 semester.  (Note that the average duration times are in minutes:seconds format.)

[Table: the top 5 videos from the past month, showing total views and average view duration for each.]
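For readers curious where numbers like these come from, here is a minimal sketch of pulling such a report with the YouTube Analytics API (v2).  The helper name, date range, and output formatting are my own illustrative assumptions, and obtaining the OAuth credentials (creds) for the channel is assumed to have been done beforehand.

```python
# A sketch of pulling a "top videos" report via the YouTube Analytics API (v2).
# Assumes OAuth credentials (creds) for the channel were obtained beforehand;
# the date range and helper name are illustrative, not part of the original post.
from googleapiclient.discovery import build

def top_videos_past_month(creds, start_date="2021-09-09", end_date="2021-10-09"):
    yt_analytics = build("youtubeAnalytics", "v2", credentials=creds)
    response = yt_analytics.reports().query(
        ids="channel==MINE",                   # the authenticated user's channel
        startDate=start_date,
        endDate=end_date,
        metrics="views,averageViewDuration",   # averageViewDuration is in seconds
        dimensions="video",
        sort="-views",                         # most-viewed first
        maxResults=5,
    ).execute()
    for video_id, views, avg_seconds in response.get("rows", []):
        minutes, seconds = divmod(int(avg_seconds), 60)
        print(f"{video_id}: {views} views, average duration {minutes}:{seconds:02d}")
```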
Each video is only a few minutes in its entirety.   Students go out of their way to start viewing the videos, as there is no requirement for them to do so.  Yet most stop well before the video is complete.  Why does this happen?  It is something of a mystery.  

Originally, before I looked further at the analytics data, I hypothesized that a small number of dedicated students watched the videos through to their conclusion, while the rest would watch only briefly and then stop.  In fact, it's more of a continuous falling off in the distribution, as illustrated below.  This is for the Isoquants video, the top one listed above, but now for the past year rather than just the past month.  I switched to yearly data so there would be enough information to plot, though the monthly graph looks quite similar.

[Graph: audience retention for the Isoquants video over the past year - the fraction of viewers still watching declines continuously as the video progresses.]
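To make the contrast concrete, here is a small sketch, with entirely made-up numbers, of the retention pattern I had hypothesized (a plateau of dedicated finishers plus early quitters) against the smooth continuous falloff the analytics actually show.

```python
# Made-up numbers contrasting the hypothesized retention pattern (a core of
# dedicated finishers plus early quitters) with the continuous falloff the
# analytics actually show.  Nothing here is real data.
import numpy as np
import matplotlib.pyplot as plt

position = np.linspace(0, 1, 101)            # fraction of the video elapsed

# Hypothesized: 30% watch to the end; the rest bail out very early.
hypothesized = 0.30 + 0.70 * np.exp(-15 * position)

# Observed (stylized): a smooth, steady decay with no plateau of finishers.
observed = np.exp(-2.5 * position)

plt.plot(position, hypothesized, label="hypothesized (finishers + early quitters)")
plt.plot(position, observed, label="observed (continuous falloff)")
plt.xlabel("fraction of video elapsed")
plt.ylabel("fraction of viewers still watching")
plt.legend()
plt.show()
```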
Now let's consider a hypothetical in which an evaluator gets to interview a student viewer of this video soon after the student finishes watching it.  If the student watched to completion, did the student feel the material in the video was well explained and easily understood?  If the student stopped before the end of the video (and maybe this should be segmented into stopping early, stopping about halfway, and stopping later but still before the end), was the student satisfied or disappointed with the experience?  Why did the student stop viewing?  Beforehand, what was the student hoping to get out of the viewing?  Could the video have been done in a different way that would have produced a more satisfying experience for the student?

Of course, it is quite possible for factors unrelated to video quality to explain when a student stops viewing.  For example, if the student is multiprocessing, then it can be the lure of something else online that is the key factor.  Alternatively, if the student has network connectivity issues, those might end the viewing involuntarily.

There is a way that these two different sorts of explanation overlap.  The analytics reveal that well over half of those who access the video do so via search in YouTube.  But once one finds a hit to that search and goes to it, one will be confronted with videos on the same topic that are showcased in the right sidebar.  Does the student plan to watch all of these or only one?  Which video on the subject matter gets top ranking by the search engine?  If one reason to stop watching a video is to watch another on the same topic, how does the stopping time get determined in that case?  And is the stopping time impacted if the viewer has already watched part of a video on the same topic?

Crowdsourcing video quality may be sensible for certain types of content.  I don't want to dispute that.  But for academic content, which surely will not go viral, it may not be the best way.  Wouldn't it be better for a student to watch a reasonably well done video in its entirety than to flit between various videos on the same subject done by different instructors?  It may matter less which instructor one watches than that one gets the complete lesson.  Extrapolating based on the data I have, that seems to occur infrequently.

To sum up, the reason why most students don't watch the video to completion can be categorized as some flaw with the video (there are needless sticking points during the presentation), or some flaw with the student (the student lacks the necessary background or gets lost too easily in the argument), or extraneous factors (mainly multiprocessing and living life online).  In what follows I will abstract from the third category, as I have nothing to say about how to manage those extraneous factors.  If it proves to be the main cause, what I do say should be discounted accordingly.  I find it amusing that what remains is remarkably similar to the situation in the early to mid 1990s, when I taught intermediate micro and that motivated me to take up online learning at the time.  Then, a small fraction of the students really liked my course and got a lot out of it.  But the vast majority did not.  I wanted to know if the cause was them or me.  Was there a better way to teach the class that would engage more students and improve their learning?  That was the question then.  It's still the question now.

I want to make one more point before getting into the redesign part of the post.  About 20 years ago I became aware of the Merlot Project, through the CIC Learning Technology Group and specifically from Carl Berger, then of the University of Michigan.  Merlot was a referatory: it contained the metadata (descriptions) for learning objects, along with links to those objects, which resided elsewhere on the open Internet.  The idea was to encourage diffusion, so that a learning object developed by one instructor would be brought into the courses of other instructors teaching similar subjects at other universities.  At the time Merlot provided quality assurance of the learning object via peer review.  So, in this sense, learning objects were treated like working papers submitted for review at some journal.  However, there were/are differences, particularly in how the reviewer was selected and in the criteria the reviewer would use to accept a learning object into Merlot.  Would a potential instructor adopter of the learning object buy into the criteria and thereby trust the review?  Or would this instructor feel the need to perform their own review, possibly a quite cumbersome activity?

But what if students adopted learning objects directly, not mediated by instructors?  Part of the discussion in this section has been to show that this is already happening to some extent, though at present it is mainly under the radar in discussions of online learning.  Would some sort of peer review process help to drive student adoption?  What might that look like?  In the next section I speculate on this.

* * * * * 

If there are sticking points in a video micro-lecture, they are surely not there by design, as the instructor/creator of the video aims for clarity.  So, in identifying sticking points, students are more expert than the instructor is.  Therefore, some process is needed whereby students can identify sticking points.  However, any particular identification may reflect a deficiency in the student, giving a "false positive" rather than a true sticking point.  The solution is to trust strength in numbers.  If many different students independently identify the same sticking point, then there is reason to believe there is a problem with the video, one that is there in spite of the instructor/creator's best efforts to avoid such problems.  A sketch of this pooling idea follows.
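As one hypothetical way the pooling might work, consider the following, where reports of sticking points arrive as (student, timestamp) pairs and only spots flagged by enough distinct students survive.  The bin width and the threshold are assumptions for illustration, not a prescription.

```python
# Strength in numbers, sketched: pool student-reported sticking points, bin
# them by position in the video, and flag only the spots that enough distinct
# students report independently.  The bin width and threshold are assumptions.
from collections import defaultdict

def flag_sticking_points(reports, bin_seconds=30, min_students=5):
    """reports: iterable of (student_id, timestamp_in_seconds) pairs."""
    students_by_bin = defaultdict(set)
    for student_id, timestamp in reports:
        students_by_bin[int(timestamp // bin_seconds)].add(student_id)
    # A set per bin means one confused student (a "false positive") cannot
    # trigger a flag alone, no matter how many times they report.
    return sorted(
        (b * bin_seconds, len(students))
        for b, students in students_by_bin.items()
        if len(students) >= min_students
    )
```

Requiring distinct students per bin is what screens out the single confused viewer; a true sticking point shows up as agreement across many independent reports.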

As a rule, students who don't understand some course content are reluctant to admit that to their instructor.  A process needs to be identified where students are comfortable opening up on these matters.  The previous paragraph assumes such a process has been identified.  Some years ago I wrote a post called Rethinking Office Hours, which I believe has the elements of a good process.  Couple that with having most, if not all, presentation content come via online micro-lectures, quite possibly created by other instructors, along with homework/assessments that entail both identifying sticking points in those micro-lectures and writing up summaries of the videos in the students' own words, and in total you have the makings of such a process.  This same process might have to run on multiple campuses, using the same video micro-lectures, so as to pool the results on where students have identified sticking points.  That would entail a good deal of coordination across campuses that currently does not exist.  Here, I will simply assume it is possible and move on.

Having identified the sticking points in the video, let's consider two different possibilities to address them.  If the sticking point is caused by students not having the background that the creator assumed they would have, then a possible solution is to link to other content, text or another video, that provides the requisite background.  This potential solution will make viewing all the necessary content a longer experience than is indicated by the duration of the original video. So a full solution here requires encouraging students to take on this additional burden.  And part of that may mean not simply linking to other content that already exists, but instead producing a condensed version that is sufficient to make sense of the original video yet is not too time consuming to view or read.

The other possibility is that the instructor was needlessly convoluted in providing an explanation or in offering a discussion of a result.  In this case the video content should be replaced by something more straightforward, as if one student were explaining the content to another.  But in this desire for greater simplicity, one must be sure not to omit critical bits of content.  So, here we note that the instructor who made the original video is an expert in the subject matter, while a student who explains the material to another student may understand the content but is not an expert.  The ultimate arbiter of whether a content omission was critical or not should be the instructor, or other instructors who teach the same subject.

There is a copyright issue to contend with as well.  Either the modifications of content meet the requirements of Fair Use, or the creator of the original video must give permission for changes to be made, or the creator must be the one who makes the changes.  If my situation is any indicator of how this issue might play out, 10 years after creating those micro-lectures for intermediate microeconomics I lack the energy to modify the videos in a substantial way.  Putting small changes in the description (perhaps with links to other content) would be okay.  Beyond that, somebody else would have to do the work.  My only concern then is that the new versions don't end up tarnishing my reputation by being of much lower quality than the original.

Let us note a larger lifelong learning issue here.  Eventually, students need to learn to get themselves unstuck, either by researching the appropriate background information that they didn't have at hand at the outset or by working through a seemingly complex explanation so they can make good sense of it by themselves.  If this education in identifying sticking points in micro-lectures is to advance the student as a lifelong learner, then the student should also be involved in creating the revised video, at least some of the time, as that will close the circle on this lifelong learning issue.

Further, if the student can see how the effort in identifying sticking points produces a benefit to other students, who get to view an improved revised video, then the student gets first-hand experience at acting responsibly.  A sense of obligation may then develop out of a sufficient number of such experiences.  How long that would take is anyone's guess.  But I suspect one course done as sketched above would not be sufficient, not even close.  So one should envision a series of courses, perhaps taken in a prescribed order, that would be needed to produce the desired result.

In the review process for papers submitted for publication at a journal, outright rejection is possible, and it is also possible for the paper to go through a second round of revise and resubmit.  We should envision that analogous outcomes are possible for our micro-lecture videos that undergo review by students.  Those videos that survive the process should be without sticking points and should be reasonably intelligible to student viewers.  If there is a consortium of universities with parallel classes that do this type of video reviewing, then the consortium has the ability to give a virtual stamp to those videos that have made it through the process.  This is the analog of getting a research paper published in a journal.  One way to picture this workflow is sketched below.
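Here is a hypothetical way to represent that workflow, mirroring journal dispositions.  All of the names and the decision rule are my own invention, not an existing system.

```python
# A hypothetical data structure mirroring journal dispositions for reviewed
# micro-lecture videos; the consortium "stamp" is simply the accepted state.
# All names and the decision rule here are invented for illustration.
from dataclasses import dataclass, field
from enum import Enum, auto

class ReviewStatus(Enum):
    UNDER_REVIEW = auto()
    REVISE_AND_RESUBMIT = auto()   # confirmed sticking points; new version expected
    REJECTED = auto()              # flaws too fundamental to remedy
    ACCEPTED = auto()              # carries the consortium's virtual stamp

@dataclass
class VideoReview:
    video_url: str
    status: ReviewStatus = ReviewStatus.UNDER_REVIEW
    sticking_points: list = field(default_factory=list)  # (seconds, n_students) pairs

    def decide(self):
        # No independently confirmed sticking points -> stamp the video;
        # otherwise send it back for revision, as with a journal manuscript.
        self.status = (ReviewStatus.ACCEPTED if not self.sticking_points
                       else ReviewStatus.REVISE_AND_RESUBMIT)
```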

Now envision that the consortium stamp is readily viewable by potential student viewers of the videos who are at universities that are outside the consortium.  Will those students be attracted to these videos when there are other videos available that don't have the stamp and/or are made by instructors from universities outside the consortium?  Likewise, if students are attracted to videos with the consortium stamp, will they be more likely to watch those videos to completion?   The hope is that the answer to both of these questions is yes.

* * * * *

The devil is in the details.  The previous section, which gives only a high-level overview of the ideas and no implementation plan whatsoever, may make the approach seem plausible when in actuality it is not.  Let me mention one bit to consider here.  The methodology part of intermediate microeconomics is pretty time invariant - more or less the same things would have been taught when I started back in 1980 as would be taught now.  That methodology might then be applied to current events - Econ in the News.  It's the videos about the methodology that should be the focus here.  A video about a current event is surely possible, but its durability will be limited as the event becomes less timely.  Here the focus is on methodology videos, which are apt to have substantial durability as long as they are perceived to be of high quality.

Even with that qualification, the overview may be entirely implausible.  Yet I find such speculation an attractive exercise.  It points to where we are stuck in our current approach and suggests some things to try that might improve matters.  I wonder whether readers of this post would agree.
