Wednesday, June 08, 2016

Feedback Rather Than Assessment

At the end of July it will be six years since I retired, and close to ten years since I left my post as Assistant CIO for Educational Technologies for the Campus to move to the College of Business.  I am pretty out of it now and unaware of much of what goes on day to day in learning technology, on Campus and in the profession broadly considered.  But I remain on a bunch of listservs and once in a while I read what gets posted.  Last week somebody (whom I'm not citing, because I don't know him and it is not a public list) posted about learning management systems and made reference to this ELI essay about Next Generation Digital Learning Environments.  I want to comment on it.  But before I do, here are several bits of background to consider.

Even while I had the Campus job I felt some obligation to criticize the profession when I thought it was going off base.  So, for example, I wrote this post after the ELI conference in January 2007.  This was part of an ongoing conversation with my friends and colleagues and perhaps also with readers of my blog whom I otherwise didn't know.  I felt a little bad after writing that post, particularly the stuff near the end, so I wrote another called Learning Technology and 'The Vision Thing' in which I gave my preferred alternative for where the profession should be going.  I really didn't expect it to change things, but people need to be aware of more idealistic alternatives to the status quo.  Maybe after reading my piece some would consider those alternatives when beforehand they hadn't.  Such prior thinking is necessary to produce attempts that aim to make matters better.  Hence, I saw my role as a prod, to make others in the profession more thoughtful in the way they went about their business.  Indeed, this is largely how I see my role as a teacher now.

A couple of years ago, quite a while after I had retired, I did what at the time seemed to me similar, though in retrospect it was not, as I hope to illustrate.  I saw this video of Jim Groom and Brian Lamb, where they were discussing their essay in Educause Review, Reclaiming Innovation.  I agreed with the part of their argument that discussed the tyranny of the LMS.  But in discussing the cure I thought they relied too much on other developments in technology and not nearly enough (really not at all) on developments in educational psychology.  So, as is my wont, I wrote this rhyme on my blog, a mild critique.  One of my friends on Facebook, who is knowledgeable about ed tech, gave it a love.  Emboldened by that reaction, I emailed it to the authors for their response.  They were mildly annoyed.  They didn't see it as their job to address this criticism.  Doing so was outside their area of expertise.  At the time I bit my tongue and thought, oh well.  But reflecting on that episode in the process of writing this piece, I realized they were right.  While I'm sure we've bumped into each other on the various blogs, we don't really know each other, so there was no extended conversation in which such criticism might be a part.  And my suggestions were way too high level to be at all useful.  Something far more concrete was needed.  When I do discuss my aspirations for changes in the LMS in this piece, I will try to do so in a concrete way.

Next, let's turn to Writing Across the Curriculum (WAC) principles.  In spring 1996 I attended a three-day workshop led by Gail Hawisher and Paul Prior.  It was really excellent.  I keep returning to WAC principles when thinking about effective pedagogy.  Here are several points from it to consider.  First, learners need to be able to respond to criticism and comment from instructors, and also from peers.  There is much learning in providing a good response.  So in a WAC course papers entail multiple drafts.  The second draft is itself a response to comments received on the first draft.  Second, there is a tendency for instructors to write lengthy comments on the final draft, particularly if they give it a comparatively poor grade.  The comments then assuage their own guilt feelings but are mostly ignored by the students, who have been stung by the harsh treatment they perceive to have received.  Third, there is a tendency for instructor comments to be highly normative and aspirational, offering where the instructor would like to see the students go with the writing, but not situating those comments in where the students currently are, so even when comments are provided on a draft the student often doesn't understand how to effectively revise the paper.  Last, instructors tend to view the chore of responding to student writing as overwhelming.  They lack the time to do it well.  They then get angry about having to take half measures.  That anger can then find its way back into the responses themselves.

In my title I chose the word feedback rather than response, the term used in WAC.  I'd like to explain why and then consider the difference between the two.  Many learning activities involve something other than producing a full second draft of a paper.  But just about all learning activities entail going beyond the initial stab to make further headway.  On what basis does the learner do this?  The learner reacts to feedback, preferably in a reflective rather than instinctual way.  This is just what is meant by the expression learning from mistakes.  So response is a subset of feedback.  Feedback might be automated, or indirect (e.g., learning from the comments provided on another student's paper), or serendipitous (for example, I might stumble upon an essay written by somebody else on a similar topic but do so only after I've produced my blog post).  Ultimately, learners need to develop learning-to-learn skills, part of which is finding and identifying appropriate feedback.  This will only happen if learners hunger for getting feedback on their early thinking.  They therefore need to develop a taste for it.

I want to turn to a different set of experiences.  If you've been around for a while and teach economics or some business discipline you're likely aware of Aplia, an experiment in homework tools and content, originally offered entirely separate from textbooks.  Aplia was the brainchild of the economist Paul Romer, who at the time, like many of us teaching economics, was dejected that the assessments bundled with textbooks were so weak when there was the potential to do much better.  As it turns out, Romer was on campus at Illinois near the time that Aplia was founded and he was aware of my content and use of Mallard.  So I had a friendly chat with him then and later interacted a little online with him and some of his staff about early content Aplia was providing.  I don't know that I entirely embraced their approach, but I certainly looked on it as a promising development.  Alas, in 2007 Aplia was bought out by Cengage.  As an economist who used to teach industrial organization, I found this entirely unsurprising.  Aplia in its original form was a threat to textbook publishers.  Even at the time students weren't reading the textbook, and if they didn't need the book to access the assessment content, then textbook sales would plummet.  The textbook publishers then (and I believe this is true still now) didn't have a real revenue model based purely on assessments.  There was lock-in to the textbook model.

This issue with lock-in needs to be confronted squarely.  I, for one, have been vexed by it, as some of the innovations I'd have liked to see, and that were clearly possible, nonetheless have not emerged beyond the trial balloon stage.  For example, more than a decade ago my friend Steve Acker had me write this piece on Dialogic Learning Objects, based on the idea of producing a virtual conversation between the student and the online content and thereby blending presentation and feedback, which really is the natural thing to do.  I continue to write content of this sort in Excel for the class I teach on the economics of organizations.  But as Michael Feldstein points out, the authoring of such content is arduous.  Further, for there to be a functioning market for such content, potential adopters must be able to verify that (a) the content is high quality and (b) the content is consistent with the way they teach their course.  This sort of verification is also difficult and time consuming.  The textbook market largely gets around these verification problems by having the nth edition of an already well known textbook in the field or, in the case of a new offering, by having it written by a well-known scholar in the field.  In either case, the textbook authors themselves are very unlikely to author dialogic content.  The publishers don't pay very well for ancillary content to the textbook and, in particular, don't offer royalties for it.  So there is tyranny of the status quo, and not just with the LMS.

Let me turn to one more set of experiences and then conclude this section of the essay.  This is about lessons I learned early when I was in SCALE.  Our mantra back then was: it's not the technology, it's how you use it.  We championed interesting and clever use, especially since our benefactors, the Alfred P. Sloan Foundation and in particular our grant officer Frank Mayadas, were not interested in software development.  I have made some of this sort of use myself.  It typically marries a learning idea that comes from outside the technology to some capability that the technology enables, where the marriage is not immediately apparent.  So, for example, consider my post on The Grid Question Type in Google Forms.  The illustration there is something I learned from Carl Berger.  It is called the Participant Perception Indicator, which gives a multidimensional look at the participant's understanding of some concept.  The PPI provides a very good illustration of how grid questions can be utilized.

This post is far and away the one with the greatest number of hits on my blog.  Most of my posts have fewer than 100 hits.  This one has over 24,000.  And what's most interesting about it is the variety of questions and comments received.  People want to tweak the tool or customize it for their own use.  In considering this sort of customization, what I've learned over time is the need for a judo approach, which combines understanding what the tool can and can't do with knowledge of the goals in use, and then allows jerry-rigging of the initial design to better achieve those goals.  Further, I've learned that most users don't have the mindset to perform this sort of jerry-rigging.  Innovators and early adopters are different that way.  So, in fact, when one considers a Rogers diffusion of innovations story, what actually diffuses when the innovation is effective is the combination of the technology and its effective use.  Then the impact is powerful.  When it is only the technology itself that diffuses, the impact may be far less profound.  This is especially true with educational technology and in particular with the LMS.  Unfortunately, there is an abundance of dull use.

Here is one further point to consider and one way the world is quite different now than when I was running SCALE back in the late 1990s.  There is now an abundance of online environments, free to the end user, which might serve as alternatives to the campus-provided environments intended for instruction.  That itself is not news.  However, the question that doesn't seem to get entertained along with that observation is: where are the innovators and the early adopters?  Are they in the LMS because they figured out how to practice their judo in a way to incorporate their own teaching goals and because they are publicly spirited and want to support learning on their campus?  Or are they in some of these other environments, because they view the LMS as an impediment and they can exercise more control in the free commercial tools?

I don't know the answer to these questions except in how I myself answer them, though now I no longer consider myself an innovator or early adopter.  I think of the LMS as an impediment.  It is too rigid and too affirming of the traditional approach.  I have been able to implement certain practices by going outside the LMS and by willingly engaging in more course administration than most instructors would put up with.  The examples I provide in the next section are based on my own experience.  I believe all of this might be done in a redesigned LMS.  The real issue is not whether it is possible.  The questions are whether it should happen and whether learning technology as a profession should embrace these suggestions.

Finally, I'd like to give a little disclaimer before I get started with that.  I know these things are possible because I've tried them and implemented them.  There may be other approaches that would have even bigger impact and are also possible.  So I don't want to claim that my suggestions are exhaustive.  They are sufficient, however, to make the point that the profession doesn't currently seem to be talking about them, and thus to ask:  might the profession begin to have those sorts of discussions in the future?

* * * * *

The following graphic offers a simple way to frame the issue.  It is Figure 2 from the paper, The Theory Underlying Concept Maps and How to Construct and Use Them.

[Figure 2 from the paper: a vertical continuum with rote learning at the bottom and meaningful learning at the top]

Preceding this graphic there is a discussion to explain it.  This particular point is especially useful to understand.

3. The learner must choose to learn meaningfully. The one condition over which the teacher or mentor has only indirect control is the motivation of students to choose to learn by attempting to incorporate new meanings into their prior knowledge, rather than simply memorizing concept definitions or propositional statements or computational procedures. The indirect control over this choice is primarily in instructional strategies used and the evaluation strategies used....

The authors go on to point out that since there is much variation across learners on both prior preparation and on motivation, it is important not to think of meaningful learning versus rote learning as a binary choice but rather as a continuum between these two poles.  I do think the vertical line segment between the two antipodes is correct.  Meaningful learning is higher order than rote learning.  I belabor that because I want to consider a third category not included in the graphic where the vertical alignment is less sure.  These are students who have totally tuned out and don't even go through the motions of rote learning.

Now let's get at the issues.  The first is this.  Does the LMS exert some influence on the learner's choice of how to learn and, if so, is the bias up or down in that choice?  The second is this.  How do learning analytics fit in this framework?  Is it mainly about getting students who have totally tuned out back into the game, even if that means they are then mainly operating near the rote learning part of the spectrum?

My sense about the answer to the first question is that there is bias and it is downward toward rote learning.  There are a lot of other factors in operation here.  The LMS is not the only culprit, not by a mile.  But the LMS helps to enforce the grades culture, which in turn encourages the students to be rote learners.  This issue doesn't get much discussion among learning technologists.  It should, in my opinion.

My sense about the answer to the second question is yes, learning analytics coupled with appropriate interventions from instructors and advisers can effectively move students from tuned out to rote learners.  But without other changes in place it will not move them up to become meaningful learners.  If that is right, is it something the profession should nonetheless champion?  To answer that, here is a quick aside about the economics of higher education.

Rote learning endures, in large part, because it is an approach that will get students to pass the courses they take.  If the pure rote learner always failed, or earned only the lowest possible passing grade, D-, students would have a very strong extrinsic incentive to move away from rote learning.  The extrinsic incentive is far weaker when rote learning can by itself produce high grades.  Grade inflation means grades communicate much less to others about how much the student has actually learned.  (As George Kuh argued in describing the disengagement compact, grade inflation may be the inevitable consequence when quality of teaching is determined largely by student course evaluations, as is now the common practice.)  New graduates then are valued in the labor market at the average learning of all those who graduate from the institution (this is called a pooling equilibrium).  So a degree can have value even for a nearly pure rote learner, because the market interprets that student as having learned more than he or she actually did, perhaps with some bonus points for persisting through to the degree.  This is the behind-the-scenes economic argument for why learning analytics should be encouraged.

However, if there are large swaths of students in the tuned out category, and if learning analytics succeeds en masse in turning these students into near pure rote learners, as was suggested above, the consequence will be to lower the average learning among graduates and therefore to depreciate the value of the degree.  (This is Akerlof's Market for Lemons model applied to Spence's model of Job Market Signaling.)  The better prepared students, in response, will look elsewhere to attend college, and a vicious cycle might ensue at any place that pursues learning analytics with too much vigor and gusto.  This offers some background on why people who think hard about digital learning environments should be asking what they might do to promote meaningful learning.  It seems to me an important question to ask.  My answer, in a nutshell, is provided by the title of this post.
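To see the mechanics, here is a minimal sketch in Python, with made-up shares and learning levels (all of the numbers are hypothetical, chosen only to show the direction of the effect):

    # In a pooling equilibrium, employers pay the average learning
    # level of all graduates.  All numbers here are hypothetical.
    def pooled_value(groups):
        # groups: a list of (share_of_graduates, learning_level) pairs.
        return sum(share * level for share, level in groups)

    # Before: tuned out students mostly don't graduate.
    before = [(0.6, 40), (0.4, 90)]    # (rote, meaningful)
    print(pooled_value(before))        # 60.0, rote learners valued above their 40

    # After analytics converts tuned out students into rote learners who graduate:
    after = [(0.75, 40), (0.25, 90)]   # a larger share of rote graduates
    print(pooled_value(after))         # 52.5, the pooled value of the degree falls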

* * * * *

In this section I want to get at specific modifications to the LMS that I think would be helpful.  To me it is useful to divide classes into two categories based on how reliant they are on the LMS and other factors that influence how those courses are taught.

Large classes:  These classes make extensive use of the LMS, quite likely relying on the auto-grading function in the quiz tool for online homework, and are typically quite reliant on a textbook, which is closely followed in the presentation of course content.  Depending on the nature of the assessment done with the quiz tool, large classes are more likely to encourage rote learning than are small classes.

Small classes:  These classes may use the LMS for some administrative function but are more likely to rely on other online environments for collaboration and other course work.  Students may very well engage in projects as an integral part of such courses.  Readings might come from multiple sources as might other multimedia course materials.

Let me note that usage of the LMS is critical in considering these categories.  Enrollments themselves may encourage a certain type of usage pattern, but just as a low enrollment course can nonetheless be taught as a lecture rather than as a seminar, a low enrollment class can rely on auto-grading of homework and stick closely to the textbook in its topic coverage.

Let's make one other point before going further.  The Large classes are usually taken earlier in the students' time at college.  To the extent that students choose how much to commit to meaningful learning and those choices are shaped, in part, by habits formed in prior classes, there can be persistence of the rote learning choice even in environments that aim to encourage meaningful learning.

In this essay there are two suggestions about modification of the LMS meant primarily for the Large class environment and another two suggestions meant mainly for the Small class environment. 

First Suggestion:  Elevate the importance of the self-test tool so it is on a par with the quiz tool.  Allow students to get credit for completing a self-test.  In this case completion means ultimately answering all questions correctly, no matter how many tries it takes to do that.

Discussion:  I don't know if these terms are used the same way from one LMS to another.  Here I'm using self-test tool to refer to a quiz where the learner can get immediate feedback after answering an individual question and then can adjust the answer to that question based on that feedback.  The pattern of question, response, feedback, and revision of the response, ultimately getting the question right and then moving on to the next question, with the pattern repeating until the entire self-test is completed, is meant as a virtual conversation that helps the student learn and produce understanding while at the same time verifying that the student has done the requisite work.
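As a minimal sketch of that conversational pattern (the question interface here, with get_response, is_correct, and show_feedback, is hypothetical rather than any particular LMS's API):

    def run_self_test(questions):
        # Each question is retried, with feedback after every attempt,
        # until it is answered correctly; only then does the student move on.
        for q in questions:
            while True:
                answer = q.get_response()    # student answers
                if q.is_correct(answer):
                    break                    # on to the next question
                q.show_feedback(answer)      # targeted feedback, then the student revises
        return "completed"                   # credit is for completion; no partial credit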

For this to possibly work, the feedback must be useful and promote student thinking.  It also means that the student can't readily get to the right answer by brute force methods.  Simple true-false questions will not satisfy that requirement.  The question must be substantially more complex in the scope of possible answers, if not in the difficulty of what it is actually asking.  For example, consider matching questions.  Matching five alternatives numbered 1 to 5 to five other alternatives lettered a) to e) allows 120 possible matchings.  With several such questions in the self-test, brute force would be expected to take a long time to complete the assessment, encouraging the student, instead, to think through to the answer because that would be faster than brute force.
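A quick back-of-the-envelope check, in Python, on why brute force is unattractive here:

    import math

    # Ways to match 5 numbered items to 5 lettered items: 5! = 120.
    n = math.factorial(5)
    print(n)            # 120

    # Guessing permutations in random order without repeats, the correct one
    # is expected to appear about halfway through the list of possibilities.
    print((n + 1) / 2)  # 60.5 expected attempts for a single question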

No partial credit would be allowed.  Students who do the self-test and complete it would therefore be encouraged to spend time on task, and the hope is that after doing a few of these the students would get the sense that the homework is there to help them learn the material, not to judge how well they perform.  The aim is to make the homework into a learning tool.  Of course, whether it is effective or not will depend on how well the content is written.  That is true for all online learning materials.  The point is that even with good questions, the more typical online quiz enables partial credit, and the feedback is only given after the quiz is completed.  For a student who feels enough partial credit has been earned and so is unwilling to retake the quiz (assuming a retake is even possible), the feedback probably won't be effective.  This makes the student's orientation much more focused on earning points and much less on producing understanding of the subject matter.

While on an individual homework the consequence might not be large, if the approach were embraced in many large courses the impact on students potentially could be quite considerable.

Second Suggestion:  Students getting credit for a self-test is an example of the receipt function.  They get a receipt in the LMS just like anyone gets a receipt after completing an online commercial transaction.  The receipt function must also be present for all the other assessment tools, including the survey tool and the assignment dropbox.  There must be a ready way for the instructor to offer course points in exchange either for a given receipt or for a set of receipts.  An example of the former would be 10 points per receipt.  An example of the latter would be: out of 12 possible receipts, 100 points are given if 10 receipts are presented.

Discussion:  The receipt function is meant to convey that there is some course work that should be done and credit will be assigned for completing the work, but the work will otherwise not be graded on quality or correctness.  When applied to surveys on course content, this is very much like how clickers are used in class now, except the content surveys can be done ahead of time, before class, to facilitate Just In Time Teaching.  Further, unlike clickers, the surveys can include a paragraph question so the students can communicate their reasoning after they have responded to the short answer question.  Surveys typically don't allow students to attach files or provide links to online documents or presentations.  This is why the receipt function is needed for the assignment dropbox as well.  For submissions done by receipt, there is an all or nothing aspect.  Submissions done the usual way allow partial credit.  The presence of the receipt conveys that no partial credit will be allowed.
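To pin down the two point schemes named in the suggestion, here is a minimal sketch (the function names and default values are hypothetical):

    def per_receipt_points(receipts, points_each=10):
        # Scheme 1: a flat number of points per receipt presented.
        return receipts * points_each

    def threshold_points(receipts, threshold=10, full_points=100):
        # Scheme 2: all or nothing.  Full points once enough of the
        # possible receipts (say 10 of 12) have been presented.
        return full_points if receipts >= threshold else 0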

Let's recognize that with an item that provides a receipt, a student can sandbag the submission.  In that case the student does enough to generate a receipt, but no more than that.  In the current grades culture, sandbagging might be a rational response to some intended learning activity that offers a receipt, because the student cares about the points first and foremost and otherwise wants to conserve time and effort.  One should ask first what the culture must be like for the vast majority of students to take the activity seriously and not sandbag.  One should then follow up with the question of whether it might be possible to move the culture in that direction by broad embrace of the receipt function.

In thinking about these matters I want to note the strong parallel between grades as extrinsic incentive for students and cash payments as extrinsic incentive for those who work for a living.  For the latter, it is my strong belief that there are limits to the effectiveness of pay for performance, as I've written about in this post called The Liberal View of Capitalism.  In the alternative, people do the work seriously mainly out of a sense of obligation.  They also have an expectation in that case that their co-workers and their supervisor will appreciate their efforts and suitable recognition will eventually be provided.  Translating this back to the learning setting, the instructor has an obligation to make every receipt generating activity meaningful for the student.  When the student perceives the activity that way, the student is encouraged to take it seriously.  In contrast, if the activity is perceived as busy work, surely that will encourage sandbagging.

There is then a further issue: due to heterogeneity of the students, some might find an activity meaningful while others find the same activity busy work.  Let's consider that case for students who vary along the rote-learning to meaningful-learning interval, and let's say it is the meaningful learners who find the activity busy work, because it is too easy for them.  Would this doom the use of the receipt generating activity or might it still survive after suitable modification?  The answer depends in large part on whether the student's sense of obligation covers only the student's own learning or extends to the learning of fellow students as well.  In the latter case the meaningful learner might take the activity seriously for the good of the order.  Alternatively, that student might accept an exemption from the activity, foregoing the points a submission would have earned, and instead earn those points by helping out another student who is struggling with the activity, even if helping is more time consuming yet not substantial enough to list as a service activity on a resume.  In any event, this sort of additional complication hints at asking how much further the LMS needs to be modified to accommodate it or whether such accommodations can readily be accomplished by other means.  I really don't know.  I raise the issue here mainly to argue that there does need to be some experimentation with the receipt function before the learning technology community can come to agreement on how it should be implemented.

The next two functions are meant for the small class setting but do assume the receipt function is already in place.

Third Suggestion:  Enable different forms of grading.  In particular, allow for portfolio grading, wherein many items under receipt receive a single qualitative grade that is based not just on the average quality of the items but also on whether later items show higher quality than earlier ones.  In other words, portfolio assessment is meant to track growth in student performance and to communicate the importance of measured growth as a way to indicate that students actually are learning.

Discussion:  Portfolio grading is already the norm in certain disciplines, notably those that entail a studio approach, where students produce artifacts as their way of doing course work.  But portfolio grading is entirely alien in other disciplines, such as courses that have problem sets and exams.  There, each item is evaluated on its merits and not compared or contrasted with any other items the student has produced.  The thought here is that small classes should entail at least some amount of students producing artifacts, and that production must be meaningful for learning.  That should happen in all classes where enrollments are sufficiently low.  (Let us leave the question of where to draw the line on class size for another day.)
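One way such a growth-sensitive portfolio score might be computed (the particular blend, and the weight put on growth, are entirely my own invention, offered only to illustrate the idea):

    def portfolio_score(scores, growth_weight=0.5):
        # scores: quality ratings of the items, in the order produced.
        mid = len(scores) // 2
        avg = sum(scores) / len(scores)
        # Growth: how much better the later items are than the earlier ones.
        early = sum(scores[:mid]) / max(mid, 1)
        late = sum(scores[mid:]) / (len(scores) - mid)
        return avg + growth_weight * (late - early)

    print(portfolio_score([60, 70, 80, 90]))  # 75.0 + 0.5 * 20 = 85.0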

Some faculty will resist this, of course, because providing portfolio assessment in a serious manner is time consuming, and because instructors with no prior experience doing so might not be convinced of the pedagogic value of requiring such work as a significant component on which students will be evaluated.  Knowing this, some institutions might not embrace a portfolio grading functionality even if it were present in the LMS and fairly easy for the instructor to use.

Now let's make some counterarguments.  Teaching small classes typically involves fewer headaches and more joy for the instructor than teaching larger courses.  If the two activities are to balance and confer the same amount of teaching credit for instructors, the smaller class really should be taught more intensively.  Smaller classes are also better environments for encouraging meaningful learning.  If a student writes about a page a week, with these essays meant to tie the subject matter of the course to the student's relevant prior experiences or prior thinking on the matter, that activity would indeed promote meaningful learning.  Further, it can serve as fodder for in-class discussion.  In that sense it would be the small class analog to the content surveys used for Just In Time Teaching in the large class setting.

Naturally, the faculty would have to experiment with this for themselves if they are to eventually embrace these counterarguments.  Nobody should expect them to accept these arguments at face value without trying them out on their own first.  But there is one further point to stress here.   The administrative overhead from doing such an experiment should not be an important factor in determining whether the experiment is deemed successful or not.  So the function needs to already be in the LMS to facilitate these experiments, before the function is broadly adopted by the faculty.

Last Suggestion:  Embrace a soft deadline approach where there is a marked late date that precedes the hard deadline, where the normal expectation is that students will complete the work before the marked late date, but where in extraordinary circumstances students can turn in work after the marked late date.  If they don't abuse this privilege, they can do so without penalty.  A variety of penalty schemes can then be implemented to handle the case where students are chronically late with their work.

Discussion:  A functionality of this sort might be implemented in the large class setting, in which case substantial early use of late submissions might trigger some intervention with the student, just as in other learning analytics cases.   But that is not the intended purpose of this recommendation.  Indeed, it is my view that in the high enrollment setting deadlines need to be hard, so students learn to get the work done ahead of time.  That sort of time management skill is critical and should be learned early in the student's time on campus (when the student is apt to be taking many large classes).

Here the reason for soft deadlines is different.  They exist mainly to allow the students some discretion in their time allocation and to recognize that on occasion other obligations (courses, part-time jobs, extracurricular activities, social obligations, and family obligations) place strong demands on the student, and a mature student will sometimes need to balance these in the way the student sees as most appropriate.  The current "solution" to this problem is for students who are under high stress to pull an all-nighter, possibly several in a row.  This can lead to depression and impair student performance.  The system should help the student manage this in a more sensible way and thereby help the student become more adult in making life decisions.  It should be the small classes that are the first to accommodate late submissions, because that will be less disruptive overall.

Soft deadlines in small classes, in other words, are a form of buffer, or insurance, against excessive student time obligations overall.
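As one concrete illustration of how the marked late date and a penalty scheme for chronic lateness might work together (the thresholds and penalty sizes here are invented):

    def submission_status(submitted, marked_late, hard_deadline):
        # Soft deadline: on time before the marked late date; accepted
        # but tracked as late up to the hard deadline.
        if submitted <= marked_late:
            return "on time"
        elif submitted <= hard_deadline:
            return "late"      # no penalty unless lateness becomes chronic
        return "not accepted"

    def chronic_late_penalty(late_count, free_passes=2, points_each=5):
        # The first few late submissions are free; each one beyond
        # that costs some course points.
        return max(0, late_count - free_passes) * points_each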

There are other possible benefits from soft deadlines.  For example, in many workplaces where email is the distribution vehicle, soft deadlines are the de facto business practice, dictated by the distribution medium.  Experiencing soft deadlines while still a student would help in preparing for that work after graduation.  Further, many students are immature about the relationship between the time put into producing work and the ultimate quality of that work.  I have had students tell me that procrastination is actually efficient because they really get cranking when operating near the deadline.  Of course, for these students the first draft is also the final version of what they submit.  Their immaturity in expressing this view is telling.

Consider how students might learn the value of prior preparation and of giving work the time it needs to produce something of high quality.  So ask, how would an occasional procrastinator operate under a soft deadline approach?  Might that student dissipate the soft deadline buffer unnecessarily, because the student lacks the discipline to save it for when it is really needed?  If so, and if the work is subsequently deemed mediocre, wouldn't that undermine the belief that procrastination is efficient?  Admittedly, several such experiences are likely necessary to come to that conclusion.  What of the student who has had at least a few of those?  Is this a lesson that can be learned from experience and evidence about perceived quality of work?

My concluding remark for this section is that students' internally held beliefs on these matters are currently confounded by the grades culture in which they operate, a culture that is not replicated in the world of work thereafter.  Students need to condition their expectations about that world of work on their own experiences as students, to be sure, but student experiences more akin to the future world where they will ultimately operate would be preferable to what we have now.  Soft deadlines in small classes would facilitate that process.

* * * * *

Whether the suggestions offered up in the previous section actually would lead to improvements in learning I leave for readers to determine.  Here I want to conclude with some different issues.  These are guided by asking the question, what would it take for the LMS vendors to implement these suggestions in their products?  Others may have different ways to address that question.  I'm an economist and that informs how I think about these things.  The economist's core tool is supply and demand.  I'm going to use that here to consider the issue of LMS vendors seriously entertaining these recommendations.

On the supply side, almost all changes in the code of the software should be viewed as modifications in fixed costs.  Minor changes to the code constitute small increases in fixed cost.  Big rewrites of the code constitute large increases in fixed cost.  Small increases in fixed cost require only modest increases in demand to cover them.  In contrast, to rationalize large increases in fixed cost, a dramatic increase in demand is required.  I am not nearly knowledgeable enough about the software to say which of the suggestions can be done without increasing fixed cost in a big way.  But I have deliberately tried to confine myself to what I perceive is do-able.  Given that, I would hope none of the recommendations would increase fixed cost drastically.  And, after all, the vendors are always changing code to keep their software up to date in subsequent releases.  That modernization aspect is built into the business process and should not by itself necessitate increasing the price of the software to cover development costs.  Only code changes beyond modernization will do that.
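A toy break-even calculation makes the contrast plain (the dollar figures are invented purely for illustration):

    def breakeven_licenses(fixed_cost_increase, margin_per_license):
        # How many additional licenses must be sold to cover a one-time
        # increase in development (fixed) cost.
        return fixed_cost_increase / margin_per_license

    print(breakeven_licenses(50000, 5000))    # minor change: 10 more licenses
    print(breakeven_licenses(2000000, 5000))  # big rewrite: 400 more licenses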

It is the demand side that is more perplexing.  The NGDLE paper (page 5) says:

At the 2014 EDUCAUSE Annual Conference, 50 thought leaders from the higher education community came together to brainstorm NGDLE functionality. This group identified and prioritized 56 desirable NGDLE functions....

On the off chance that those 50 thought leaders read my essay, what would they make of it?  Would they be appalled?  Isn't specific tool modification within the LMS terribly old fashioned and not at all what we think of in considering next generation learning environments?  Would the ideas in my essay therefore be rejected out of hand, never to see the light of day?  I'm afraid that's exactly what would happen, absent some other sort of intervention.  So consider this.

Imagine a different group, composed of 50 faculty members who are dedicated teachers, identified mainly by the fact that they are regular attendees at events put on by the Center for Teaching on their respective campuses.  Ask this group to read my essay.  Ask them first and foremost not about the recommendations but about the issues those recommendations are meant to address.  Do they buy into the distinction between meaningful learners and rote learners, and do they find that too many of their own students operate near the rote learning end of the spectrum?

I'm going to assume here that the group would make that much of an identification.  The next step would be to ask them what modifications they make in their own classes to address this issue.  Dedicated instructors sharing tips and tricks on this matter would be a very good thing in its own right.  Then, the final point of discussion for these instructors would be to develop a wish list for the online environments in which they operate that might help them better address these issues.

The results from that faculty group discussion should then be brought to the attention of the learning technology thought leaders for them to reflect on.  And the main question to ask here is this.  Do the NGDLE recommendations speak at all to the faculty members' concerns?  What would happen if many of the thought leaders concluded that the NGDLE recommendations at best only tangentially addressed these issues?

On page 2 of the NGDLE paper, in the section on The Learning Management System, there is a sentence that probably wasn't intended to be provocative at all but actually is in the context I've just presented.

Higher education is moving away from its traditional emphasis on the instructor, however, replacing it with a focus on learning and the learner.

If the context were one of decrying the lecture, this is not a controversial assertion.  But here the context is whether learning technologists need to listen to faculty who care a great deal about their teaching.  If the answer is that the learning technologists don't need to listen to these faculty, that would be provocative!  It would imply, in particular, that the learning technologists are better arbiters of the learner's needs than these faculty are.  Do the thought leaders actually believe this?

Of course, I don't have the evidence from that group of faculty to share.  But I have participated in numerous such discussions with faculty over the years at a variety of venues, and my sense of these discussions is that the topics of conversation don't vary that much, with the possible exception that recently more will claim that the problems are getting worse. 

My hope is that some of these thought leaders would conclude: (a) these are indeed issues that we too should be concerned with and (b) we therefore need to think through what we can do to help address them.  If that is the conclusion reached, this essay will have hit its aim.
