Sunday, July 30, 2017

Thoughts from a Has Been: The Next, Next Digital Learning Environment - Team Production in Instruction

This post offers some reactions to the recent piece by Phillip D. Long and Jon Mott, The N^2GDLE Vision....  In many respects, I am not qualified to offer a meaningful critique, as I retired back in summer 2010 and have not kept up with developments in the field since.  But I haven't been able to get all that came before entirely out of my system, witness a couple of critiques I've done since retiring about earlier pieces written in this vein, such as this rhyme about work by Jim Groom and Brian Lamb and a longish post entitled Feedback Rather than Assessment, about the previous NGDLE paper that appeared in Educause Review by Malcolm Brown, Joanne Dehoney, and Nancy Millichap.  Further, the more things change, the more they stay the same.  Not an insignificant bit of Long and Mott's piece is about 'dissing' the learning management system, a cottage industry within educational technology for well over a decade, one that I've participated in even after retirement, for example in this piece, Some regrets about learning management systems.  Indeed, that post and its sequel, Where Are Plato's Children?, make me quite sympathetic to the 'smart online tutor' part of Long and Mott's vision for N^2GDLE.  But there are many other parts of this vision that I found idealistic in the extreme.  So one wonders whether their conclusions are robust to making more realistic assumptions, or if that would produce quite a different result.  As my strength is looking at the learning issues through a political economy lens, that's what I will do in this post.  I hope it produces some value add for readers beyond the value it produces for me by allowing me to purge these thoughts from my system, which is quite frequently my motivation for writing a piece.

If there is such value add, much of that will be found in elucidating where I am wrong, so in making credible counter arguments.  I not only admit the possibility that I might be in error on these matters, I recognize that in some places that is especially likely. So I offer up my piece as a challenge, not just to find the errors, but to refute them.  Doing that should make Long and Mott's argument stronger.

Let's get to the heart of the matter right off.  Developing this software environment will take incremental resources, while many of our campuses are in flat or shrinking revenue environments.  More importantly, developing the content that will utilize this new software environment will also take incremental resources.  In my reading of the paper, the content development piece will be much more expensive than the software development piece.  Further, while the software part might be expected to be funded within the IT budget on campus,  where IT leaders can manage revenue reallocation, the content part surely won't be.  So the powers that be who control the revenue allocation outside of IT must buy into the vision to make this a go.  Will they?  Why should they?  If they don't, then Long and Mott are merely preaching to the choir.  It might not occur to the choir to think this through from a political economy angle.  So it is conceivable that the Long and Mott piece appeals to learning technologists yet at the same time the ideas therein are doomed at the campus level.

For a more realistic approach, it would seem, we need to understand the preferences of the powers that be.  Let me assert here a reactive rather than visionary way to articulate these preferences.  (This is one of those assumptions that can be challenged.)  The powers that be will want what instructors and students want.

Do the majority of instructors and students favor the status quo over what is proposed by Long and Mott?  In this status quo there is much surface learning.  (For example, see Ken Bain's What the Best College Students Do.)  Long and Mott want deep learning across the board.  How do we get from here to there?  Maybe we can't.  To support that conclusion, I offer up the metaphor of the Tragic Tory, that I wrote about some years ago in a column for the then Educause Quarterly, now defunct.  There can be substantial lock in to the status quo, so much so that it blocks all potential improvements.  We have no problem seeing this in considering, for example, the QWERTY keyboard, which was designed around 150 years ago to make us type slower, but which persists even now, even though typewriter keys jamming hasn't been an issue for upward of 45 years and perhaps quite a bit longer than that.  Why is it hard to imagine that we are locked into an old mode of teaching and learning and that external factors, like No Child Left Behind and the accountability movement, have actually exacerbated the lock in to these traditional approaches?

The next part of this is to argue that there need to be substantive cultural changes to break the lock in and make progress, but then to ask whether those cultural changes should be targeted only at where we want to end up (what I believe Long and Mott are arguing in their piece) or if these changes need to address where we currently are (my view as to what is necessary).  In my post, Why does memorization persist as the primary way college students study for exams?, the first half sketches out the nature of the lock in, in accord with George Kuh's Disengagement Compact.  Then, in the second half, I offer up a series of suggested reforms that taken together were meant to move us away from the status quo to something better.  However, I wrote this post as a thought experiment only.  I didn't expect the ideas to be embraced because I didn't see the willingness to do so then and I don't see the willingness to do so now.  That inertia can certainly find foundation in the fact that suggestions for change, such as the ones I advanced, are unproven.  Making drastic changes based on pure speculation is a fool's errand.   But there isn't even the will to begin piloting ideas such as these, to test whether the ideas hold water, especially since doing that itself will take some incremental resource.  At least on my campus, we have a strong tendency to put such incremental resource toward new course offerings that parallel emerging social issues or recent research developments, rather than to take on large intro courses that have been taught for some time and try to make them better.  However, if, contrary to fact, such an effort to change the culture in a manner like what I suggest were put into place, we would need to confront this next question.  Would we still need a radical new vision for the online learning environment?  Or would that then be superfluous?

I will return to the cultural issues in the next section, where I consider team production in instruction, something Long and Mott argue for.  Here I want to consider some of the purely technological aspects of their vision, partly to illustrate my confusion as to what they are arguing for, and partly to couple that with my skepticism about pulling off this vision.

There are two aspects to their technological vision.  One part is the interoperability of tools - the Lego metaphor at root, where the various pieces snap together.  The IMS standard is mentioned in this context.  (Whatever happened to SCORM?  Actually, I don't want to know the answer to that question.)  In the abstract, interoperability would seem highly desirable.  Who would argue against it?  (Me, of course, as I will try to get at below.)

The other part of the technological vision is that the online learning system becomes this vast store of the learner's experience with the system, which can then be used for personalization of the subsequent experience, aided by a large dose of artificial intelligence.  (Perhaps the authors can get Amazon to become a big sponsor of their efforts and then they can call their environment Alexa^2, which in my mind would be an improvement on their current unwieldy title.)  I have no big critique of this piece of their argument beyond the critique I've seen by others of AI systems more broadly considered and their potential for abuse of the personal data that these systems amass.  We live in a world nowadays where fear that Big Brother Is Watching is more prominent than it has been for some time.  Unless we have ironclad ways to assuage those fears, I don't understand why we would engage them further in online environments to promote learning in higher education.

I do have a different issue, however, about use data on which I would like clarification.  This is best illustrated by considering the learner working at a large desk, with a laptop but also with other learning tools, perhaps a textbook, perhaps a pencil and a pad of paper.  Suppose the latter are utilized to aid formative thinking - writing equations, drawing graphs, posing questions in all caps, and other things like that which go beyond mere doodling.  With technical content, I'd imagine that sort of thing happening quite a lot, though I confess that maybe it's people my age who would do it while current students would not.  If there is such content generated by the student, does it remain outside the learning system?  One can imagine having a video camera capture this content, which might be one type of work around.  But if the students themselves were aware of being watched in this way, wouldn't they feel 'on stage' and become self-conscious about it?  That itself could substantially weaken student engagement, to the point where one ditches the idea of the camera.  Yet if there is no such work around, why should we be confident that the data captured by the learning system are somehow sufficient for the desired personalization?  On this one, I simply don't get why the authors have faith that the student generated data that would be captured by the system would be sufficient.

Let's get back to interoperability.  I would like to divide software into applications whose main audience, perhaps their entire audience, is instructional use, and then perhaps only within higher education, and other applications that have broad use outside of instruction, where the instructional use might be just a minor bit of the overall use of such software.  For applications in the first group, expecting interoperability may make sense, with exchange of user data between apps the desired goal.  If, however, adhering to the standards that deliver interoperability imposes a cost on the software development, should we really expect applications in the second group to embrace the standards?   The political economy of the situation suggests that will not happen.  What then will occur?  Let me illustrate with a couple of examples.

I make screen capture videos for my class with my voice over.  Some of those are of PowerPoint presentations.  Others are of Excel files that I use to illustrate the economics.  I put those files into my campus account at a file-hosting service.  The videos are in YouTube.  Both services offer use statistics.  But those data are not granular in the way that Long and Mott envision; they give aggregate use but not individual use.  The campus did come up with a video service, based on Kaltura, but well after I started doing this.  I don't know whether the campus video service offers granular use data or not.  In the meantime, I discovered substantial external interest in my videos.  (You might call this the OER use of the content, but I want to note that most if not all the demand is coming from students who are taking parallel courses elsewhere and who are stuck on particular topics.  They find help by going through the YouTube search engine, but would never look at a repository of learning objects or a referatory like Merlot to find what they are looking for.  This student use is unlike use by instructors elsewhere who might bring the content into their own courses.)  I feel some continued obligation to support this external use, so I would prefer to leave the content where it is rather than port it into some closed container just so I could get better use stats for my own students.  If a significant fraction of other instructors are like me in this regard, quite possibly for quite different reasons, but using tools of this sort that will not integrate well with the learning system, then reliance on these other online environments will remain the norm into the future.  For example, adjunct instructors who are likely to teach for many different universities over a comparatively short time span might prefer to keep their content at an external host rather than in a campus-supported system.
And, if that is the case, instructors themselves will devalue the benefits from the integration of tools that Long and Mott argue for.

On just this example, I can see an argument for quite a different vision - a fairly stripped down environment that does the very basic functions well, but does only those functions.  That would clearly be cheaper.  And it might offer better performance on those tools that do survive into the new environment.  This alternative probably wouldn't inspire learning technologists and other IT professionals.  Yet it might make others on campus quite pleased.

Here is the other example.  Over the years I have learned to use Excel as a homework tool, in a manner much like Plato.  (My design is based on conditional response - IF functions - and conditional formatting - the text of the response is not visible at all when the font is the same color as the cell background, and then one can vary the color and the nature of the font based on whether the response is correct or incorrect.  The approach also has graphs built up step by step as a sequence of questions pertinent to the information in the graph get answered correctly.)  This use of Excel follows many years where I used Mallard as part of the homework I'd assign in intermediate microeconomics.  Mallard, and its contemporary CyberProf, were first generation Web smart quizzing tools in the spirit of Plato.  Those systems eventually stopped being developed, but another contemporary, LON-CAPA, continues in use to this day.  These environments offered more sophisticated assessment tools than can be found in commercial learning management systems and might be considered forerunners of the smart online tutoring systems that Long and Mott envision.  Back to the Excel homework.  Many of my questions are fill in the blank, where the answer is an Excel formula that mimics the algebra needed to do the economics.  The algebra is then evaluated by whether it produces the right value.  Each student gets the same problems to work, but with different parameter values, where those are based on their own identity information.
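The two mechanics at the heart of this Excel design - per-student parameter values derived from identity information, and grading a formula by the value it produces rather than by its algebraic form - can be sketched in Python.  This is a stand-in for the spreadsheet formulas, not my actual assignment; the parameter scheme and the 'right answer' here are hypothetical illustrations.

```python
import hashlib


def student_params(alias: str) -> dict:
    """Derive deterministic, student-specific parameter values from identity info.

    In the Excel version this role is played by formulas that read the
    identity cells; here a hash of the alias stands in for that.
    """
    h = int(hashlib.sha256(alias.encode()).hexdigest(), 16)
    return {"a": 2 + h % 7, "b": 10 + (h // 7) % 11}


def check_by_value(alias: str, submitted_value: float, tol: float = 1e-6) -> bool:
    """Grade by value: accept any formula that evaluates to the right number.

    The product a*b is a hypothetical 'right answer' for one question; in the
    actual homework it would be whatever the economics algebra requires.
    """
    p = student_params(alias)
    correct = p["a"] * p["b"]
    return abs(submitted_value - correct) < tol
```

Each student thus works the same problem structurally, but with numbers that are their own, and any formula that reproduces the correct value gets credit.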

To get credit for the homework, the students need to get all the questions right - no partial credit. When they do that Excel spits out an individual specific key.  The key is based on the particular homework and the student identity information provided at the start.  I would love it if this information could somehow automatically find its way into the course grade book, which is now kept in a learning management system.  But doing that is beyond me.  So, instead, I have students enter two bits of information into a Google Form.  One is that key I mentioned.  The other is the student alias that I assign.  (Each student alias is the name of a famous economist concatenated with the course name and semester of the course offering.)  Even if an outsider to the class somehow stumbled onto the information in this Google Form, the student's true identity should be protected.  So I believe the practice is consistent with FERPA.  But then I have to move the information over from the Google Sheet that has the student responses to the course grade book.  That I do manually.  This is extra clerical work that most instructors would not put up with.  I tolerate it because my class is comparatively small, about 25 students, and because it allows me to give meaningful homework that I otherwise don't have to grade.  If there were a learning system that did this as well as the Excel and eliminated the need for me to do the clerical work, I would happily incur the one-time costs of transferring my content into that system.  I hate doing clerical work.
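The key scheme can also be sketched in Python, again as an analog to the hidden Excel formulas rather than the real thing.  The secret, the homework identifier, and the alias format below are all hypothetical; the point is only that the key is individual specific and checkable by the instructor without the student's true identity appearing anywhere.

```python
import hashlib
import hmac

# Hypothetical instructor-side secret; in the Excel version the 'secret' is
# effectively the hidden formula inside the workbook that computes the key.
SECRET = b"instructor-only-salt"


def completion_key(homework_id: str, alias: str) -> str:
    """Individual-specific key, issued only when every question is correct."""
    msg = f"{homework_id}:{alias}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:8].upper()


def verify_submission(homework_id: str, alias: str, submitted_key: str) -> bool:
    """Check a key entered into the Google Form against the expected value."""
    return hmac.compare_digest(completion_key(homework_id, alias), submitted_key)
```

A script along these lines is also what would be needed to automate the clerical step I described: read the alias/key pairs out of the Google Sheet, verify each one, and write the scores into the grade book export.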

Now consider the case in high enrollment classes, with at least an order of magnitude more students than my class, where the logistic issues in running the course are far greater, and where the class is very likely now taught by an adjunct.  These courses probably rely on the quiz tool in the LMS and many if not all of the questions are apt to be multiple choice, quite possibly imported from a publisher's test bank for the textbook that is used in the course.  If the same instructor has been teaching the course with this textbook for a while, no doubt there were lots of headaches getting the course site set up the first time through, but those headaches are in the past.  This is part of the lock in I mentioned above.  This instructor has not authored the assessment content used in the course.  Any assessment content that was more complex and designed for a different learning system would have to be screened by such an instructor, as to whether it is appropriate and really better to implement, meaning it is not buggy and one can anticipate large learning gains from switching approaches.  But, almost surely, this would mean the instructor would need to write different exams, an arduous task in itself.  It's then likely that mean scores on those new tests would be lower than the means have been on the current tests, just because the approach is new.  And it's likely that the instructor's course evaluations would take a hit as a consequence.  Would such an instructor willingly incur that for the promise of what the new system might deliver in the future?

Next consider other low enrollment courses like mine (which on my campus are mostly upper level courses, if not graduate courses).  Such courses might not use the quiz tool in the LMS at all and instead rely on more open ended student assignments - projects, presentations, term papers, etc.  Indeed, these courses may only use the LMS incidentally and instead use other collaboration tools to support course work.  Do courses like these stand to gain much from the highly personalized learning environment that Long and Mott envision?  Or do such courses already get personalization from the work as it has been designed for the course?  If the course is reliant on some other tool - a wiki, Google Docs, or some other environment that encourages collaboration - might the instructors of courses like this see little or no benefit in the vision that Long and Mott articulate, because they've been doing this for a while without interoperability and so don't see the need for it?

In this second example, I am the exception who would embrace the Long and Mott vision.  (In addition to Excel, I have the students use blogs out in the open, according to their alias.  Tracking this, too, has to be done manually at present.)  The other instructors are the rule who would not, although the reasons are quite different depending on whether the instructor is teaching a high enrollment class or not.

Let me make one additional point purely on the technology part of the argument.  Long and Mott don't consider other potential uses for course sites, so don't get into some issues that have vexed us all over the years, such as whether much of the class site should be publicly available or if it should be hidden from the public eye and accessible only by those who have the appropriate login credentials.  Yet there are other obvious potential uses of these sites.  For example, students who are considering whether to register for this semester's version of a course but who remain uncertain whether that is a good fit for them will likely want to have a look at the course site from when it was last offered.  This is particularly true if the instructor remains the same.  Instructors don't design their course sites with this other use in mind, so if they are presented with a closed container as the learning environment, they are apt to preclude this other use.  (Indeed, on my campus it is the academic department's responsibility to obtain copies of syllabi and provide those to students who are interested.  Information beyond the syllabus, while it might be useful to students during registration, is viewed as extraordinary and is not collected.)  There are other potential uses as well, for example, to have other instructors embrace novel teaching practices by imitating those practices developed by an innovating instructor.  These other uses suggest that class sites should be publicly available.  FERPA and copyright, in contrast, have encouraged the LMS to be a closed system, making external access to the class site difficult to attain.  Do Long and Mott have a way to get the best of both possible worlds?  Or is this one a case where we will continue to kick the can down the road, because that's all we can do?

* * * * *

I found myself so amazed by reading the suggestion that learning objectives should be correlated across classes, and that considerable effort should be put in so the joint course offerings present a coherent vision to the learner, that I thought it appropriate to devote a separate section just to consider that recommendation.  As an ideal, who can argue with it?  (I was amused that Long and Mott appeal to Herbert Simon to support this recommendation.  Simon was a Nobel Prize winner in Economics and one of the truly novel thinkers about how organizations work, but I hadn't realized that he had also articulated this vision of team production in instruction.)   Yet it is so far away from where we are now that I wonder how it is reasonable to expect it to happen.  Or, to put it another way, what other accommodations must be put in place to encourage it to happen?

Let me first describe the usual practice as I see it in undergraduate instruction, at least on my campus.  Then let me consider some alternatives that depart from the usual practice and are more in accord with what Long and Mott consider.  Finally, I want to consider whether those alternatives can become more numerous or if that's not in the cards.

Many comparatively low enrollment courses have only one instructor over time.  The same person teaches the course over and over again.  No other instructor teaches the course.  To the extent that preparing a course for the first time is a big effort, the pattern I described is efficient as it economizes on the fixed cost of developing a new course.  In this environment the instructor comes to feel that he or she owns the course.  Outsiders who are perceived by the instructor as having less standing have little to no influence in how the course is taught.   Onto this let's overlay how faculty development happens.  In the main, this is by opt in of the instructor.  The college and the campus offer a variety of workshops and then market those to instructors.  It is the instructor's choice whether to attend those or not.  If the instructor attends, it remains the instructor's choice whether to embrace any of the lessons from the workshop or not.   The academic department that houses the course exerts very little influence on the subject matter of the course or on the learning goals embedded in the course.

Larger enrollment classes may differ from this pattern in three ways.  First, there may be multiple lecture sections taught by different instructors.  In this case it is possible, though it doesn't always happen, that there is coordination between the instructors.  (For example, they may offer common exams.)  This coordination may be thought of as more for the purpose of consistency than to get at certain learning goals.  Large courses tend to be very static.  When they are revised, considerable thought is put into that.  In between revisions, there is little to no tweaking in the approach.  Second, there may be discussion sections led by TAs.  Those too need coordination.  TAs are supposed to follow the lead set by the course coordinator, rather than exercise their own independent judgment on the material to be covered.  Third, when the course serves as a prerequisite for some other course or some major, the client course, major, or department may react when there are complaints about the prior preparation not delivering on what it is supposed to be doing.  This doesn't happen very often.  When it does happen, there is some negotiation about how the course should be taught in the future to better satisfy client needs and aspirations.  Absent the prerequisite lever, clients don't have much power to influence how the earlier course is taught.

In my particular case, I have been teaching one section a year of a course called The Economics of Organizations since fall 2012.  The course is my design.  The course is taught under a special topics rubric.  If I decide in the future to spend falls outside of central Illinois, the course won't be offered.   There is nobody else to teach it. The department asks me for my syllabus each time it is offered.  Otherwise, the department exerts no influence as to the content of the course.  Put a different way, the trust model is in full use here.  I am trusted to make the appropriate decisions about course subject matter and course modality.  As long as there aren't complaints from students to the Economics department, the trust model holds sway.

Thus, what Long and Mott argue for regarding collaboration and coordination across courses would entail much greater involvement by the academic departments than is the current norm and some of that would need to address instructor willingness to adjust the teaching in a way where the instructor has far less control.  How to do that will pose a substantial challenge.

Now I want to offer a potential path through this thicket.  I have been involved in team teaching efforts on multiple occasions and they have been uniformly pleasurable experiences for me.  The one I want to focus on here was done in an adult education context.  From 2007 to 2009 I was part of an evolving group of 'faculty' who conducted the Educause Learning Technology Leadership Institute.  (Some people rotated out of the group while I continued to serve.  Others rotated in to replace them.  I rotated out after the 2009 institute.)  The institute itself lasts one work week.  The planning that goes into it is real and substantial.  Things may have changed since, but the way it worked when I was involved is that each faculty member would have primary responsibility for two different sessions and would be paired with a different faculty member for each of these, typically a different person for each session.  So some of the planning would be on a session by session basis, done by those two faculty to figure out the content of the session and then the way to conduct that session.  Then there was planning by all the faculty together along with the Educause staff who supported the institute, to put the pieces together and to work through the various snags that arose in the process.

I found all of this quite collegial and very enjoyable.  I felt none of the ownership I mentioned for my undergraduate economics course.  Indeed, in my first year as part of the group I came in as a pinch hitter to replace somebody else who had gotten sick.  So I only started in mid year, when normally the start is much earlier.  As a consequence my job then was simply to make it work as best as I could and otherwise to go with the flow.  Yet people who know me are aware that I have a strong need to engage in self-expression in some way.  I found I could readily satisfy that with the group, even while earnestly trying to support the group goals.  Not everything worked perfectly, to be sure, but a good bit of it came off quite well.

LTLI has a structure that facilitates all the planning by the faculty and Educause staff.  All plenary sessions have the attendees in the same room at the same time.  When there was group work to be done, and there was plenty of that for a project called Making the Case, the various groups of attendees were separated but worked in parallel.  All of this was tightly scheduled, part of the planning for the institute.  In such a tightly structured environment, coordination by the faculty is much easier.

The parallel environment on our campuses sometimes occurs in professional masters programs, particularly those that have a common core offering during the first semester/year (the duration of the common curriculum depends on duration of the overall program).  During the common curriculum phase, the students take their courses in lock step.  A lock step curriculum is a good way to achieve the tight structure that can support substantial collaboration across courses.  If one wants broader collaboration across courses, as Long and Mott argue for, perhaps they should be considering whether there can be broader implementation of a lock step curriculum at the undergraduate level, particularly during the general education phase, during the first year or two.  We don't have that now. Each student registers for a unique program of study and is not grouped with other students who take all the same courses.  We might ask whether the alternative is possible and if it is what it would take to make it a reality.

Now a personal anecdote on this score, as I, for one, think pursuing this goal of a lock step curriculum would be something good to do.  Nine or ten years ago, when I was the Associate Dean for eLearning in the College of Business, I suggested at one of the weekly meetings of the Department Heads, A-Deans, and Dean that the college try to do just this.  The Associate Dean for Undergraduate Education, a good guy who really cared about doing his job well, just laughed.  He fully embraced the goal.  But he said it was entirely impractical.  There was literally no way to implement it as just one college in a very large campus.

Now, with that memory still fresh in my head, I'm reacting the same way to Long and Mott.  The goals are great.  I wish them good luck in getting there.  But if I were allowed to bet on the proposition, I would bet against.  It seems to me just too hard to accomplish.

* * * * *

There are still a few other issues that bear mention and with which I will close this already very long post.

One of these is about the relationship between the textbook and the online learning environment.  Which drives the bus and which takes a back seat?  I won't try to answer that here, but surely it needs to be worked through in a convincing way, one that doesn't wreck the vision all by itself.

A second one is about the right market structure to support this new online environment.  Do we think a single commercial venture or a variety of competing companies will support this innovation?  My personal history here is more than a bit dated, but I certainly can remember back to when Blackboard bought out WebCT (Illinois was a large WebCT client at the time and the learning management system was my baby then).  To be charitable, let us say that things didn't go swimmingly immediately after that.  I am totally ignorant of the current nature of the LMS market, but I remain suspicious that a collegial environment can be sustained this way and that intermittent profit taking won't disrupt the innovation cycle in some manner entirely unintended by the pioneers of the innovation itself.   On the other hand, other approaches to sustain the innovation, whether open source, community source, or some yet new form, seemingly can work at low scale but then become encumbered beyond that.  My point here is that this too needs to be worked through in a convincing manner.

The last one is something I am continually surprised about.  Technologists such as Long and Mott continue to articulate a view that the technology itself will drive change.  They place great faith in the technology in doing that.  In their view the history doesn't refute this hypothesis.  It is just that we've had the wrong technology (the LMS) as the driver.  Put the right technology in place and the results will be wonderful.  I subscribed to this view for a short period of time in the mid 1990s, as I was just getting started with ed tech, when the possibilities seemed enormous, even while the bandwidth was quite limited.

I have subsequently embraced a different view, where it is the innovators and early adopters who drive change.  The technology acts as a facilitator for them and quite often the change then doesn't carry over to majority users.  Very early on in my time as a learning technology administrator, I learned about something called Hawthorne effects - early use of an innovation would produce different results than later use, in this case because the early use was monitored while the later use was not.  In the academic setting, I have come to believe that those early users are quite unrepresentative of the later adopters.  The early user has a desire to be creative with the technology and to fit it in interesting ways to address issues the current environment poses.  (This makes teaching very much like applied research.)  The later users employ the technology in a much more mundane way.  The benefits from adoption are considerably less as a consequence.

I want to note that which of these views is right can't be identified by the history over the last 20 odd years or so.  But I think it obvious that more people in IT subscribe to the first view while more of those outside IT subscribe to the second.  So I will close by noting that if those inside IT want to make the Long and Mott vision a reality and if they really need to enlist some people outside of IT (those with budget authority) to support the effort, they have their work cut out for them.  At the least, I hope my post illuminates those tasks that need to be done to get there.
