Thursday, June 23, 2016

Learning Analytics - My Take

I want to begin with a couple of examples where technology I use personally provides feedback of a sort.  I will critique these examples for their effectiveness.  They are deliberately taken from outside the teaching and learning arena so everyone can see what is going on and express an opinion on the matter.  The reader will have experienced something similar, I'm sure.  I will then try to extrapolate from those examples and pose some questions based on that extrapolation.

The first example is Google search, but I will focus on a feature that usually gets very little commentary.  Google is my default search engine and I make quite a lot of use of it.  When I am writing I probably do a search every few minutes or so.  I likewise might do a search when I am reading online and something occurs to me to follow up on.  I interrupt the reading and search then and there for that something, rather than wait until I have finished what I was reading.  Search, in this sense, is an alternative to taking notes on the reading.  I almost never take notes and I rarely even bookmark the pages I've searched, relying on the browser history instead to do that for me.

The feature of Google search that fascinates me is what happens after I type a few letters into the search box.  A pull-down menu is generated that offers suggestions about what I am searching for.  It is a remarkably good function and gives the impression that Google is reading my mind.  For example, just now I typed the letters "thel" in the search box (without the quotes).  The second item on the pull-down menu was Thelonious Monk, the person I was thinking of when I started to type that search.  This ability to match the likely search target from just a few letters offers powerful feedback to the user: it is immediate, it really helps if spelling is an issue in the search, and its efficacy encourages repeated use.

I do not know anything about the algorithm that generates the items on that pull-down list.  In particular, I don't know whether it is based only on the aggregate experience of all Google users, an incredibly large data set, so that the pull-down list returns the most common searches that start with those letters, ordered by their relative frequency, or whether my own personal data also matters for what shows up on the list.  As it turns out, I listen to Pandora in the browser rather than through a dedicated app (on the phone I use an app) and I have a Thelonious Monk station, though I listen to it infrequently.  Does that matter in what Google returns?  I don't know.  But I did just try the same search at Yahoo and the order of responses in the pull-down menu was different.  That doesn't explain why, but it does suggest a puzzle in need of some resolution.  Regardless of that resolution, I can say that I'm quite happy with the way Google does this.  It works well for me.
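To make the puzzle concrete, here is a minimal sketch in Python of the two hypotheses.  Everything in it - the queries, the counts, the boost factor - is invented for illustration; this is not Google's algorithm, just a toy of the aggregate-frequency idea with an optional bump for queries in the user's own history.

```python
# Toy autocomplete: rank prefix matches by aggregate frequency, with an
# optional boost for queries already in the user's own history.
# All queries, counts, and the boost factor are made up for illustration.

AGGREGATE_COUNTS = {
    "thelma and louise": 1_500_000,
    "thelonious monk": 900_000,
    "the louvre": 1_200_000,   # does not match the prefix "thel"
}

def suggest(prefix, personal_history=None, k=3, boost=2.0):
    """Return up to k suggestions that start with prefix.

    With personal_history=None the ranking is pure aggregate frequency.
    Otherwise, queries the user has issued before get their score
    multiplied by boost, so personal data can reorder the list.
    """
    personal = set(personal_history or [])
    scored = []
    for query, count in AGGREGATE_COUNTS.items():
        if query.startswith(prefix.lower()):
            score = count * (boost if query in personal else 1.0)
            scored.append((score, query))
    return [query for _, query in sorted(scored, reverse=True)[:k]]

# Aggregate-only ranking puts Thelma first; mixing in my own history
# (say, the Thelonious Monk Pandora station) flips the order.
print(suggest("thel"))                                        # ['thelma and louise', 'thelonious monk']
print(suggest("thel", personal_history=["thelonious monk"]))  # ['thelonious monk', 'thelma and louise']
```

Whether the real ranking uses personal signals at all is exactly what I can't tell from the outside; the sketch only shows how little personal data it would take to change what the user sees.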

Let's turn to the second example.  When I am looking up a book title or an author, I will typically first search Amazon.  Their site is more user friendly than the Campus Library site (which I use mainly to search for individual articles that are likely in some database).  Further, I'm typically not trying to get a copy of the book.  I'm just looking for some bibliographic information about it, perhaps so I can provide a link in a blog post.  Invariably, after this search has been completed, the next time I go to Facebook there is an ad for said book at the Amazon site, typically in the sidebar but once in a while even directly in my News Feed.

In this case it has to be my own search behavior in the browser that triggers the ad.  This seems remarkably unintelligent to me.  Why should I pay attention to the ad when I so recently had been to the Amazon site looking at the page for the book?  If I hadn't bought the book the first time around, is it at all likely that the ad will now convince me to go back to the site and make a purchase?  Somebody must think so, but I don't get it.  At best, it is a heavy-handed intervention, demonstrating the interests of Amazon and Facebook but disregarding my interests as a user.  I understand fully that they are both businesses and need to make a buck to continue to operate.  But they are both making money hand over fist.  They could afford to make a little less if it meant greater user satisfaction.  (I may not be the best user to take as an example here, because I hate to be sold anything, and if there is a hint of salesmanship in the process I find it a turnoff.)

I want to note that the Facebook robot visits my blog every time I post a Note, which in turn happens because I repost my blog entries to Facebook.  So there is a lot of information about me from which to form a profile.  But I believe this information is largely discarded because they don't know how to data mine it effectively.  The searches at Amazon, in contrast, are data mined to the fullest.  Yet the action taken based on that data mining is, in my view, very heavy-handed.

* * * * *

Let's switch gears now and focus on the teaching and learning situation in college, particularly at the undergraduate level.  Here are a series of questions informed by the examples above, each followed by a bit of commentary on how to consider the context in which the question is posed.

Q1:  What is the lag time between the generation of the data that triggers the feedback and the receipt of the feedback itself?

Commentary:  Short lags, as in the Google pull-down list, facilitate learning.  So, for example, in a recent post called Feedback Rather Than Assessment I discussed students doing a self-test that is auto-graded.  After responding to a particular question the student is told whether the question has been answered correctly.  If not, the student gets feedback aimed at helping the student better understand what the question is asking and how to go about finding the correct answer.  That feedback might be based on the prior experience of other students who answered the question in the same way, or on how the particular student already answered other questions related to the current one.  Feedback of this sort, done well, would most definitely facilitate learning.
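To show what such immediate, targeted feedback might look like, here is a small hypothetical sketch in Python.  The question, the correct answer, and the hints keyed to common wrong answers are all invented for the example; the point is only that the lag between the student's response and the feedback is essentially zero, and that the hint can be tailored to the particular mistake.

```python
# Toy auto-graded self-test item with immediate, targeted feedback.
# Hints are keyed to wrong answers that prior students commonly gave.
# The question, answers, and hints are all invented for illustration.

QUESTION = "If price rises and quantity demanded falls, the demand curve is..."

CORRECT = "downward sloping"

HINTS_FOR_COMMON_WRONG_ANSWERS = {
    "upward sloping": "Re-read the law of demand: price and quantity demanded move in opposite directions.",
    "perfectly elastic": "Elasticity measures responsiveness, not the direction of the slope.",
}

GENERIC_HINT = "Sketch the demand curve and label both axes before answering again."

def grade(response):
    """Return (is_correct, feedback) the moment the student responds."""
    answer = response.strip().lower()
    if answer == CORRECT:
        return True, "Correct."
    # Targeted hint if this is a mistake other students commonly make,
    # otherwise fall back to a generic prompt.
    return False, HINTS_FOR_COMMON_WRONG_ANSWERS.get(answer, GENERIC_HINT)

print(grade("upward sloping"))
print(grade("downward sloping"))
```

A real item bank would of course be richer than this, but the structure - response in, feedback out, no waiting - is what distinguishes this use of data from the long-lag interventions discussed next.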

In contrast, long lags are not really about learning.  Think, for example, of the student who has not submitted any of the work after several deadlines have passed, which then triggers a phone call from an advisor who wants the student to make an office visit.  That is about (non)participation and then providing remediation for it.  Participation analytics is perhaps not a jazzy label, but it would be a more accurate description of the use of data in this case.  Further, to the extent that the meeting with the advisor produces a change in the student's behavior thereafter, it should be evident that there is a degree of coercion in getting that behavioral change.  The student has to submit to authority.  If in retrospect the student agrees that the authority was in the right, then this bit of coercion is beneficial.  That, however, should not be assumed.  I will discuss this more in another question.  Here let's note that there is no coercion entailed in the feedback triggered by the self-test, though there is some coercion in getting the student to start the self-test to begin with.  This issue of when and where coercion is appropriate in instruction is something that needs to be considered further.

Q2:  What is the nature of the information on which feedback is based?

Commentary:  Typing into a search box reveals something about what the person is thinking.  Enough of that sort of thing and you can get a good sense of where that person is coming from.  If the task is for the student to write a short paper, then the various searches the student does might very well indicate how well the student did the homework necessary to write that paper.

In contrast, clicking on a link to a file to download it or preview it online says essentially nothing about the student's reaction after seeing the file or listening to it.  Further, it doesn't say anything about whether the student pays full attention to the content of the file or instead multitasks while supposedly looking at it.

More generally, the issue is whether we are getting sharp information that brings the picture of the student into fine relief or if we are getting only dull information, which will speak mainly to participation at some level but not to learning.

One further point here is that with dull information it is much easier for the student to game the system.  If a few clicks will get the student out of some obligation the student would prefer to avoid, those clicks will be observed, but they might not signify what they are supposed to indicate.

Q3:  Is the sample size adequate to provide useful feedback based on it?

Commentary:  I'm again going totally outside teaching and learning to illustrate the issue.  I am a regular reader of Thomas Edsall's column in the New York Times.  I like the way he polls a variety of experts in the field on a question and uses his column to let them do the talking, either contrasting their views when they differ or describing the consensus when one is reached.  Recently Edsall has been on a Donald Trump kick, just as many other columnists have been.  In that I'm afraid Edsall has finally reached the slippery slope.

The Trump candidacy may be the electoral version of The Black Swan which, as a graduate school classmate informs me, is a colorful label for a random variable whose underlying distribution exhibits fat tails, in which case outliers are not at all uncommon and the sample mean can be quite volatile instead of settling down.  Consider that on May 11, Edsall posted a piece called How Many People Support Trump but Don't Want to Admit It?  That essay gave plausibility to the conclusion that Trump will be the next President.  Yet in a piece dated today, called How Long Can the G.O.P. Go?, Edsall has a quite different message.  Here Edsall argues that Trump most likely is going down and he may bring down other Republican candidates with him.

How can there be two such varied pieces within such a short time span?  I don't know.  It could be that many undecideds made up their minds in the interim, or that some who had been pro-Trump changed their minds.  Or it could be a fat-tail problem: the polling samples are mainly noise and are not telling us what is really going on with the electorate.  I am not a statistician.  But here, even a statistician might not be able to tell.  If the underlying model has changed and the statistician doesn't know that, taking a historical approach to the observed data will lead to erroneous conclusions.
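For anyone who wants to see the fat-tail point concretely, here is a short simulation of my own, purely illustrative.  It compares the running sample mean of draws from a normal distribution, which settles down quickly, with the running sample mean of draws from a Cauchy distribution, a standard fat-tailed example, which keeps getting knocked around by outliers no matter how large the sample gets.

```python
# Running sample means: a thin-tailed distribution (normal) settles down,
# a fat-tailed one (Cauchy) does not.  Purely illustrative.
import math
import random

random.seed(2016)

def running_means(draw, n):
    """Yield the sample mean after 1, 2, ..., n draws from draw()."""
    total = 0.0
    for i in range(1, n + 1):
        total += draw()
        yield total / i

normal_draw = lambda: random.gauss(0.0, 1.0)
# Standard Cauchy via the inverse-CDF method: tan(pi * (U - 1/2)).
cauchy_draw = lambda: math.tan(math.pi * (random.random() - 0.5))

N = 100_000
checkpoints = {100, 1_000, 10_000, 100_000}

for name, draw in [("normal", normal_draw), ("cauchy", cauchy_draw)]:
    for i, mean in enumerate(running_means(draw, N), start=1):
        if i in checkpoints:
            print(f"{name:7s} n={i:>6d}  sample mean = {mean:8.3f}")
```

The normal means hover near zero at every checkpoint, while the Cauchy means jump around even at n = 100,000.  If the electorate's behavior were generated by something closer to the second column, a bigger poll would not rescue the inference.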

Most learning technologists are not statisticians, nor are the bulk of the instructors to whom they provide consultation.  Some people will utter the mantra that the data always tell the story.  No, they don't.  Sometimes they do.  Other times there is a black swan.

Q4:  Do students perceive the instructor (the university) to have the students' interests at heart when recommending some intervention based on a learning analytics approach?

Commentary:  In spring 2011 I taught for the first time since I retired.  Of the two classes I had then, one was an advanced undergraduate class on Behavioral Economics.  I had some issues with that class, so I opted not to teach that particular subject matter again.  In spring 2012 I taught a different course, on The Economics of Organizations, which is now the only class I teach.  As it turned out, the spring 2012 class was very small - only 8 students - so we had a lot of discussion.  Further, a few of the students had taken the Behavioral Econ class from me the year before.  These students were extremely candid.  They railed about their education and were quite critical of the place.  I had previously heard criticism from students about my own teaching, on occasion, but usually that amounted to my course being too hard or to my sometimes not being encouraging enough to a student.  I have never been criticized for not caring about the students.

Yet that was the essence of the critique those spring 2012 students were making.  The Econ department didn't care about them.  There were so many Econ majors (I believe around 850 at the time) and so few people to advise them that they felt they were being treated like a number, not like a human being.  This was news to me at the time, but I've been alert to it ever since.  It is why I came up with that example of Amazon and Facebook in the previous section.  My attitude there is essentially the same as the attitude these students conveyed to me.  And there is guilt by association.  If the Econ department didn't care about them, then the U of I as a whole didn't care about them either.

I don't know whether most students on campus come to this view or not, though I suspect it is more pronounced in LAS than in either Business or Engineering, since some of this is a resource matter and LAS, which doesn't have a tuition surcharge, is more resource-challenged than those other colleges.

Learning analytics is being touted as a way to let data provide answers in resource scarce environments, particularly at large public institutions.  But there is an underlying assumption that the students trust the institution to make good interventions on their behalf.  That assumption needs to be verified.  If it is found wanting, then it may be that learning analytics won't produce the outcomes that people hope it will deliver.

Q5: Is there a political economy reason (i.e., a budget reason) for learning technologists to advance a learning analytics agenda?

Commentary:  I'm an economist by training and am comfortable making political economy arguments.  Indeed, I will go so far as to say there are always political economy factors to consider in any sort of social intervention.  To me that is an entirely uncontroversial assertion.  Yet to the non-economist it might seem like a radical proposition.  So here I want to say that I've been down this route before and made essentially the same argument in a different context.  I will first review that argument.  Then I will try to update it to the present.

Soon after I started this blog, in spring 2005, I wrote a series of posts called Why We Need a Course Management System, with Part 2 being the particular essay that made the political economy arguments.  At the time, my campus had many online learning systems supported at the campus level (with still other systems in the various colleges).  The campus was in the process of moving to an enterprise CMS (now I would call it an LMS, so as to distinguish it from a Content Management System).  This was in some sense necessary for scaling reasons.  Usage had grown dramatically.  But it is conceivable that several of the older systems could have been updated and continued instead of moving to one monolithic system.

The technical issues aside, my political economy argument said the case for many different systems - users pick which they prefer - doesn't work well in a tight budget environment.  Further, home-grown systems of this sort are particularly at risk, especially as they age.  A larger commercial system could command a certain size budget to support it.  The smaller systems, in contrast, could be nickel-and-dimed, and for that reason units were reluctant to claim ownership of them.  At Illinois there was Campus Gradebook, a stand-alone tool that was a derivative of the PLATO system, very popular with the instructors who used it.  There was also the intelligent quiz tool Mallard, also quite popular with the instructors who used it.  I was the one who gave the kill order for Campus Gradebook.  Mallard lasted longer, but eventually it died as well, after I had left the Campus IT organization.  These tools did what they did better than the LMS.  But they couldn't survive from a resource point of view in a tough climate.

Turning to now, the financial climate is even tougher, and the LMS is pretty much an old-technology idea at this point.  Further, with the exception of a few tools in the LMS, there are better alternatives out there, particularly for file sharing, communication, and calendaring.  So the temptation, budget-wise, to cut learning technology as an area must be pretty large.  Yet nobody wants to see their own budget cut.  Instead they want to put forward an argument that, in the reinvention of their area, they provide an essential function that needs full funding.

Which side of this political economy argument is right?  I don't know, but my sense is that the more learning analytics is tied to actual innovation in teaching practice or learner strategies, the more it makes sense to fund the area.  If there is stasis on these matters, then to me this starts to look a lot like the arguments I was making 11 years ago.  The message here is that the real payoff is not in what the technology can do but in its potential for beneficial impact on patterns of use.  I wonder if the field can be sufficiently self-critical in this regard.  There is a very strong temptation to play the role of cheerleader.  I should add here that, while they are not identical, there are parallels between how learning analytics is considered now for college education and the entire accountability movement in K-12 education.  Thinking about the latter gives me the shivers, and that provides a good chunk of the motivation for writing this piece.

* * * * *

Let me wrap up.  Particularly on big campuses there is a general problem with IT: the people in the IT organization talk to each other, and thereby reinforce their own views, but don't talk nearly enough with others, especially those who don't speak geek.  As a result the IT area develops its own conception of mission, perhaps based on the language in a fairly abstract campus strategic planning document, rather than determining its mission as part of solving a larger puzzle that emerges via extended conversation with the entire campus community.

Learning technology may have it even harder than IT in general in this regard because there are other campus providers to grapple with - particularly the Center for Teaching folks and the folks in the Library - and each of them may also have issues with too much internal discussion and not enough extended conversation with the entire campus.

These are ongoing concerns, whether in good resource times or bad.  Tough times, however, tend to make us all hunker down even more.  For the good of the order that hunkering down is the wrong thing to do, but for our own preservation it is perfectly understandable behavior.

When trying to look for universal truths, I find myself going back to the TV show The West Wing (though the show is getting dated now).  In a particularly good episode entitled Hartsfield's Landing, President Bartlet tells Sam over a game of chess to "see the whole board."  That's the message I'm trying to deliver here.
