This is a follow-up post to my previous one. Here is a very quick summary, so readers don't have to go back to that post if they'd prefer not to.
1. In the kids' game Life, there was something called a Share the Wealth Card. I used that as a metaphor for programmatic wealth redistribution activities.
2. Mean household wealth in the U.S. is around $650K. Median household wealth is around $81K. Sharing the wealth here implies raising the median so it is closer to the mean, via some sort of Robin Hood approach.
3. Brexit and the Trump candidacy have made evident some rather frightening nativism and anti-immigration sentiment. Would well-to-do people who are concerned by these developments now be willing to share their own wealth, to some degree, as a way to quell the more vitriolic elements of the nativism and racism?
In this piece I want to take up the politics of how such redistribution might be accomplished and actually sustained. Let us recall a little history. Wealth was also highly concentrated in the Roaring Twenties. The New Deal and policies after WW II (for example, the GI Bill and federally insured home mortgages) helped to build a middle class and compress wealth. That trend started to reverse in the late 1970s, but there is no doubt that the "Reagan Revolution" accelerated it. The previous post took on the argument that globalization and structural issues totally explain the recent hollowing out of the middle class. That Denmark got some mention during the Democratic primaries indicates that a big part of the issue is the willingness to redistribute wealth.
This piece from 18 months ago - in other words, from before it was obvious that Trump would be the Republican nominee - is quite instructive on when redistribution will occur politically. Among the voting population, wealthier voters have greater participation rates, while poorer voters tend not to participate. The more that modest-income voters do participate, the more likely it is for government policy to have aspects of wealth redistribution toward those of modest means.
The syllogism should be very clear. If we went to a system of universal (mandatory) voting - Wikipedia calls it compulsory voting - that would be an effective way to ensure sharing the wealth. Politicians now can ignore the welfare of the poor, the near poor, and much of the working class, because those voters don't matter to the politicians in getting reelected. As a consequence, our recent politics has strongly redistributed wealth upward.
To my knowledge, no candidate in this election has even mentioned mandatory voting. Indeed, the Republicans have pushed for voter ID laws, clearly a big step in the opposite direction. One might guess, therefore, that if either party were to push for mandatory voting it would be much more likely to be the Democrats. But would they do this?
Let us consider the matter from a purely selfish view. There are many professional types and well-to-do people who now are Democrats. Mandatory voting would probably mean that these people would take a hit economically, albeit for the good of the order. Are enough of them now willing to do that out of fear of where the logic of Trumpism will lead the country? Or would a push for mandatory voting actually dry up campaign contributions from this group and encourage many of these people to vote Republican?
This issue probably will not come up at all during the current campaign. It would be a distraction now, and especially if the Clinton campaign feels it has a commanding lead, it would violate the bird-in-the-hand principle. But it should begin to receive attention immediately thereafter.
It is my view that Chuck Schumer will be the prime player on this matter. He is on record in the not too distant past as saying the party should embrace policies for the middle class, but not go overboard to help the poor, as they mainly don't vote. And, of course, he will be the Senate leader of the Democrats, taking over for Harry Reid. At issue then is whether he will recalibrate his own views as a consequence of the Trump candidacy, Brexit, and related developments.
If the Democrats could hold onto the bulk of their more well-to-do supporters, a move toward mandatory voting would be an enormous political boon for them and would help them at both the national and the state levels. Further, in watching the contortions of Republican political leaders as they talk about the Trump candidacy - the line that's been repeated over and over again is that we need to respect the will of the American people - note that it will be quite hard to argue against mandatory voting. Elected officials don't want to go on record saying that they prefer for some eligible voters not to exercise the franchise.
So if the issue does come up it has a good chance of passing. And the view of Trump supporters (that the elites have ignored them) really offers a good argument for why it should come up. Still, it will take courage to play this card. I wonder if we have it in us to do that.
Saturday, June 25, 2016
The Euphemism We Call Globalization and the Real though Non-Proximate Causes of Weak Wages
When folks of my generation were kids we played a variety of board games, especially in the winter or when it was raining outside. The best of these was probably Monopoly, though it took a long time to play till conclusion. Another good one was Risk. There were other games as well that we played for variety, though they were not as good. For Monopoly and Risk I can recall quite a lot of detail of how the game was to be played. For most of the others, beyond the name of the game I'm mainly drawing a blank now. The one exception is Life. It had several idiosyncratic features. The one that comes to mind now is the Share the Wealth Card. I'm going to use that as a metaphor in what follows.
* * * * *
Yesterday, for no apparent reason, I did a bit of arithmetic to determine per capita wealth in the U.S. There is a lot written about income and income distribution, but much less written about wealth. I'm not sure why that is, but I think we should pay more attention to it. These are my kind of calculations - quick and dirty. The source of the numbers (Wikipedia) may not be impeccable, and given the slide in the markets yesterday the figures may be somewhat off for that reason. With those caveats, let's begin.
Do a Google search on U.S. population. A graph will appear with the current number, 318.9 million people, along with the qualifier that this is from 2014. You can scroll down to find a more recent number for 2016, but I will use the 2014 number and explain why in a second. Next do a search on total wealth in the U.S. The second hit is to a page on national wealth across countries, arrayed from the richest (the U.S.) on down. The number is $85.901 trillion. This is for 2015. If I could find a population number and a wealth number from the same year, whether 2014, 2015, or 2016, I would have used that. So when I do the per capita calculation, let's keep that year mismatch in mind.
If you divide $85.901 trillion by 318.9 million people you get $269,366.57/person. For ease in manipulating the numbers I'm going to round that down to $250K/person. It's not precise, but it is in the ballpark. Even with the rounding down, that is an impressive number to me. A family of 4 right at the average would have a million dollars of wealth. The average household actually has 2.6 people (I know this from other Census lookups and will not provide the link to support that number here), so average household wealth in the U.S. is around $650K.
Somebody with a billion dollars is then 4,000 times richer than average. Mark Zuckerberg, who is purported to be worth around $40 billion, is 160,000 times richer than average. The uber rich live on a different planet than the rest of us.
Of course the wealth distribution is highly skewed, much more so than the income distribution. There is a median household wealth number reported, $81.4K. (That number is from 2013, but I suspect it hasn't changed much since.) Note that it is about one eighth of the average. That is a huge difference. It is calculations like these that make you wonder why people at or near the median don't collectively play their share the wealth cards.
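Here is a minimal sketch of the arithmetic above, using only the figures cited in this post (the rounding to $250K per person and the 2.6-person household come from the text; nothing below is an additional data source):

```python
# Back-of-the-envelope wealth arithmetic using the figures cited above.
total_wealth = 85.901e12      # total U.S. wealth, ~$85.9 trillion (2015 figure)
population = 318.9e6          # U.S. population (2014 figure)

per_capita = total_wealth / population        # ~ $269,367 per person
per_capita_rounded = 250_000                  # rounded down for easy manipulation

household_size = 2.6                          # average persons per household (Census)
avg_household_wealth = per_capita_rounded * household_size    # ~ $650,000

median_household_wealth = 81_400              # reported median (2013)
mean_to_median = avg_household_wealth / median_household_wealth   # roughly 8

billionaire_multiple = 1_000_000_000 / per_capita_rounded     # 4,000 times the average person
zuckerberg_multiple = 40_000_000_000 / per_capita_rounded     # 160,000 times

print(round(per_capita, 2), int(avg_household_wealth), round(mean_to_median, 1),
      int(billionaire_multiple), int(zuckerberg_multiple))
```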
I want to give one caveat before moving on. Household wealth should grow over the life cycle until retirement and then decline after that. That is the normal pattern. While working you save some of your earnings, and those savings accumulate. So that older households have more wealth than younger ones is to be expected and what you'd actually want. The variation in wealth due to differences in age should be accommodated. It is the variation in wealth due to unequal earning power that we need to focus on.
* * * * *
The news about Brexit has again brought into focus the anger of those who have been dispossessed economically and who are utterly disgusted with the elites in government and in business. Trickle down was a con job. Austerity as a response to the financial crisis was a wrong-headed government policy. The elites, who should have known better, appear so clueless. One question is: why?
Two pieces this week highlight that the underlying issues are fundamentally economic. The first, by Steven Rattner, is about our broken retirement system. Too many people have not adequately funded their retirement. After a few years, they will become destitute. The 401(k) model hasn't worked. People simply aren't saving enough. The most obvious reason why is that they aren't earning enough, so they defer the saving decision to keep up current consumption. Further, they are not experts at managing their portfolios, but the model expects them to be. So they can be hoodwinked by financial advisers. Indeed, in the current setup that appears to be the expectation.
The second was this Shields and Brooks segment about voter disenchantment across the globe. Mark Shields sounded like a broken record in this clip, ticking off one economic issue after another, each contributing to the malaise at the root of the anger. Particularly prominent was debt from college student loans, which in aggregate seems to be breaking one record after another for the magnitude of such debt. This is happening while the job market for many new grads remains in a torpor. Despair over the situation then festers.
David Brooks for his part argued that there was a cultural aspect to this as well as the economic one. How else can one explain the rise of nativism and anti-immigration sentiment? Shields, however, was unyielding on the point that the cause was economic dislocation and hardship. Nobody denies the nativism, but in Shields's view it is more venting than a perceived cure for the problems. I would argue this is the consequence when those taking the economic hits don't feel that they have a share the wealth card to play. Complaining is all they have left.
And yet all that anger is truly frightening. The history lesson of the 1930s is very instructive. Demagoguery is clearly on the rise now. Read Amy Davidson's piece, Brexit Should Be a Warning About Donald Trump. Indeed, it should. If you are somebody of means now and you are frightened about Trump, what are you thinking? So another question is: have these people of means reached the point where they are willing to give out some share the wealth cards, even if it means they will end up taking a hit financially, just to get everyone else to calm down?
I do not understand the thinking of the very wealthy, particularly the apparent need to continue earning a great deal during the working life, only to become a philanthropist thereafter and give away the wealth. The logic of that pattern eludes me. Why not share the wealth with employees, contractors, and customers during the working life instead? I understand that earnings are one way to keep score in a game that Thorstein Veblen taught us about 100 years ago in his study of the robber barons. But why hasn't that game been superseded by something else that is more socially beneficial and that should ultimately be more satisfying to those playing it? This desire for hoarding beyond any reasonable bound might be justified if trickle down worked. The evidence is plain that it does not. It is probably in the province of the wealthy to decide whether, mainly out of obstinacy, we end up going down the path of demagoguery, uttering libertarian platitudes while Trump as President wrecks the country, or whether they prevent that outcome by lessening their claims on the pie. I wish I had a better feel for how this choice will play out.
* * * * *
In this final section I want to talk about the "logic of globalization" as it pertains to what workers get paid and then to consider other causes that have influenced worker pay historically. As I used to teach intermediate microeconomics, I will take that sort of approach in giving my explanation.
Globalization here means there is an internationally determined, perfectly elastic labor supply curve. Wages are set by that. A domestic employer won't pay more than that wage, even if it were able to. Were this employer concerned for the welfare of the employees and hence willing to pay wages in excess of market, that would violate the fiduciary responsibility that the employer has to shareholders. Further, if this were done in a significant way, it would open the firm up to a hostile takeover threat, as the new management could pay employees only at market, make more profit that way, and justify the acquisition of the company accordingly.
This, I think, is mostly bunk, though it sounds like a reasonable argument. First, the idea of treating employees and other input suppliers as a cost rather than as a contributor to company value encourages a mindset that costs should be minimized and thus employees should be paid the bare minimum. If employees were, instead, viewed as partners and co-producers, then sharing profits with them would be a natural thing to do. Second, for large established firms there is an ample supply of retained earnings. These are not being reinvested. The retained earnings are just sitting there. The magnitude is on the order of $2 trillion in aggregate. This is at large entrenched firms that face no threat of takeover; in contrast, companies in the 1980s that held lots of idle cash were candidates for hostile takeover. Now, a good fraction of those retained earnings could be paid to employees.
Third, in these large companies the CEO and his/her cronies are major shareholders, if not an outright majority. The expression fiduciary responsibility is itself a euphemism for selfishness and for not worrying about the welfare of others. This selfishness makes much more sense in a small, owner-operator organization. As I've tried to argue above, it is hard to understand the utility in the selfishness of the uber rich. It's more a vestigial organ. If the CEO didn't inherit the wealth, the selfishness may have been a driver early on (though I bet for many who really liked the work it was more by-product than main product). For example, consider the history of Larry Ellison, who until recently was the CEO of Oracle.
When I teach intermediate microeconomics I ask the students: when the firm earns economic profit (revenue in excess of payment to each factor of production at its opportunity cost) who gets the economic profit? Sometimes this elicits a response - profit goes to shareholders. In turn I respond to the students - shareholders are suppliers of a factor of production - financial capital. They must earn their opportunity cost or they will take their supply elsewhere. They need not earn beyond this. This puzzles the students. So they get quiet after that and ask me, who gets the economic profit? My answer - it can go to any factor of production and might go to other stakeholders as well.
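To make the classroom definition concrete, here is a small made-up numerical example (the firm and all of the figures are hypothetical, chosen only to illustrate the concept):

```python
# Hypothetical firm: economic profit is the revenue left over after every factor
# of production has been paid its opportunity cost. All figures are invented.
revenue = 1_000_000

labor_opportunity_cost = 550_000    # what the workers could earn elsewhere
capital_opportunity_cost = 300_000  # the return shareholders could get elsewhere
materials_and_rent = 100_000        # other inputs at market prices

economic_profit = revenue - (labor_opportunity_cost
                             + capital_opportunity_cost
                             + materials_and_rent)

# The residual ($50,000 here) is the surplus whose division is the bargaining
# problem: it could go to shareholders, to workers as above-market pay, to
# managers, or to other stakeholders.
print(economic_profit)
```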
That answer has them puzzling. How then does the economic profit get distributed? My reply is that it is determined as the solution to a bargaining problem. A variety of factors have impacted the outcome of that bargaining problem:
- The rise of predatory finance. You can think of the movie Wall Street and the various Gordon Gekkos out there. Or you can think of the more recent movie The Big Short. In the old (now quaint) view of finance, it is there to provide liquidity to companies that don't have enough of it and to help those companies manage their financial risk. Predatory finance does neither of those. Instead it steers economic profit to the predator and away from other factors of production.
- The decline in financial regulation and antitrust. Economic profit persists when companies have market power. Competition should cut into that. But competition can be blocked for a variety of reasons, in which case an alternative is for the government to regulate. It has been increasingly reluctant or unable to do so.
- The decline in unions. Labor is now very weak. I'm not sure that I would recommend reading Only One Thing Can Save Us, but it does make the point that a strong labor movement is consistent with high-value industrial production, and it holds up Germany as the shining example for us to emulate.
- Greater and greater myopia regarding corporate earnings. Payments to factors of production in excess of opportunity cost breed loyalty and may produce returns down the road as a consequence. A company that takes the long view treats its employees well. A company that focuses only on the bottom line today pays dirt wages.
Friday, June 24, 2016
A little analytics exercise for my own class
Below are some quantitative data on my course offerings for the last 4 years, based on information that Blogger provides and enrollment numbers that Banner keeps. (The Excel file with these data is also available for download.) Since I teach a small class, you would think this sort of information is kind of superfluous. I did this just so I could see the type of data folks who do learning analytics look at.
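As a sketch of the sort of calculation involved, the metric I discuss below could be computed along these lines (the semester labels and counts here are placeholders, not the actual figures from the spreadsheet):

```python
# Hypothetical illustration of the Hits/Post/Student metric discussed below.
# The semester labels and counts are placeholders, not the real course data.
offerings = {
    # semester: (total blog hits, number of posts, enrollment from Banner)
    "Semester A": (4200, 35, 20),
    "Semester B": (1500, 30, 25),
}

for semester, (hits, posts, students) in offerings.items():
    hits_per_post_per_student = hits / (posts * students)
    print(semester, round(hits_per_post_per_student, 1))
```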
My takeaway from this is:
(a) Hits/Post/Student is an indicator of participation or engagement of the class as a whole. Three out of the five offerings had that number near 6. (Thankfully, none have the number lower than 1.) Fall 2012 was an unusual semester (that is explained in a comment) and may have impacted the numbers. Fall 2015 was a class I really struggled with. The numbers seem to bear this out.
(b) Because I don't have individual student access stats, I can't say anything about student engagement this way. But I do have a sense of wide variation in engagement across students. On the underperformer side, these are students who are chronically late or miss submitting course work. On the overachiever side, these are students who will email me privately to discuss issues with an assignment and those who make regular use of office hours. I really don't know how hits to the site correlate with these other measures, but in principle that should be measurable.
(c) There seem to be four possible things that explain variation in Hits/Post/Student. The classroom is one. The DKH classrooms are fairly dreadful, but 223 DKH is more intimate than 123 DKH. I hate the tablet armchairs, but they are better than bolt-down seating in a small class setting.
Class size might be another explainer. I like to use Socratic methods, which work well in a small class setting but may break down in larger classes. I don't know where the line might be, but 25 students may be about the max for which my approach works well. There were attendance issues both in spring 2012 (which is why I now teach only in the fall, under the assumption that senioritis is worse in the spring) and in fall 2015. In a larger class where many kids don't show up, the kids who do come may be influenced by that.
Cohort effects could be quite significant. I may simply have had a passive bunch of students in fall 2015.
The last thing is my experience teaching the course. I do make it a point to try something new each time I offer the class, but there clearly was more novelty early on. That could influence how the live class works and in turn impact how intensively the online part of the course is utilized.
(d) If I did not have other evidence, I'm not sure that the hit data would be meaningful to me. It is a bit useful to confirm impressions I have formed by looking at other evidence. But I would never use it as a primary way of getting a sense of how the class as a whole is doing.
The final thing I'll comment on is that even with the data having to be compiled, it wasn't that hard to do. So the learning analytics folks might ask some individual instructors to do something similar for their courses to see what impression they form from doing this sort of exercise.
Thursday, June 23, 2016
Learning Analytics - My Take
I want to begin with a couple of examples where my own personal use of the technology is deployed to provide feedback of a sort. I will critique these examples for their effectiveness. They are deliberately taken from outside the teaching and learning arena so everyone can see what is going on and express an opinion on the matter. The reader will have experienced something similar, I'm sure. I will then try to extrapolate from those examples and pose some questions based on that extrapolation.
The first example is Google search, but I will focus on a feature that usually gets very little commentary. Google is the default search engine for me and I make quite a lot of use of it. When I am writing I probably do a search every few minutes or so. I likewise might do a search when I am reading online and something occurs to me to follow up on. I interrupt the reading and search then and there for that something, rather than wait to conclude what I had been reading. Search, in this sense, is an alternative to taking notes on the reading. I almost never take notes and I rarely even bookmark the pages I've searched, relying on the browser history instead to do that for me.
The feature about Google search that fascinates me is what happens after I type a few letters into the search box. A pull-down menu is generated that offers suggestions about what I am searching for. It is a remarkably good function and gives the impression that Google is reading my mind. For example, just now I've typed the letters "thel" in the search box (without the quotes). The second item on the pull-down menu is Thelonious Monk, the person I was thinking of when I started to type that search. This ability to match the likely search target based on just a few letters offers powerful feedback to the user: it is immediate, it really helps if spelling is an issue in the search, and it encourages repeated use because of its efficacy.
I do not know anything about the algorithm that generates the items on that pull-down list - in particular, whether it is based only on the aggregate experience of all Google users, an incredibly large data set, so that what is returned is simply the most common searches that start with those letters, ordered by their relative frequency, or whether my own personal data also matters for what shows up on the list. As it turns out, I listen to Pandora in the browser rather than through a dedicated app (on the phone I use an app) and I have a Thelonious Monk station, though I listen to it infrequently. Does that matter in what Google returns? I don't know. But I did just try the same search at Yahoo and the order of responses in the pull-down menu was different. That doesn't explain why, but it does suggest a puzzle that needs some resolution. Regardless of that resolution, I can say that I'm quite happy with the way Google does this. It works well for me.
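I can only guess at how those suggestions are produced, but a crude version of the aggregate-frequency idea - return the most common past searches that begin with the typed prefix, ordered by how often they occur - might look like the following sketch. This is not Google's actual algorithm, and the query log here is invented.

```python
from collections import Counter

# Toy prefix-matching suggester: rank past queries that start with the typed
# letters by how often they were searched. Purely illustrative; the query log
# is invented and this is only a guess at the aggregate-frequency idea.
query_log = [
    "thelonious monk", "the lord of the rings", "thelma and louise",
    "thelonious monk", "the legend of zelda", "thelonious monk",
]
frequencies = Counter(query_log)

def suggest(prefix, k=5):
    matches = [(query, count) for query, count in frequencies.items()
               if query.startswith(prefix)]
    matches.sort(key=lambda item: item[1], reverse=True)  # most frequent first
    return [query for query, _ in matches[:k]]

print(suggest("thel"))  # ['thelonious monk', 'thelma and louise']
```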
Let's turn to the second example. When I am looking up a book title or an author, I will typically first search Amazon. Their site is more user friendly than the Campus Library site (which I will use mainly to search for individual articles that are in some database). Further, I'm typically not trying to get a copy of the book. I'm just looking for some bibliographic information about it, perhaps so I can provide a link in a blog post. Invariably, after this search has been completed, the next time I go to Facebook there is an ad for said book at the Amazon site, typically in the sidebar but once in a while even directly in my News Feed.
In this case it has to be my own search behavior in the browser that triggers the ad. This seems remarkably unintelligent to me. Why should I pay attention to the ad when I so recently had been to the Amazon site looking at the page for the book? If I hadn't bought the book the first time around, is it at all likely that the ad will now convince me to go back to the site and make a purchase? Somebody must think so, but I don't get it. At best, it is a heavy-handed intervention, demonstrating the interests of Amazon and Facebook but disregarding my interests as a user. I understand fully that they are both businesses and need to make a buck to continue to operate. But they are both making money hand over fist. They could afford to make a little less if it meant greater user satisfaction. (I may not be the best user as an example here, because I hate to be sold anything, and if there is a hint of salesmanship in the process I will find it a turnoff.)
I want to note that the Facebook robot goes to my blog every time I post a Note, which in turn happens because I repost my blog entries to Facebook. So there is a lot of information on me from which to form a profile. But I believe this information is largely discarded because they don't know how to data mine it effectively. The searches at Amazon, in contrast, are data mined to the fullest. Then the action taken based on the data mining is very heavy-handed, in my view.
* * * * *
Let's switch gears now and focus on the teaching and learning situation in college, particularly at the undergraduate level. Here are a series of questions informed by the examples above, each followed by a bit of commentary on how to consider the context in which the question is posed.
Q1: What is the lag time between the generation of the data that triggers the feedback and the receipt of the feedback itself?
Commentary: Short lags, as in the Google pull-down list, facilitate learning. So, for example, in a recent post called Feedback Rather Than Assessment I discussed students doing a self-test that is auto-graded. After responding to a particular question the student is told whether the question has been answered correctly. If not, the student gets feedback aimed at helping the student to better understand what the question is asking and how to go about finding the correct answer. That feedback might be based on the prior experience of other students who answered the question the same way or on how the particular student has already answered other questions that are related to the current question. Well-done feedback of this sort would most definitely facilitate learning.
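As a minimal sketch of the kind of self-test item described above - the question, answer choices, and feedback messages are all invented for illustration - the logic might look like this:

```python
# Minimal sketch of an auto-graded self-test item with immediate, targeted
# feedback. The question, choices, and feedback text are invented examples.
question = ("If total wealth is about $85.9 trillion and population is about "
            "318.9 million, wealth per person is roughly:")
choices = {"a": "$26,900", "b": "$269,000", "c": "$2,690,000"}
correct = "b"
feedback = {
    "a": "Too small by a factor of 10; check your decimal placement.",
    "c": "Too large by a factor of 10; check your decimal placement.",
}

def grade(response):
    """Return immediate feedback keyed to the particular wrong answer."""
    if response == correct:
        return "Correct."
    return feedback.get(response, "Not quite; reread the question and try again.")

print(grade("a"))
```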
In contrast, long lags - for example, if the student has not submitted any of the work after several deadlines have passed and that then triggers a phone call from an advisor who wants the student to make an office visit - are not really about learning. They are about (non) participation and then providing remediation for that. Participation analytics perhaps is not a jazzy label, but it would be a more accurate description of the use of data in this case. Further, to the extent that the meeting with the advisor produces a change in the student's behavior thereafter, it should be evident that there is a degree of coercion involved in getting that behavioral change. The student has to submit to authority. If in retrospect the student agrees that the authority was in the right, then this bit of coercion is beneficial. That, however, should not be assumed. I will discuss this more in another question. Here let's note that there is no coercion entailed in the feedback triggered in the self-test, though there is some coercion in getting the student to initiate the self-test to begin with. This issue of when and where coercion is appropriate in instruction is something that needs to be considered further.
Q2: What is the nature of the information on which feedback is based?
Commentary: Typing into a search box illustrates something about what the person is thinking. Enough of that sort of thing and you can get a good sense of where that person is coming from. If the task is for the student to write a short paper, then the various searches the student does might very well inform how well the student did the homework necessary to write that paper.
In contrast, clicking on a link to a file to download it or to preview it online says essentially nothing about the student's reaction after having seen the file or listened to it. Further, it doesn't say anything about whether the student pays full attention to the content of the file or instead is multiprocessing while supposedly looking at the file.
More generally, the issue is whether we are getting sharp information that brings the picture of the student into fine relief or if we are getting only dull information, which will speak mainly to participation at some level but not to learning.
One further point here is that with dull information it is much easier for the student to game the system. If a few clicks will get the student out of some obligation the student would prefer to avoid, those clicks will be observed but might not signify what they are otherwise intended to indicate.
Q3: Is the sample size adequate to provide useful feedback based on it?
Commentary: I'm again going totally outside teaching and learning to illustrate the issue. I am a regular reader of Thomas Edsall's column in the New York Times. I like the way he polls a variety of experts in the field on a question and uses his column to let them do the talking, either contrasting views when that is the case or talking about the consensus in the event that is reached. Recently Edsall has been on a Donald Trump kick, just as many other columnists have been. In that I'm afraid Edsall has finally reached the slippery slope.
The Trump candidacy may be the electoral version of The Black Swan which, as a graduate school classmate informs me, is a colorful label for a random variable with an underlying distribution that exhibits fat tails, in which case outliers are not at all uncommon and the sample mean can be quite volatile instead of settling down. Consider that on May 11, Edsall posted a piece called How Many People Support Trump but Don't Want to Admit It? That essay gave plausibility to the conclusion that Trump will be the next President. Yet in a piece dated today, called How Long Can the G.O.P. Go?, Edsall has a quite different message. Here he argues that Trump most likely is going down and may bring down other Republican candidates with him.
How can there be two such varied pieces within such a short time span? I don't know. It could be that many undecideds made up their minds in the interim or that some who had been pro-Trump changed their minds. Or it could be a fat-tail problem: the polling samples are mainly noise and are not telling us what is really going on with the electorate. I am not a statistician. But here, even a statistician might not be able to tell. If the underlying model has changed and the statistician doesn't know that, taking a historical approach to the observed data will lead to erroneous conclusions.
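A small simulation makes the fat-tail point concrete. This is only an illustration of the statistical idea, not a model of polling: with a normal distribution the running sample mean settles down as observations accumulate, while with a Cauchy distribution - the textbook fat-tailed case - it never does.

```python
import math
import random

# Illustration of the fat-tail point: the running mean of normal draws settles
# down, while the running mean of Cauchy (fat-tailed) draws stays volatile.
# A statistics illustration only, not a model of election polling.
random.seed(0)

def running_mean(draws):
    total, means = 0.0, []
    for i, x in enumerate(draws, start=1):
        total += x
        means.append(total / i)
    return means

n = 100_000
normal_draws = [random.gauss(0.0, 1.0) for _ in range(n)]
# Standard Cauchy draws via the quantile function tan(pi * (u - 1/2)).
cauchy_draws = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

print("normal, last running means:", [round(m, 3) for m in running_mean(normal_draws)[-3:]])
print("cauchy, last running means:", [round(m, 3) for m in running_mean(cauchy_draws)[-3:]])
```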
Most learning technologists are not statisticians, nor are the bulk of instructors to whom they provide consultation. Some people will utter the mantra that the data always tell the story. No, they don't. Sometimes they do. Other times there is a black swan.
Q4: Do students perceive the instructor (the university) to have the students' interests at heart when recommending some intervention based on a learning analytics approach?
Commentary: In spring 2011 I taught for the first time since I retired. Of the two classes I had then, one was an advanced undergraduate class on Behavioral Economics. I had some issues with that class, so I opted not to teach that particular subject matter again. In spring 2012 I taught a different course, on The Economics of Organizations, which is now the only class I teach. As it turned out, the spring 2012 class size was very small - only 8 students; so we had a lot of discussion. Further, a few of the students had taken the Behavioral Econ class from me the year before. These students were extremely candid. They railed about their education and were quite critical of the place. I had previously heard criticism from students about my own teaching, on occasion, but usually that would amount to my course being too hard or that sometimes I wasn't encouraging enough to a student. I have never been criticized for not caring about the students.
Yet that was the essence of the critique those spring 2012 students were making. The Econ department didn't care about them. There were so many Econ majors (I believe around 850 at the time) and so few people to advise them that they felt they were being treated like a number, not like a human being. This was news to me at the time, but I've been alerted to it ever since. It is why I came up with that example of Amazon and Facebook in the previous section. My attitude there is essentially the same as the attitude these students portrayed to me. And there is guilt by association. If the Econ department didn't care about them, then the U of I as a whole didn't care about them either.
I don't know whether most students on campus come to this view or not, though I suspect it is more pronounced in LAS than it is either in Business or Engineering, since some of this is a resource matter and LAS, which doesn't have a tuition surcharge, is more resource challenged than those other colleges.
Learning analytics is being touted as a way to let data provide answers in resource scarce environments, particularly at large public institutions. But there is an underlying assumption that the students trust the institution to make good interventions on their behalf. That assumption needs to be verified. If it is found wanting, then it may be that learning analytics won't produce the outcomes that people hope it will deliver.
Q5: Is there a political economy reason (i.e., a budget reason) for learning technologists to advance a learning analytics agenda?
Commentary: I'm an economist by training and am comfortable making political economy arguments. Indeed, I will go so far as to say there are always political economy factors to consider in any sort of social intervention. To me that is an entirely uncontroversial assertion. Yet to the non-economist it might seem like a radical proposition. So here I want to say that I've been down this route before and made essentially the same argument in a different context. I will first review that argument. Then I want to try to update it to the present.
Soon after I started this blog, in spring 2005, I wrote a series of posts Why We Need a Course Management System, with Part 2 the particular essay that made the political economy arguments. At the time, my campus had many online learning systems supported at the campus level (with still other systems in the various colleges). The campus was in the process of moving to an enterprise CMS (now I would call it an LMS so as to distinguish from a Content Management System). This was in some sense necessary for scaling reasons. Usage had grown dramatically. But it is conceivable that several of the older systems could have been updated and continued instead of moving to one monolithic system.
The technical issues aside, my political economy argument said the case for many different systems - users pick which they prefer - doesn't work well in a tight budget environment. Further, home-grown systems of this sort are particularly at risk, especially as they age. A larger commercial system could command a certain size budget to support it. The smaller systems, in contrast, could be nickel-and-dimed, and for that reason units were reluctant to claim ownership of such systems. At Illinois there was Campus Gradebook, a stand-alone tool that was a derivative of the Plato System, very popular with instructors who used it. There was also the intelligent quiz tool Mallard, also quite popular with instructors who used it. I was the one who gave the kill order for Campus Gradebook. Mallard lasted longer, but eventually died as well, after I had left the Campus IT organization. These tools did what they did better than the LMS. But they couldn't survive from a resource point of view in a tough climate.
Turning to now, the climate is even tougher financially, and the LMS is pretty much an old technology idea at this point. Further, with the exception of a few tools in the LMS, there are better alternatives out there, particularly for file sharing, communication, and calendaring. So the temptation budget-wise, to cut learning technology as an area, must be pretty large. Yet nobody wants to see their own budgets cut. Instead they want to put forward an argument that in the reinvention of their area they provide an essential function that needs full funding.
Which side of this political economy argument is right? I don't know, but my sense is that the more learning analytics is tied to actual innovation in teaching practice or learner strategies, the more it makes sense to fund the area. If there is stasis on these matters, then to me this starts to look a lot like the arguments I was making 11 years ago. The message here is that the real payoff is not in what the technology can do but in its potential for beneficial impact on patterns of use. I wonder if the field can be sufficiently self-critical in this regard. There is a very strong temptation to play the role of cheerleader. I should add here that, while they are not identical, there are parallels between how learning analytics is considered now for college education and the entire accountability movement in K-12 education. Thinking about the latter gives me the shivers, and that provides a good chunk of the motivation for writing this piece.
* * * * *
Let me wrap up. Particularly on big campuses there is a problem with IT in general that the people in the IT organization talk to each other, and thereby reinforce their own views, but don't talk nearly enough with others, especially those who don't speak geek. As a result the IT area develops its own conception of mission, perhaps based on the language in a fairly abstract campus strategic planning document, rather than determining its mission as part of solving a larger puzzle that emerges via extended conversation with the entire campus community.
Learning technology may have it even harder than IT in general in this regard because there are other campus providers to grapple with - particularly the Center for Teaching folks and folks in the Library - plus each of them may also have issues with too much internal discussion but not enough extended conversation with the entire campus.
These are ongoing concerns, whether in good resource times or bad. Tough times, however, tend to make us all hunker down even more. For the good of the order that hunkering down is the wrong thing to do, but for our own preservation it is perfectly understandable behavior.
When trying to look for universal truths, I find myself going back to the TV show The West Wing (though the show is getting dated now). In a particularly good episode entitled Hartsfield's Landing, President Bartlet tells Sam over a game of chess to "see the whole board." That's the message I'm trying to deliver here.
The first example is Google search, but I will focus on a feature that usually gets very little commentary. Google is the default search engine for me and I make quite a lot of use of it. When I am writing I probably do a search every few minutes or so. I likewise might do a search when I am reading online and something occurs to me to follow up on. I interrupt the reading and search then and there for that something, rather than wait to conclude what I had been reading. Search, in this sense, is an alternative to taking notes on the reading. I almost never take notes and I rarely even bookmark the pages I've searched, relying on the browser history instead to do that for me.
The feature about the Google search that fascinates me is what happens after I type a few letters into the search box. A pull-down menu is generated that offers suggestions about what I am searching for. It is a remarkably good function and gives the impression that Google is reading my mind. For example, just now I've typed the letters "thel" in the search box (without the quotes). The second item on the pull-down menu is Thelonius Monk, the person I was thinking of when I started to type that search. This ability to match the likely search target based on just a few letters of the search offers powerful feedback to the user in part because it is so immediate, it really helps if spelling is an issue in that search, and it encourages repeated use because of its efficacy.
I do not know anything about the algorithm that generates the items on that pull-down list and in particular whether it is based only on the aggregate experience of all users in Google, an incredibly large data set, so that what is being returned in the pull-down list is the most common searches that start with those letters, ordered by their relative frequency, or if my own personal data also matters for what shows up on the list. As it turns out, I listen to Pandora in the browser rather than through a dedicated app (on the phone I use an app) and I have a Thelonius Monk station, though I listen to it infrequently. Does that matter in what Google returns? I don't know. But I did just try the same search at Yahoo and the order of responses in the pull-down menu was different. This doesn't explain why that is, but it does suggest a puzzle that needs some resolution. Regardless of that resolution, I can say that I'm quite happy with the way Google does this. It works well for me.
Let's turn to the second example. When I am looking up a book title or an author, I will typically first search Amazon. Their site is more user friendly than the Campus Library site (which I will use mainly to search for individual articles that like are in some database). Further, I'm typically not trying to get a copy of the book. I'm just looking for some bibliographic information about it, perhaps so I can provide a link in a blog post. Invariably after this search has been completed, the next time I go to Facebook, typically in the sidebar but once in a while even directly in my News feed, there is an ad for said book at the Amazon site.
In this case it has to be my own search behavior in the browser that triggers the ad. This seems remarkably unintelligent to me. Why should I pay attention to the ad when I so recently had been to the Amazon site looking at the page for the book? If I hadn't bought the book the first time around, is it at all likely that the ad will now convince me to go back to the site and make a purchase? Somebody must think so, but I don't get it. At best, it is a heavy handed intervention, demonstrating the interests of Amazon and Facebook, but disregarding my interests as a user. I understand fully that they are both businesses and need to make a buck to continue to operate. But they are both making money hand over fist. They could afford to make a little less if it meant greater user satisfaction. (I may not be the best user as an example here, because I hate to be sold anything and if there is a hint of salesmanship in the process I will find it a turnoff. )
I want to note that the Facebook robot goes to my blog every time I post a Note, which in turn happens because I repost my blog entries to Facebook. So there is a lot of information on me from which to form a profile. But I believe this information is largely discarded because they don't know how to data mine it effectively. The searches at Amazon, in contrast, are data mined to the fullest. Then the action taken based on the data mining is very heavy handed, in my view.
* * * * *
Let's switch gears now and focus on the teaching and learning situation in college, particularly at the undergraduate level. Here are a series of questions informed by the examples above, each followed by a bit of commentary on how to consider the context in which the question is posed.
Q1: What is the lag time between the generation of the data that triggers the feedback and the receipt of the feedback itself?
Commentary: Short lags, as in the Google pull-down list, facilitate learning. So, for example, in a recent post called Feedback Rather Than Assessment I discussed students doing a self-test that is auto-graded. After responding to a particular question the student is told whether the question has been answered correctly. If not, the student gets feedback aimed at helping the student to better understand what the question is asking and how to go about finding the correct answer. That feedback might be based on the prior experience of other students who answered the question in the same way or on how the particular student already answered other questions that are related to the current question. Well done feedback of this sort would most definitely facilitate learning.
In contrast, long lags are not really about learning. Take the case where the student has not submitted any of the work after several deadlines have passed, and that then triggers a phone call from an advisor who wants the student to make an office visit. This is about (non)participation and then providing remediation for that. Participation analytics perhaps is not a jazzy label, but it would be a more accurate description of the use of data in this case. Further, to the extent that the meeting with the advisor produces a change in the student's behavior thereafter, it should be evident that there is a degree of coercion in getting that behavioral change. The student has to submit to authority. If in retrospect the student agrees that the authority was in the right, then this bit of coercion is beneficial. That, however, should not be assumed. I will discuss this more in another question. Here let's note that there is no coercion entailed in the feedback triggered in the self-test, though there is some coercion in getting the student to initiate the self-test to begin with. This issue of when and where coercion is appropriate in instruction is something that needs to be considered further.
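To make the distinction concrete, here is a crude sketch of the long-lag trigger just described. The threshold, dates, and function name are all hypothetical; real early-alert systems would use richer data, but the logic is essentially this rule.

```python
from datetime import date

# Flag a student for advisor outreach once several deadlines have passed
# with nothing submitted. Threshold and dates are placeholders.
MISSED_DEADLINE_THRESHOLD = 3

def needs_advisor_contact(submitted_deadlines, deadlines, today):
    """Count deadlines already past with no submission; flag if too many."""
    missed = sum(1 for d in deadlines if d < today and d not in submitted_deadlines)
    return missed >= MISSED_DEADLINE_THRESHOLD

deadlines = [date(2016, 2, 5), date(2016, 2, 19), date(2016, 3, 4), date(2016, 3, 18)]
print(needs_advisor_contact(set(), deadlines, today=date(2016, 3, 10)))  # True
```

Nothing in this rule looks at what the student understands, which is the point: it measures participation, not learning.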
Q2: What is the nature of the information on which feedback is based?
Commentary: Typing into a search box illustrates something about what the person is thinking. Enough of that sort of thing and you can get a good sense where that person is coming from. If the task is for the student to write a short paper, then the various searches the student does might very well inform how well the student did the homework necessary to write that paper.
In contrast, clicking on a link to a file to download it or to preview it online says essentially nothing about what the student's reaction is after having seen the file or listened to it. Further, it doesn't say anything about whether the student pays full attention to the content of the file or is instead multitasking while supposedly looking at it.
More generally, the issue is whether we are getting sharp information that brings the picture of the student into fine relief or if we are getting only dull information, which will speak mainly to participation at some level but not to learning.
One further point here is that dull information makes it much easier for the student to game the system. If a few clicks will get the student out of some obligation the student would prefer to avoid, those clicks will be observed, but they might not signify what they are intended to indicate.
Q3: Is the sample size adequate to provide useful feedback based on it?
Commentary: I'm again going totally outside teaching and learning to illustrate the issue. I am a regular reader of Thomas Edsall's column in the New York Times. I like the way he polls a variety of experts in the field on a question and uses his column to let them do the talking, either contrasting views when that is the case or talking about the consensus in the event that is reached. Recently Edsall has been on a Donald Trump kick, just as many other columnists have been. In that I'm afraid Edsall has finally reached the slippery slope.
The Trump candidacy may be the electoral version of The Black Swan, which, as a graduate school classmate informs me, is a colorful label for a random variable whose underlying distribution exhibits fat tails, in which case outliers are not at all uncommon and the sample mean can be quite volatile instead of settling down. Consider that on May 11, Edsall posted a piece called How Many People Support Trump but Don't Want to Admit it? That essay gave plausibility to the conclusion that Trump will be the next President. Yet in a piece dated today, called How Long Can the G.O.P. Go?, Edsall has a quite different message. Here Edsall argues that Trump most likely is going down and may bring down other Republican candidates with him.
How can there be two such varied pieces within such a short time span? I don't know. It could be that many undecideds made up their minds in the interim or that some who had been pro-Trump changed their minds. Or it could be a fat-tail problem, with the polling samples mainly noise and not telling us what is really going on with the electorate. I am not a statistician. But here, even a statistician might not be able to tell. If the underlying model has changed and the statistician doesn't know that, taking a historical approach to the observed data will lead to erroneous conclusions.
Most learning technologists are not statisticians, nor are most of the instructors they provide consultation to. Some people will utter the mantra - the data always tell the story. No, they don't. Sometimes they do. Other times there is a black swan.
Q4: Do students perceive the instructor (the university) to have their own interests at heart when recommending some intervention based on a learning analytics approach?
Commentary: In spring 2011 I taught for the first time since I retired. Of the two classes I had then, one was an advanced undergraduate class on Behavioral Economics. I had some issues with that class, so I opted not to teach that particular subject matter again. In spring 2012 I taught a different course, on The Economics of Organizations, which is now the only class I teach. As it turned out, the spring 2012 class size was very small - only 8 students - so we had a lot of discussion. Further, a few of the students had taken the Behavioral Econ class from me the year before. These students were extremely candid. They railed about their education and were quite critical of the place. I had previously heard criticism from students about my own teaching, on occasion, but usually that would amount to my course being too hard or that sometimes I wasn't encouraging enough to a student. I have never been criticized for not caring about the students.
Yet that was the essence of the critique those spring 2012 students were making. The Econ department didn't care about them. There were so many Econ majors (I believe around 850 at the time) and so few people to advise them that they felt they were being treated like a number, not like a human being. This was news to me at the time, but I've been alerted to it ever since. It is why I came up with that example of Amazon and Facebook in the previous section. My attitude there is essentially the same as the attitude these students portrayed to me. And there is guilt by association. If the Econ department didn't care about them, then the U of I as a whole didn't care about them either.
I don't know whether most students on campus come to this view or not, though I suspect it is more pronounced in LAS than it is either in Business or Engineering, since some of this is a resource matter and LAS, which doesn't have a tuition surcharge, is more resource challenged than those other colleges.
Learning analytics is being touted as a way to let data provide answers in resource scarce environments, particularly at large public institutions. But there is an underlying assumption that the students trust the institution to make good interventions on their behalf. That assumption needs to be verified. If it is found wanting, then it may be that learning analytics won't produce the outcomes that people hope it will deliver.
Q5: Is there a political economy reason (i.e., a budget reason) for learning technologists to advance a learning analytics agenda?
Commentary: I'm an economist by training and am comfortable making political economy arguments. Indeed, I will go so far as to say there are always political economy factors to consider in any sort of social intervention. To me that is an entirely uncontroversial assertion. Yet to the non-economist it might seem like a radical proposition. So here I want to say that I've been down this route before and made essentially the same argument in a different context. I will first review that argument. Then I want to try to update it to the present.
Soon after I started this blog, in spring 2005, I wrote a series of posts Why We Need a Course Management System, with Part 2 the particular essay that made the political economy arguments. At the time, my campus had many online learning systems supported at the campus level (with still other systems in the various colleges). The campus was in the process of moving to an enterprise CMS (now I would call it an LMS so as to distinguish from a Content Management System). This was in some sense necessary for scaling reasons. Usage had grown dramatically. But it is conceivable that several of the older systems could have been updated and continued instead of moving to one monolithic system.
Setting the technical issues aside, my political economy argument said the case for many different systems - users pick which they prefer - doesn't work well in a tight budget environment. Further, home-grown systems of this sort are particularly at risk, especially as they age. A larger commercial system could command a certain size budget to support it. The smaller systems, in contrast, could be nickel-and-dimed, and for that reason units were reluctant to claim ownership of such systems. At Illinois there was Campus Gradebook, a stand-alone tool that was a derivative of the Plato System, very popular with instructors who used it. There was also the intelligent quiz tool Mallard, also quite popular with instructors who used it. I was the one who gave the kill order for Campus Gradebook. Mallard lasted longer, but eventually did die as well, after I had left the Campus IT organization. These tools did what they did better than the LMS. But they couldn't survive from a resource point of view in a tough climate.
Turning to now, the climate is even tougher financially, and the LMS is pretty much an old technology idea at this point. Further, with the exception of a few tools in the LMS, there are better alternatives out there, particularly for file sharing, communication, and calendaring. So the temptation, budget-wise, to cut learning technology as an area must be pretty large. Yet nobody wants to see their own budgets cut. Instead they want to put forward an argument that in the reinvention of their area they provide an essential function that needs full funding.
Which side of this political economy argument is right? I don't know, but my sense is that the more learning analytics is tied to actual innovation in teaching practice or learner strategies, the more it makes sense to fund the area. If there is stasis on these matters, then to me this starts to look a lot like the arguments I was making 11 years ago. The message here is that the real payoff is not in what the technology can do but in its potential for beneficial impact on patterns of use. I wonder if the field can be sufficiently self-critical in this regard. There is a very strong temptation to play the role of cheerleader. I should add here that while they are not identical, there are parallels between how learning analytics is considered now for college education and the entire accountability movement in K-12 education. Thinking about the latter gives me the shivers, and that provides a good chunk of the motivation for writing this piece.
* * * * *
Let me wrap up. Particularly on big campuses there is a problem with IT in general that the people in the IT organization talk to each other, and thereby reinforce their own views, but don't talk nearly enough with others, especially those who don't speak geek. As a result the IT area develops its own conception of mission, perhaps based on the language in a fairly abstract campus strategic planning document, rather than determining its mission as part of solving a larger puzzle that emerges via extended conversation with the entire campus community.
Learning technology may have it even harder than IT in general in this regard because there are other campus providers to grapple with - particularly the Center for Teaching folks and folks in the Library - plus each of them may also have issues with too much internal discussion but not enough extended conversation with the entire campus.
These are ongoing concerns, whether in good resource times or bad. Tough times, however, tend to make us all hunker down even more. For the good of the order that hunkering down is the wrong thing to do, but for our own preservation it is perfectly understandable behavior.
When trying to look for universal truths, I find myself going back to the TV show The West Wing (though the show is getting dated now). In a particularly good episode entitled Hartsfield's Landing, President Bartlet tells Sam over a game of chess to "see the whole board." That's the message I'm trying to deliver here.
Thursday, June 16, 2016
The Attraction of Distraction
Reading the news can so abuse
One's sense of balance and taste
Pundits we choose who offer views
Their writing we cut and paste.
A kind of toy that brings no joy
Repeats again and again
For this here boy he must deploy
A different stratagem.
In search of mirth of which there's dearth
In normal conversation
From his wide girth he did unearth
Odd rhyme not straight narration.
What he did find to remain kind
Needs some release of tension
Yet be not blind in the eye's mind
To what folks tend to mention.
Wednesday, June 08, 2016
Feedback Rather Than Assessment
At the end of July it will be six years since I retired and close to ten years since I left my post as Assistant CIO for Educational Technologies for the Campus to move to the College of Business. I am pretty out of it now and unaware of many of the day-to-day things that go on in learning technology, on Campus and in the profession broadly considered. But I remain on a bunch of listservs and once in a while I read what was posted. Last week somebody (whom I'm not citing, because I don't know him and it is not a public list) posted about learning management systems and made reference to this ELI essay about Next Generation Digital Learning Environments. I want to comment on it. But before I do, here are several bits of background to consider.
Even while I had the Campus job I felt some obligation to criticize the profession when I thought it was going off base. So, for example, I wrote this post after the ELI conference in January 2007. This was part of an ongoing conversation with my friends and colleagues and perhaps also with readers of my blog whom I otherwise didn't know. I felt a little bad after writing that post, particularly the stuff near the end, so I wrote another called Learning Technology and 'The Vision Thing' in which I gave my preferred alternative of where the profession should be going. I really didn't expect it to change things, but people need to be aware of more idealistic alternatives to the status quo. Maybe after reading my piece some will consider those alternatives where beforehand they weren't doing so. Such prior thinking is necessary to produce attempts that aim to make matters better. Hence, I saw my role as a prod, to make others in the profession more thoughtful in the way they went about their business. Indeed, this is largely how I see my role as a teacher now.
A couple of years ago, quite a while after I had retired, I did what at the time seemed to me similar, though in retrospect was not, as I hope to illustrate. I saw this video of Jim Groom and Brian Lamb, where they were discussing their essay in Educause Review, Reclaiming Innovation. I agreed with the part of their argument that discussed the tyranny of the LMS. But in discussing the cure I thought they relied too much on other developments in technology and not nearly enough (really not at all) on developments in educational psychology. So, as is my wont, I wrote this rhyme on my blog, a mild critique. One of my friends on Facebook, who is knowledgeable about ed tech, gave it a love. Empowered by that reaction, I emailed it to the authors for their response. They were mildly annoyed. They didn't see it as their job to address this criticism. Doing so was outside their area of expertise. At the time I bit my tongue and thought, oh well. But reflecting on that episode in the process of writing this piece, I realized they were right. While I'm sure we've bumped into each other on the various blogs, we don't really know each other, so there was no extended conversation in which such criticism might be a part. And my suggestions were way too high level to be at all useful. Something far more concrete was needed. When I do discuss my aspirations for changes in the LMS in this piece I will try to do so in a concrete way.
Next, let's turn to Writing Across the Curriculum (WAC) principles. In spring 1996 I attended a three-day workshop led by Gail Hawisher and Paul Prior. It was really excellent. I keep returning to WAC principles when thinking about effective pedagogy. Here are several of the points from it to consider. First, learners need to be able to respond to criticism and comment from instructors, also from peers. There is much learning in providing a good response. So in a WAC course papers entail multiple drafts. The second draft is itself a response to comments received on the first draft. Second, there is a tendency for instructors to write lengthy comments on the final draft, particularly if they give it a comparatively poor grade. The comments then assuage their own guilt feelings but are mostly ignored by the students, who have been stung by the harsh treatment they perceive to have received. Third, there is a tendency for instructor comments to be highly normative and aspirational - offering where they'd like to see the students go with the writing, but not situating those comments in where the students currently are, so even when provided in a draft the student often doesn't understand how to effectively revise the paper. Last, instructors tend to view the chore of responding to student writing as overwhelming. They lack the time to do it well. They then get angry about having to take half measures. That anger can then find its way back into the responses themselves.
I chose in my title feedback rather than response. I'd like to explain why and then consider the difference between the two. Many learning activities are other than producing a full second draft on a paper. But just about all learning activities entail going beyond the initial stab to make further headway. On what basis does the learner do this? The learner reacts to feedback, preferably in a reflective rather than instinctual way. This is just what is meant by the expression learning from mistakes. So response is a subset of feedback. Feedback might be automated, or indirect (e.g., learning from the comments provided on another student's paper), or serendipitous (for example, I might stumble upon an essay written by somebody else on a similar topic but do so only after I've produced my blog post). Ultimately, learners need to develop learning-to-learn skills, part of which is finding and identifying appropriate feedback. This will only happen if learners hunger for getting feedback on their early thinking. They therefore need to develop a taste for it.
I want to turn to a different set of experiences. If you've been around for a while and teach economics or some business discipline you're likely aware of Aplia, an experiment in homework tools and content, originally offered up entirely separate from textbooks. Aplia was the brainchild of the economist Paul Romer, who at the time, like many of us teaching economics, was dejected that the assessments bundled with textbooks were so weak, when there was the potential to do much better. As it turns out, Romer was on campus at Illinois near the time that Aplia was founded and he was aware of my content and use of Mallard. So I had a friendly chat with him then and later interacted a little online with him and some of his staff about early content Aplia was providing. I don't know that I entirely embraced their approach, but I certainly looked on it as a promising development. Alas, in 2007 Aplia was bought out by Cengage. As an economist who used to teach industrial organization, this was certainly not a surprising development. Aplia in its original form was a threat to textbook publishers. Students already at the time weren't reading the textbook and if they didn't need the book to access the assessment content, then textbook sales would plummet. The textbook publishers then (and I believe this is true still now) didn't have a real revenue model based purely on assessments. There was lock-in to the textbook model.
This issue with lock-in needs to be confronted squarely. I, for one, have been vexed by it, as some of the innovations I'd have liked to see and that were clearly possible nonetheless have not emerged beyond the trial balloon stage. For example, more than a decade ago my friend Steve Acker had me write this piece on Dialogic Learning Objects, based on the idea of producing a virtual conversation between the student and the online content and thereby blending presentation and feedback, which really is the natural thing to do. I continue to write content of this sort in Excel for the class I teach on the economics of organizations. But as Michael Feldstein points out, the authoring of such content is arduous. Further, for there to be a functioning market for such content, potential adopters must be able to identify (a) the content is high quality and (b) the content is consistent with the way they teach their course. This sort of verification is also difficult and time consuming to do. The textbook market largely gets around these verification problems by having the nth edition of an already well known textbook in the field or, in the case of a new offering, having it written by a well-known scholar in the field. In either case, the textbook authors themselves are very unlikely to author dialogic content. The publishers don't pay very well for ancillary content to the textbook and, in particular, don't offer royalties for that. So there is tyranny of the status quo and not just with the LMS.
Let me turn to one more set of experiences and then conclude this section of the essay. This is about lessons I learned early when I was in SCALE. Our mantra back then was: it's not the technology, it's how you use it. We championed interesting and clever use, especially since our benefactors, the Alfred P. Sloan Foundation and in particular our grant officer Frank Mayadas, were not interested in software development. I have made some of this sort of use myself. It typically marries a learning idea that comes from outside the technology to some capability that the technology enables, where such marriage is not immediately apparent. So, for example, consider my post on The Grid Question Type in Google Forms. The illustration there is something I learned from Carl Berger. It is called the Participant Perception Indicator, which gives a multi-dimensional look at the participant's understanding of some concept. The PPI provides a very good illustration of how grid questions can be utilized.
This post is far and away the one with the greatest number of hits on my blog. Most of my posts have fewer than 100 hits. This one has over 24,000. And what's most interesting about it is the variety of questions and comments received. People want to tweak the tool or customize it for their own use. In considering this sort of customization, what I've learned over time is the need for a judo approach - which combines understanding what the tool can and can't do with knowledge of the goals in use and then allows jerry-rigging of the initial design to better achieve those goals. Further, I've learned that most users don't have the mindset to perform this sort of jerry-rigging. Innovators and early adopters are different that way. So, in fact, when one considers a Rogers story of diffusion of innovations, what actually diffuses when the innovation is effective is the combination of the technology and the effective use. Then the impact is powerful. When it is only the technology itself that diffuses, the impact may be far less profound. This is especially true with educational technology and in particular with the LMS. Unfortunately, there is an abundance of dull use.
Here is one further point to consider and one way the world is quite different now than when I was running SCALE back in the late 1990s. There is now an abundance of online environments, free to the end user, which might serve as alternatives to the campus-provided environments intended for instruction. That itself is not news. However, the question that doesn't seem to get entertained along with that observation is: where are the innovators and the early adopters? Are they in the LMS because they figured out how to practice their judo in a way to incorporate their own teaching goals and because they are publicly spirited and want to support learning on their campus? Or are they in some of these other environments, because they view the LMS as an impediment and they can exercise more control in the free commercial tools?
I don't know the answer to these questions except in how I myself answer them, though now I no longer consider myself as an innovator or early adopter. I think of the LMS as an impediment. It is too rigid and affirming of the traditional approach. I have been able to implement certain practices by going outside the LMS and by willingly engaging in more course administration than most instructors would put up with. The examples I provide in the next section are based on my own experience. I believe all of this might be done in a redesigned LMS. The real issue is not whether it is possible. The questions are whether it should happen and whether learning technology as a profession should embrace these suggestions.
Finally, I'd like to give a little disclaimer before I get started with that. I know these things are possible because I've tried them and implemented them. There may be other approaches that would have even bigger impact and are also possible. So I don't want to claim that my suggestions are exhaustive. They are sufficient, however, in the sense that the profession doesn't currently seem to be talking about them, which prompts the question: might the profession begin to have those sorts of discussions in the future?
* * * * *
The following graphic offers a simple way to frame the issue. It is Figure 2 from the paper, The Theory Underlying Concept Maps and How to Construct and Use Them.
Preceding this graphic there is a discussion to explain it. This particular point is especially useful to understand.
The authors go on to point out that since there is much variation across learners on both prior preparation and on motivation, it is important not to think of meaningful learning versus rote learning as a binary choice but rather as a continuum between these two poles. I do think the vertical line segment between the two antipodes is correct. Meaningful learning is higher order than rote learning. I belabor that because I want to consider a third category not included in the graphic where the vertical alignment is less sure. These are students who have totally tuned out and don't even go through the motions of rote learning.
Now let's get at the issues. The first is this. Does the LMS exert some influence on the learner's choice of how to learn and, if so, is the bias up or down in that choice? The second is this. How do learning analytics fit in this framework? Is it mainly about getting students who have totally tuned out back into the game, even if that means they are then mainly operating near the rote learning part of the spectrum?
My sense about the answer to the first question is that there is bias and it is downward toward rote learning. There are a lot of other factors in operation here. The LMS is not the only culprit, not by a mile. But the LMS helps to enforce the grades culture, which in turn encourages the students to be rote learners. This issue doesn't get much discussion among learning technologists. It should, in my opinion.
My sense about the answer to the second question is yes, learning analytics coupled with appropriate interventions from instructors and advisers can effectively move students from tuned out to rote learners. But without other changes in place it will not move them up to become meaningful learners. If that is right, is it something the profession should nonetheless champion? To answer that here is a quick aside about the economics of higher education.
Rote learning endures, in large part, because it is an approach that will get students to pass the courses they take. If the pure rote learner always failed, or earned only the lowest possible passing grade, D-, students would have very strong extrinsic incentive to move away from rote learning. The extrinsic incentive is far weaker when rote learning can itself produce high grades. Grade inflation makes grades much less meaningful in communicating to others how much the student has learned. (As George Kuh argued in describing the disengagement compact, grade inflation may be the inevitable consequence when quality of teaching is determined largely by student course evaluations, as is now the common practice.) New graduates are then valued in the labor market at the average learning of all those who graduate from the institution (this is called a pooling equilibrium). So a degree can have value even for a nearly pure rote learner, because the market interprets that student as having learned more than he or she actually did, perhaaps with some bonus points for persisting on through to the degree. This is the behind-the-scenes economic argument for why learning analytics should be encouraged.
However, if there are large swaths of students who are in the tuned out category and if learning analytics does succeed en masse by turning these students into near pure rote learners, as was suggested above, the consequence will be to lower the average learning among graduates and therefore to depreciate the value of the degree. (This is Akerlof's Market for Lemons model applied to Spence's model of Job Market Signaling.) The better prepared students, in response, will look elsewhere to attend college and a vicious cycle might ensue at any place that pursues learning analytics with too much vigor and gusto. This offers some background on why people who think hard about digital learning environments should be asking what they might do to promote meaningful learning. It seems to me it is an important question to ask. My answer, in a nutshell, is provided by the title to this post.
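To put the pooling and lemons logic in one line, here is a stylized statement of it; the notation is my own and not taken from either Akerlof or Spence.

```latex
% Stylized pooling wage: each graduate in the class G is paid the average
% learning a_i of the class. Moving tuned-out students into G as rote
% learners (low a_i) enlarges G but lowers the average, and with it the
% market value of the degree.
w_{\text{pool}} \;=\; \frac{1}{\lvert G \rvert}\sum_{i \in G} a_i
```

The vicious cycle then follows because the better prepared students, who pull the average up, are the ones with the best outside options.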
* * * * *
In this section I want to get at specific modifications to the LMS that I think would be helpful. To me it is useful to divide classes into two categories based on how reliant they are on the LMS and other factors that influence how those courses are taught.
Large classes: These classes make extensive use of the LMS, quite likely rely on the auto grading function in the quiz tool, which is depended on for giving online homework, and are typically quite reliant on a textbook, which is closely followed in the presentation of course content. Depending on the nature of the assessment done with the quiz tool, large classes are more likely to encourage rote learning than are small classes.
Small classes: These classes may use the LMS for some administrative function but are more likely to rely on other online environments for collaboration and other course work. Students may very well engage in projects as an integral part of such courses. Readings might come from multiple sources as might other multimedia course materials.
Let me note that usage of the LMS is critical in considering these categories. Enrollments themselves may encourage a certain type of usage pattern, but just as a low enrollment course can nonetheless be taught as lecture rather than as a seminar, a low enrollment class can rely on auto grading of homework and stick closely to the textbook in its topic coverage.
Let's make one other point before going further. The Large classes are usually taken earlier in the students' time at college. To the extent that students choose how much to commit to meaningful learning and those choices are made, in part, by habits formed in prior classes the students have taken, there can be persistence of the rote learning choice even in environments that aim to encourage meaningful learning.
In this essay there are two suggestions about modification of the LMS meant primarily for the Large class environment and another two suggestions meant mainly for the Small class environment.
First Suggestion: Elevate the importance of the self-test tool so it is on a par with the quiz tool. Allow students to get credit for completing a self-test. In this case completion means ultimately answering all questions correctly, no matter how many tries it takes to do that.
Discussion: I don't know if these terms are used the same from one LMS to another. Here I'm using self-test tool to refer to a quiz where the learner can get immediate feedback after answering an individual question and then can adjust the answer to that question based on that feedback. The pattern is question - response - feedback - revision of response, repeating until the question is answered correctly, then moving on to the next question, and so on until the entire self-test is completed. It is meant as a virtual conversation that has the student learn and produce understanding while at the same time verifying that the student has done the requisite work.
For this to possibly work, it means the feedback must be useful and promote student thinking. It also means that the student can't get to the right answer readily by brute force methods. Simple true-false questions will not satisfy that requirement. The question must be substantially more complex in the scope of possible answers, if not in the difficulty of what it is actually asking. For example, consider matching questions. Matching five alternatives numbered 1 to 5 to five other alternatives lettered a) to e) has 5! = 120 possible ways of matching. With several such questions in the self-test, brute force would be expected to take a long time to complete the assessment, encouraging the student instead to think through to the answer, because that would be faster than brute force.
No partial credit would be allowed. Students who do the self-test and complete it would therefore be encouraged to spend time on task, and the hope is that after doing a few of these the students would get the sense that the homework is there to help them learn the material, not to judge how well they perform. The aim is to make the homework into a learning tool. Of course, whether it is effective or not will depend on how well the content is written. That is true for all online learning materials. The point is that even with good questions, the more typical online quiz enables partial credit and the feedback is only given after the quiz is completed. For a student who feels that enough partial credit has been earned, and so is not willing to retake the quiz (assuming that is even a possibility), the feedback probably won't be effective. This makes the student's orientation much more focused on earning points and much less on producing understanding of the subject matter.
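As a concrete illustration of that question-response-feedback-revision loop with completion-only credit, here is a minimal sketch. The question content, feedback text, and function are invented; any real LMS implementation would differ in the details.

```python
# A sketch of the self-test loop: the student keeps answering until correct,
# gets targeted feedback after each wrong try, and credit depends only on
# completion, not on the number of tries.
question = {
    "prompt": "With inelastic demand, raising price causes total revenue to:",
    "answer": "rise",
    "feedback": {
        "fall": "With inelastic demand, quantity falls proportionally less than price rises.",
        "stay the same": "Revenue is unchanged only at unit elasticity, not with inelastic demand.",
    },
}

def run_item(item, get_response):
    """Keep prompting until the answer is correct; return the number of tries."""
    tries = 0
    while True:
        tries += 1
        response = get_response(item["prompt"]).strip().lower()
        if response == item["answer"]:
            return tries
        print(item["feedback"].get(response, "Not quite; think about how quantity responds to price."))

# Canned responses stand in for a live student: one wrong try, then the right one.
canned = iter(["fall", "rise"])
tries_needed = run_item(question, lambda prompt: next(canned))
print("completed after", tries_needed, "tries; full completion credit awarded")
```

The design choice worth noticing is that the loop records completion, not a score, which is exactly what allows the feedback to do the teaching.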
While on an individual homework the consequence might not be large, if the approach were embraced in many large courses the impact on students potentially could be quite considerable.
Second Suggestion: Students getting credit for a self-test is an example of the receipt function. They get a receipt in the LMS just like anyone gets a receipt after completing an online commercial transaction. The receipt function must also be present for all the other assessment tools, including the survey tool and the assignment dropbox. There must be a ready way for the instructor to offer course points in exchange either for a given receipt or for a set of receipts. An example of the former would be 10 points per receipt. An example of the latter would be: out of 12 possible receipts, 100 points will be given if at least 10 receipts are presented.
Discussion: The receipt function is meant to convey that there is some course work that should be done and credit will be assigned for completing the work, but the work will otherwise not be graded based on quality or correctness. When applied to surveys on course content, this is very much like how clickers are used in class now, except the content surveys can be done ahead of time before class, to facilitate Just In Time Teaching. Further, unlike clickers, the surveys can include a paragraph question so the students can communicate about their reasoning after they have responded to the short answer question. Surveys typically don't allow students to attach files or provide links to online documents or presentations. This is why the receipt function is needed for the assignment dropbox as well. For submissions that are done by receipt, there is an all-or-nothing aspect. Submissions done the usual way allow partial credit. The presence of the receipt conveys that no partial credit will be allowed.
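For concreteness, here is a tiny sketch of the two point schemes mentioned in the suggestion above. The numbers come from that example; the function names are hypothetical.

```python
# Per-receipt credit versus all-or-nothing credit over a set of receipts.
def per_receipt_points(receipts_earned, points_each=10):
    return receipts_earned * points_each

def threshold_points(receipts_earned, needed=10, award=100):
    """All-or-nothing: full credit once the required number of receipts is met."""
    return award if receipts_earned >= needed else 0

print(per_receipt_points(7))   # 70
print(threshold_points(11))    # 100 (out of 12 possible receipts, 10 were needed)
print(threshold_points(9))     # 0
```

The threshold variant builds in a little slack, which matters for the soft deadline discussion later in this piece.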
Let's recognize that with an item that provides a receipt a student can be sandbagging in the submission. In that case the student does enough to generate a receipt, but no more than that. In the current grades culture, sandbagging might be a rational response to some intended learning activity that offers a receipt, because the student cares about the points first and foremost and otherwise wants to conserve time and effort. One should ask first what the culture must be like for the vast majority of students to take the activity seriously and not sandbag. One should then follow up with the question whether it might be possible to move the culture in that direction by broad embrace of the receipt function.
In thinking about these matters I want to note the strong parallel between grades as extrinsic incentive for students and cash payments as extrinsic incentive for those who work for a living. For the latter, it is my strong belief that there are limits to the effectiveness of pay for performance, as I've written about in this post called The Liberal View of Capitalism. In the alternative, people do the work seriously mainly out of a sense of obligation. They also have an expectation in that case that their co-workers and their supervisor will appreciate their efforts and suitable recognition will eventually be provided. Translating this back to the learning setting, the instructor has an obligation to make all receipt generating activity meaningful for the student. Such a perception encourages the student to take the activity seriously. In contrast, if the activity is perceived as busy work, surely that will encourage sandbagging.
There is then a further issue: due to heterogeneity among the students, some might find an activity meaningful while others find the same activity busy work. Let's consider that case for students who vary along the rote-learning to meaningful-learning interval, and let's say it is the meaningful learners who find the activity busy work, because it is too easy for them. Would this doom the use of the receipt generating activity or might it still survive after suitable modification? The answer depends in large part on whether the student's sense of obligation covers only the student's own learning or extends to the learning of fellow students as well. In the latter case the meaningful learner might take the activity seriously for the good of the order. Alternatively, the meaningful learner might accept an exemption from the activity, foregoing the points that would be earned from a submission, in exchange for helping out another student who is struggling with the activity and earning the points that way, even if helping the other student is more time consuming yet not so large an effort that it could be listed as a service activity on the resume. In any event, this sort of additional complication hints at asking how much further the LMS needs to be modified to accommodate it, or whether such accommodations can readily be accomplished by other means. I really don't know. I raise the issue here mainly to argue that there does need to be some experimentation with the receipt function before the learning technology community can come to agreement as to how it should be implemented.
The next two functions are meant for the small class setting but do assume the receipt function is already in place.
Third Suggestion: Enable different forms of grading. In particular allow for portfolio grading wherein many items under receipt receive a single qualitative grade that is based not just on the average quality of the items but also on whether later produced items show higher quality than earlier produced items. In other words, portfolio assessment is meant to track growth in student performance and to communicate the importance of measured growth as a way to indicate that students actually are learning.
Discussion: Portfolio grading is already the norm in certain disciplines, notably those that entail a studio approach, where students produce artifacts as their way of doing their course work. But portfolio grading is entirely alien in other disciplines, such as courses that have problem sets and exams. There each item is evaluated on its merits and not compared or contrasted to any other items the student has produced. The thought here is that small classes should entail at least some amount of students producing artifacts and that production must be meaningful for learning. That should happen in all classes where enrollments are sufficiently low. (Let us leave the question of where to draw the line on class size for another day.)
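Just to make the growth idea concrete, here is a toy numeric stand-in. A real portfolio grade would of course be a qualitative judgment by the instructor; the scores, the weight, and the function below are entirely invented to illustrate rewarding improvement over time rather than only average quality.

```python
# Compare the later half of the portfolio against the earlier half, and blend
# that growth signal with the overall average. Purely illustrative.
def portfolio_grade(scores, growth_weight=0.4):
    """scores: item quality in the order produced, each on a 0-4 scale."""
    if not scores:
        return 0.0
    average = sum(scores) / len(scores)
    half = len(scores) // 2
    if half == 0:
        return round(average, 2)
    early = sum(scores[:half]) / half
    late = sum(scores[half:]) / (len(scores) - half)
    growth = late - early  # positive when later work is stronger
    return round((1 - growth_weight) * average + growth_weight * (average + growth), 2)

print(portfolio_grade([2.0, 2.5, 3.0, 3.5]))  # 3.15, rewarding the upward trajectory
```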
Some faculty will resist this, of course, because providing portfolio assessment in a serious manner is time consuming and because these instructors may have no prior experience in doing so and therefore might not be convinced of the pedagogic value in requiring such work as a significant component on which students will be evaluated. Knowing this, some institutions might not embrace a portfolio grading functionality even if it were present in the LMS and fairly easy for the instructor to use.
Now let's make some counterarguments. Teaching small classes typically involves fewer headaches and more joy for the instructor than teaching larger courses. If the two activities are to balance and give the same amount of teaching credit for instructors, the smaller class really should be taught more intensively. Smaller classes are also better environments for encouraging meaningful learning. If a student writes about a page a week, with these essays meant to tie the subject matter of the course to the student's relevant prior experiences or prior thinking on the matter, that activity would indeed promote meaningful learning. Further it can serve as fodder for in class discussion. In that sense it would be the small class analog to the content surveys that are used for Just In Time Teaching when in the large class setting.
Naturally, the faculty would have to experiment with this for themselves if they are to eventually embrace these counterarguments. Nobody should expect them to accept these arguments at face value without trying them out on their own first. But there is one further point to stress here. The administrative overhead from doing such an experiment should not be an important factor in determining whether the experiment is deemed successful or not. So the function needs to already be in the LMS to facilitate these experiments, before the function is broadly adopted by the faculty.
Last Suggestion: Embrace a soft deadline approach where there is a marked late date that precedes the actual deadline. The normal expectation is that students will complete the work before the marked late date, but in extraordinary circumstances students can turn in work after it. If they don't abuse this privilege they can do so without penalty. A variety of penalty schemes can then be implemented to handle the case where students are chronically late with their work.
Discussion: A functionality of this sort might be implemented in the large class setting, in which case substantial early use of late submissions might trigger some intervention with the student, just as in other learning analytics cases. But that is not the intended purpose of this recommendation. Indeed, it is my view that in the high enrollment setting deadlines need to be hard, so students learn to get the work done ahead of time. That sort of time management skill is critical and should be learned early in the student's time on campus (when the student is apt to be taking many large classes).
Here the reason for soft deadlines is different. It is mainly to allow the students some discretion in their time allocation and to recognize that on occasion other obligations (courses, part-time jobs, extracurricular activities, social obligations, and family obligations) place strong demands on the student, and a mature student will sometimes need to balance these in the way the student sees as most appropriate. The current "solution" to this problem is for students who are under high stress to pull an all-nighter, possibly several in a row. This can lead to depression and impair student performance. The system should help the student manage this in a more sensible way and thereby help the student become more adult in making life decisions. It should be the small classes that are the first to accommodate late submissions, because that will be less disruptive overall.
Soft deadlines in small classes, in other words, are a form of buffer, or insurance, against excessive student time obligations overall.
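A bare-bones sketch of how such a policy might be encoded follows. The dates, the chronic-lateness threshold, and the 20% penalty are all placeholders; they are one possible penalty scheme among the variety mentioned above.

```python
from datetime import date

# Work after the marked late date but before the hard deadline is accepted;
# a penalty applies only to students who are chronically late.
CHRONIC_LATE_LIMIT = 2  # free late passes before any penalty applies

def assess_submission(submitted, marked_late_date, hard_deadline, prior_late_count):
    if submitted <= marked_late_date:
        return "on time", 1.0
    if submitted <= hard_deadline:
        if prior_late_count < CHRONIC_LATE_LIMIT:
            return "late, no penalty", 1.0   # the buffer/insurance case
        return "late, penalized", 0.8
    return "not accepted", 0.0

print(assess_submission(date(2016, 4, 8), date(2016, 4, 6), date(2016, 4, 13), prior_late_count=0))
# ('late, no penalty', 1.0)
```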
There are other possible benefits from soft deadlines. For example, in many places of work where email is used as the distribution vehicle, soft deadlines are the de facto business practice, dictated by the distribution medium. It would help students prepare for that work after graduation to experience soft deadlines while still a student. Further, many students are immature about the relationship between the time put into producing work and the ultimate quality of that work. I have had students tell me that procrastination is actually efficient because they really get cranking when operating near the deadline. Of course, for these students the first draft is also the final version of what they submit. Their immaturity in expressing this view is telling.
Consider how students might learn the value of prior preparation and giving work the time it needs to produce something of high quality. So ask, how would an occasional procrastinator operate under a soft deadline approach? Might that student dissipate the soft deadline buffer unnecessarily, because the student lacks the discipline to save it for when it is really needed? If so, and if the work is subsequently deemed mediocre, wouldn't that undermine the belief that procrastination is efficient? Admittedly several such experiences are likely necessary to come to that conclusion. What of the student who has had at least a few of those? Is this a lesson that can be learned from experience and evidence about perceived quality of work?
My concluding remark in this section is that now the students' internally held beliefs on these matters are largely confounded by the grade culture in which students operate, but which is not replicated in the world of work thereafter. Students need to condition their own expectations about that world of work based on their own experiences as students, to be sure, but student experiences that are more akin to that future world where they will ultimately operate would be preferable to what we have now. Soft deadlines in small classes would facilitate that process.
* * * * *
Whether the suggestions offered up in the previous section actually would lead to improvements in learning I leave for readers to determine. Here I want to conclude with some different issues. These are guided by asking the question, what would it take for the LMS vendors to implement these suggestions in their products? Others may have different ways to address that question. I'm an economist and that informs how I think about these things. The economist's core tool is supply and demand. I'm going to use that here to consider the issue of LMS vendors seriously entertaining these recommendations.
On the supply side, almost all changes in the code of the software should be viewed as modifications in fixed costs. Minor changes of the code constitute small increases in fixed cost. Big rewrites of the code constitute large increases in fixed cost. Small increases in fixed cost require only modest increases in demand to cover them. In contrast, to rationalize large increases in fixed cost, a dramatic increase in demand is required. I am not nearly knowledgeable enough about the software to say which of the suggestions can be done without increasing fixed cost in a big way. But I have deliberately tried to contain myself to what I perceive is do-able. Given that, I would hope none of the recommendations would increase fixed cost drastically. And, after all, the vendors are always changing code to keep their software up to date in subsequent releases. That modernization aspect is built into the business process and should not otherwise necessitate increasing the price of the software to cover the development costs. Only code changes in the software beyond modernization will do that.
It is the demand side that is more perplexing. In the NGDLE paper (page 5) it says:
On the off chance that those 50 thought leaders read my essay, what would they make of it? Would they be appalled? Isn't specific tool modification within the LMS terribly old fashioned and not at all what we think of in considering next generation learning environments? Would it therefore mean the ideas in my essay would be rejected out of hand, never to see the light of day? I'm afraid that's exactly what would happen, absent some other sort of intervention. So consider this.
Imagine a different group composed of 50 faculty members who are dedicated teachers, identified mainly by the fact that they are regular attendees at events put on by the Center for Teaching on their respective campuses. Ask this group to read my essay. Ask them first and foremost not about the recommendations but about the issues those recommendations are meant to address. Do they buy into the distinction between meaningful learners and rote learners, and do they find that too many of their own students operate near the rote learning end of the spectrum?
I'm going to assume here that the group would make that much of an identification. The next question would be to ask them what modifications they make in their own classes to address this issue. Dedicated instructors sharing tips and tricks on this matter would be a very good thing in its own right. Then the final point of discussion for these instructors would be to develop a wish list for the online environments in which they operate that might help them better address these issues.
The results from that faculty group discussion should then be brought to the attention of the learning technology thought leaders for them to reflect on. And the main question to ask here is this. Do the NGDLE recommendations speak at all to the faculty members' concerns? What would then happen if many of the thought leaders concluded that the NGDLE recommendations at best only tangentially addressed these issues?
On page 2 of the NGDLE paper, in the section on The Learning Management System, there is a sentence that probably wasn't intended to be provocative at all but actually is in the context I've just presented.
If the context were one of decrying the lecture, this is not a controversial assertion. But here the context is whether learning technologists need to listen to faculty who care a great deal about their teaching. If the answer is that the learning technologists don't need to listen to these faculty, that would be provocative! It would imply, in particular, that the learning technologists are better arbiters of the learner's needs than these faculty are. Do the thought leaders actually believe this?
Of course, I don't have the evidence from that group of faculty to share. But I have participated in numerous such discussions with faculty over the years at a variety of venues, and my sense of these discussions is that the topics of conversation don't vary that much, with the possible exception that recently more will claim that the problems are getting worse.
Given that, some of these thought leaders might conclude: (a) these are indeed issues that we too should be concerned with and (b) we therefore need to think through what we can do to help address them. If that is the conclusion reached, this essay has hit its aim.
Even while I had the Campus job I felt some obligation to criticize the profession when I thought it was going off base. So, for example, I wrote this post after the ELI conference in January 2007. This was part of an ongoing conversation with my friends and colleagues and perhaps also with readers of my blog whom I otherwise didn't know. I felt a little bad after writing that post, particularly the stuff near the end, so I wrote another called Learning Technology and 'The Vision Thing' in which I gave my preferred alternative of where the profession should be going. I really didn't expect it to change things, but people need to be aware of more idealistic alternatives to the status quo. Maybe after reading my piece some will consider those alternatives where beforehand they weren't doing so. Such prior thinking is necessary to produce attempts that aim to make matters better. Hence, I saw my role as a prod, to make others in the profession more thoughtful in the way they went about their business. Indeed, this is largely how I see my role as a teacher now.
A couple of years ago, quite a while after I had retired, I did what at the time seemed to me was similar though in retrospect was not, as I hope to illustrate. I saw this video of Jim Groom and Brian Lamb, where they were discussing their essay in Educause Review, Reclaiming Innovation. I agreed with the part of their argument that discussed the tyranny of the LMS. But in discussing the cure I thought they relied too much on other developments in technology and not nearly enough (really not at all) on developments in educational psychology. So, as is my wont, I wrote this rhyme on my blog, a mild critique. One of my friends in Facebook, who is knowledgeable about ed tech, gave it a love. Empowered by that reaction I emailed it to the authors for their response. They were mildly annoyed. They didn't see it as their job to address this criticism. Doing so was outside their area of expertise. At the time I bit my tongue and thought, oh well. But reflecting on that episode in the process of writing this piece, I realized they were right. While I'm sure we've bumped into each other on the various blogs, we don't really know each other, so there was no extended conversation in which such criticism might be a part. And my suggestions were way too high level to be at all useful. Something far more concrete was needed. When I do discuss my aspirations for changes in the LMS in this piece I will try to do so in a concrete way.
Next, let's turn to Writing Across the Curriculum (WAC) principles. In spring 1996 I attended a three-day workshop led by Gail Hawisher and Paul Prior. It was really excellent. I keep returning to WAC principles when thinking about effective pedagogy. Here are several of the points from it to consider. First, learners need to be able to respond to criticism and comment from instructors and also from peers. There is much learning in providing a good response. So in a WAC course papers entail multiple drafts. The second draft is itself a response to comments received on the first draft. Second, there is a tendency for instructors to write lengthy comments on the final draft, particularly if they give it a comparatively poor grade. The comments then assuage the instructors' own guilt but are mostly ignored by the students, who have been stung by the harsh treatment they perceive to have received. Third, there is a tendency for instructor comments to be highly normative and aspirational - offering where the instructors would like to see the students go with the writing, but not situating those comments in where the students currently are, so even when comments are provided on a draft the student often doesn't understand how to revise the paper effectively. Last, instructors tend to view the chore of responding to student writing as overwhelming. They lack the time to do it well. They then get angry about having to take half measures. That anger can then find its way back into the responses themselves.
I chose in my title feedback rather than response. I'd like to explain why and then consider the difference between the two. Many learning activities are other than producing a full second draft on a paper. But just about all learning activities entail going beyond the initial stab to make further headway. On what basis does the learner do this? The learner reacts to feedback, preferably in a reflective rather than instinctual way. This is just what is meant by the expression learning from mistakes. So response is a subset of feedback. Feedback might be automated, or indirect (e.g., learning from the comments provided on another student's paper), or serendipitous (for example, I might stumble upon an essay written by somebody else on a similar topic but do so only after I've produced my blog post). Ultimately, learners need to develop learning-to-learn skills, part of which is finding and identifying appropriate feedback. This will only happen if learners hunger for getting feedback on their early thinking. They therefore need to develop a taste for it.
I want to turn to a different set of experiences. If you've been around for a while and teach economics or some business discipline you're likely aware of Aplia, an experiment in homework tools and content, originally offered up entirely separate from textbooks. Aplia was the brainchild of the economist Paul Romer, who at the time, like many of us teaching economics, was dejected that the assessments bundled with textbooks were so weak, when there was the potential to do much better. As it turns out, Romer was on campus at Illinois around the time that Aplia was founded and he was aware of my content and use of Mallard. So I had a friendly chat with him then and later interacted a little online with him and some of his staff about early content Aplia was providing. I don't know that I entirely embraced their approach, but I certainly looked on it as a promising development. Alas, in 2007 Aplia was bought out by Cengage. As an economist who used to teach industrial organization, I did not find this a surprising development. Aplia in its original form was a threat to textbook publishers. Students already at the time weren't reading the textbook, and if they didn't need the book to access the assessment content, then textbook sales would plummet. The textbook publishers then (and I believe this is true still now) didn't have a real revenue model based purely on assessments. There was lock-in to the textbook model.
This issue with lock-in needs to be confronted squarely. I, for one, have been vexed by it, as some of the innovations I'd have liked to see and that were clearly possible nonetheless have not emerged beyond the trial balloon stage. For example, more than a decade ago my friend Steve Acker had me write this piece on Dialogic Learning Objects, based on the idea of producing a virtual conversation between the student and the online content and thereby blending presentation and feedback, which really is the natural thing to do. I continue to write content of this sort in Excel for the class I teach on the economics of organizations. But as Michael Feldstein points out, the authoring of such content is arduous. Further, for there to be a functioning market for such content, potential adopters must be able to identify (a) the content is high quality and (b) the content is consistent with the way they teach their course. This sort of verification is also difficult and time consuming to do. The textbook market largely gets around these verification problems by having the nth edition of an already well known textbook in the field or, in the case of a new offering, having it written by a well-known scholar in the field. In either case, the textbook authors themselves are very unlikely to author dialogic content. The publishers don't pay very well for ancillary content to the textbook and, in particular, don't offer royalties for that. So there is tyranny of the status quo and not just with the LMS.
Let me turn to one more set of experiences and then conclude this section of the essay. This is about lessons I learned early when I was in SCALE. Our mantra back then was: it's not the technology, it's how you use it. We championed interesting and clever use, especially since our benefactors, the Alfred P. Sloan Foundation and in particular our grant officer Frank Mayadas, were not interested in software development. I have engaged in some of this sort of use myself. It typically marries a learning idea that comes from outside the technology to some capability that the technology enables, where such marriage is not immediately apparent. So, for example, consider my post on The Grid Question Type in Google Forms. The illustration there is something I learned from Carl Berger. It is called the Participant Perception Indicator, which gives a multidimensional look at the participant's understanding of some concept. The PPI provides a very good illustration of how grid questions can be utilized.
This post is far and away the one with the greatest number of hits on my blog. Most of my posts have fewer than 100 hits. This one has over 24,000. And what's most interesting about it is the variety of questions and comments received. People want to tweak the tool or customize it for their own use. In considering this sort of customization, what I've learned over time is the need for a judo approach - which combines understanding what the tool can and can't do with knowledge of the goals in use and then allows jerry-rigging of the initial design to better achieve those goals. Further, I've learned that most users don't have the mindset to perform this sort of jerry-rigging. Innovators and early adopters are different that way. So, in fact, when one considers a Rogers story of diffusion of innovations, what actually diffuses when the innovation is effective is the combination of the technology and the effective use. Then the impact is powerful. When it is only the technology itself that diffuses, the impact may be far less profound. This is especially true with educational technology and in particular with the LMS. Unfortunately, there is an abundance of dull use.
Here is one further point to consider and one way the world is quite different now than when I was running SCALE back in the late 1990s. There is now an abundance of online environments, free to the end user, which might serve as alternatives to the campus-provided environments intended for instruction. That itself is not news. However, the question that doesn't seem to get entertained along with that observation is: where are the innovators and the early adopters? Are they in the LMS because they figured out how to practice their judo in a way to incorporate their own teaching goals and because they are publicly spirited and want to support learning on their campus? Or are they in some of these other environments, because they view the LMS as an impediment and they can exercise more control in the free commercial tools?
I don't know the answer to these questions except in how I myself answer them, though now I no longer consider myself as an innovator or early adopter. I think of the LMS as an impediment. It is too rigid and affirming of the traditional approach. I have been able to implement certain practices by going outside the LMS and by willingly engaging in more course administration than most instructors would put up with. The examples I provide in the next section are based on my own experience. I believe all of this might be done in a redesigned LMS. The real issue is not whether it is possible. The questions are whether it should happen and whether learning technology as a profession should embrace these suggestions.
Finally, I'd like to give a little disclaimer before I get started with that. I know these things are possible because I've tried them and implemented them. There may be other approaches that would have even bigger impact and are also possible. So I don't want to claim that my suggestions are exhaustive. They are sufficient, however, for making the point that the profession doesn't currently seem to be talking about them, and thus for asking: might the profession begin to have that sort of discussion in the future?
* * * * *
The following graphic offers a simple way to frame the issue. It is Figure 2 from the paper, The Theory Underlying Concept Maps and How to Construct and Use Them.
3. The learner must choose to learn meaningfully. The one condition over which the teacher or mentor has only indirect control is the motivation of students to choose to learn by attempting to incorporate new meanings into their prior knowledge, rather than simply memorizing concept definitions or propositional statements or computational procedures. The indirect control over this choice is primarily in instructional strategies used and the evaluation strategies used......
The authors go on to point out that since there is much variation across learners on both prior preparation and on motivation, it is important not to think of meaningful learning versus rote learning as a binary choice but rather as a continuum between these two poles. I do think the vertical line segment between the two antipodes is correct. Meaningful learning is higher order than rote learning. I belabor that because I want to consider a third category not included in the graphic where the vertical alignment is less sure. These are students who have totally tuned out and don't even go through the motions of rote learning.
Now let's get at the issues. The first is this. Does the LMS exert some influence on the learner's choice of how to learn and, if so, is the bias up or down in that choice? The second is this. How do learning analytics fit in this framework? Is it mainly about getting students who have totally tuned out back into the game, even if that means they are then mainly operating near the rote learning part of the spectrum?
My sense about the answer to the first question is that there is bias and it is downward toward rote learning. There are a lot of other factors in operation here. The LMS is not the only culprit, not by a mile. But the LMS helps to enforce the grades culture, which in turn encourages the students to be rote learners. This issue doesn't get much discussion among learning technologists. It should, in my opinion.
My sense about the answer to the second question is yes, learning analytics coupled with appropriate interventions from instructors and advisers can effectively move students from tuned out to rote learners. But without other changes in place it will not move them up to become meaningful learners. If that is right, is it something the profession should nonetheless champion? To answer that here is a quick aside about the economics of higher education.
Rote learning endures, in large part, because it is an approach that will get students to pass the courses they take. If the pure rote learner always failed or earned only the lowest possible passing grade, D-, students would have very strong extrinsic incentive to move away from rote learning. The extrinsic incentive is far weaker when rote learning can produce high grades in itself. Grade inflation means that grades communicate much less to others about how much the student has actually learned. (As George Kuh argued in describing the disengagement compact, grade inflation may be the inevitable consequence when quality of teaching is determined largely by student course evaluations, as is now the common practice.) New graduates then are valued in the labor market at the average learning of all those who graduate from the institution (this is called a pooling equilibrium). So a degree can have value even for a learner who is a nearly pure rote learner, because the market interprets that student as having learned more than he or she actually did, as well as perhaps giving some bonus points for persisting on through to the degree. This is the behind-the-scenes economic argument for why learning analytics should be encouraged.
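To make the pooling idea concrete, here is a toy numeric sketch in Python. All of the numbers, and the function name, are invented purely for illustration; the point is only that when employers value every graduate at the pool average, the near-pure rote learner gets credited with more learning than actually occurred.

```python
# Toy illustration of a pooling equilibrium: employers cannot observe
# individual learning, so every graduate is valued at the pool average.
# All numbers are hypothetical.

def pooling_value(learning_values):
    """Market value assigned to each graduate = average learning in the pool."""
    return sum(learning_values) / len(learning_values)

pool = [100] * 40 + [60] * 40   # 40 meaningful learners, 40 near-rote learners
print(pooling_value(pool))      # 80.0 -- the rote learner is valued above the 60 actually learned
```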
However, if there are large swaths of students who are in the tuned out category and if learning analytics does succeed en masse by turning these students into near pure rote learners, as was suggested above, the consequence will be to lower the average learning among graduates and therefore to depreciate the value of the degree. (This is Akerlof's Market for Lemons model applied to Spence's model of Job Market Signaling.) The better prepared students, in response, will look elsewhere to attend college and a vicious cycle might ensue at any place that pursues learning analytics with too much vigor and gusto. This offers some background on why people who think hard about digital learning environments should be asking what they might do to promote meaningful learning. It seems to me it is an important question to ask. My answer, in a nutshell, is provided by the title to this post.
* * * * *
In this section I want to get at specific modifications to the LMS that I think would be helpful. To me it is useful to divide classes into two categories based on how reliant they are on the LMS and other factors that influence how those courses are taught.
Large classes: These classes make extensive use of the LMS, quite likely rely on the auto-grading function in the quiz tool for online homework, and are typically quite reliant on a textbook, which is closely followed in the presentation of course content. Depending on the nature of the assessment done with the quiz tool, large classes are more likely to encourage rote learning than are small classes.
Small classes: These classes may use the LMS for some administrative function but are more likely to rely on other online environments for collaboration and other course work. Students may very well engage in projects as an integral part of such courses. Readings might come from multiple sources as might other multimedia course materials.
Let me note that usage of the LMS is critical in considering these categories. Enrollments themselves may encourage a certain type of usage pattern, but just as a low enrollment course can nonetheless be taught as lecture rather than as a seminar, a low enrollment class can rely on auto grading of homework and stick closely to the textbook in its topic coverage.
Let's make one other point before going further. The Large classes are usually taken earlier in the students' time at college. To the extent that students choose how much to commit to meaningful learning and those choices are made, in part, by habits formed in prior classes the students have taken, there can be persistence of the rote learning choice even in environments that aim to encourage meaningful learning.
In this essay there are two suggestions about modification of the LMS meant primarily for the Large class environment and another two suggestions meant mainly for the Small class environment.
First Suggestion: Elevate the importance of the self-test tool so it is on a par with the quiz tool. Allow students to get credit for completing a self-test. In this case completion means ultimately answering all questions correctly, no matter how many tries it takes to do that.
Discussion: I don't know if these terms are used the same from one LMS to another. Here I'm using self-test tool to refer to a quiz where the learner can get immediate feedback after answering an individual question and then can adjust the answer to that question based on that feedback. The pattern of question - response - feedback - revised response, ultimately getting the question right and then moving on to the next question, repeats until the entire self-test is completed. It is meant as a virtual conversation that aims to have the student learn and produce understanding while at the same time verifying that the student has done the requisite work.
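As a rough sketch of that interaction pattern, here is what the loop might look like. The function names, the question content, and the data layout are my own invention, not any particular LMS's tool; the essential feature is that the student cannot move on until the current question is answered correctly, and each miss draws targeted feedback.

```python
# Minimal sketch of a self-test loop: the student keeps revising an answer
# to the current question, guided by feedback, until it is correct.
# Question and feedback content here are hypothetical placeholders.

questions = [
    {
        "prompt": "If demand rises and supply is unchanged, the equilibrium price...",
        "answer": "rises",
        "feedback": {
            "falls": "Think about buyers competing for the same quantity.",
            "stays the same": "Would sellers leave price unchanged if buyers want more at that price?",
        },
    },
]

def run_self_test(questions, get_response):
    """Walk through each question; finish only when every question has been answered correctly."""
    for q in questions:
        while True:
            response = get_response(q["prompt"]).strip().lower()
            if response == q["answer"]:
                break  # correct, so move on to the next question
            # Immediate, targeted feedback; a generic nudge if the miss wasn't anticipated.
            print(q["feedback"].get(response, "Not quite -- reconsider and try again."))
    return True  # completion, not partial credit, is what earns the receipt

# Interactive usage would be something like: run_self_test(questions, get_response=input)
```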
For this to work, the feedback must be useful and promote student thinking. It also means that the student can't get to the right answer readily by brute force methods. Simple true-false questions will not satisfy that requirement. The question must be substantially more complex in the scope of possible answers, if not in the difficulty of what it is actually asking. For example, consider matching questions. Matching five alternatives numbered 1 to 5 with five other alternatives lettered a) to e) admits 120 possible arrangements. With several such questions in the self-test, brute force would be expected to take a long time to complete the assessment, encouraging the student instead to think through to the answer because that would be faster than brute force.
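The arithmetic behind that claim is easy to verify. A small sketch, where the expected-attempts figure assumes a student who guesses blindly but never repeats an arrangement already tried:

```python
import math
from fractions import Fraction

# A matching question pairing items 1-5 with choices a)-e) has 5! possible arrangements.
arrangements = math.factorial(5)
print(arrangements)  # 120

# Guessing blindly without repeating an arrangement, the expected number of
# attempts to hit the one correct matching is (n + 1) / 2.
expected_attempts = Fraction(arrangements + 1, 2)
print(expected_attempts)  # 121/2, roughly 60 attempts for a single question
```

With three or four such questions on a self-test, thinking the matches through is clearly the faster path.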
No partial credit would be allowed. Students who do the self-test and complete it would therefore be encouraged to spend time on task, and the hope is that after doing a few of these the students would get the sense that the homework is there to help them learn the material, not to judge how well they perform. The aim is to make the homework into a learning tool. Of course, whether it is effective or not will depend on how well the content is written. That is true for all online learning materials. The point is that even with good questions, the more typical online quiz enables partial credit and the feedback is only given after the quiz is completed. For a student who feels that enough partial credit has been earned, and so is not willing to retake the quiz (assuming that is even a possibility), the feedback probably won't be effective. This makes the student's orientation much more focused on earning points and much less on producing understanding of the subject matter.
While on an individual homework the consequence might not be large, if the approach were embraced in many large courses the impact on students potentially could be quite considerable.
Second Suggestion: Students getting credit for a self-test is an example of the receipt function. They get a receipt in the LMS just like anyone gets a receipt after completing an online commercial transaction. The receipt function must also be present for all the other assessment tools including the survey tool and the assignment dropbox. There must be a ready way for the instructor to offer course points in exchange either for a given receipt or for a set of receipts. An example of the former would be 10 points per receipt. An example of the latter would be: out of 12 possible receipts, 100 points will be given if 10 receipts are presented.
Discussion: The receipt function is meant to convey that there is some course work that should be done and credit will be assigned for completing the work, but the work will otherwise not be graded based on quality or correctness. When applied to surveys on course content, this is very much like how clickers are used in class now, except the content surveys can be done ahead of time before class, to facilitate Just In Time Teaching. Further, unlike clickers, the surveys can include a paragraph question so the students can communicate about their reasoning after they have responded to the short answer question. Surveys typically don't allow students to attach files or provide links to online documents or presentations. This is why the receipt function is needed for the assignment dropbox as well. For submissions that are done by receipt, there is an all or nothing aspect. For submissions done the usual way, partial credit is allowed. The presence of the receipt conveys that no partial credit will be allowed.
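For concreteness, here is a minimal sketch of the two point-for-receipt schemes mentioned above. The function names and the particular thresholds are mine, offered only to illustrate how simple the bookkeeping could be.

```python
# Two illustrative ways an instructor might convert receipts into course points.

def points_per_receipt(receipts_earned, points_each=10):
    """Scheme 1: a flat number of points for every receipt presented."""
    return receipts_earned * points_each

def points_for_receipt_set(receipts_earned, required=10, points_if_met=100):
    """Scheme 2: all-or-nothing credit for presenting enough of the possible receipts."""
    return points_if_met if receipts_earned >= required else 0

print(points_per_receipt(7))         # 70
print(points_for_receipt_set(11))    # 100 (11 of the 12 possible receipts, threshold of 10)
print(points_for_receipt_set(8))     # 0
```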
Let's recognize that with an item that provides a receipt a student can be sandbagging in the submission. In that case the student does enough to generate a receipt, but no more than that. In the current grades culture, sandbagging might be a rational response to some intended learning activity that offers a receipt, because the student cares about the points first and foremost and otherwise wants to conserve time and effort. One should ask first what the culture must be like for the vast majority of students to take the activity seriously and not sandbag. One should then follow up with the question whether it might be possible to move the culture in that direction by broad embrace of the receipt function.
In thinking about these matters I want to note the strong parallel between grades as extrinsic incentive for students and cash payments as extrinsic incentive for those who work for a living. For the latter, it is my strong belief that there are limits to the effectiveness of pay for performance, as I've written about in this post called The Liberal View of Capitalism. In the alternative, people do the work seriously mainly out of a sense of obligation. They also have an expectation in that case that their co-workers and their supervisor will appreciate their efforts and suitable recognition will eventually be provided. Translating this back to the learning setting, the instructor has an obligation to make all receipt generating activity meaningful for the student. Such a perception encourages the student to take the activity seriously. In contrast, if the activity is perceived as busy work, surely that will encourage sandbagging.
There is then a further issue that, due to heterogeneity of the students, some might find an activity meaningful while others find the same activity busy work. Let's consider that case for students who vary along the rote-learning to meaningful-learning interval and let's say it is the meaningful learners who find the activity busy work, because it is too easy for them. Would this doom the use of the receipt generating activity or might it still survive after suitable modification? The answer to this question depends in large part on whether the student's sense of obligation covers only the student's own learning or if it extends to the learning of fellow students as well. In the latter case the meaningful learner might take the activity seriously for the good of the order. Alternatively, that learner might be willing to receive an exemption from the activity, foregoing the points that a submission would earn, in exchange for helping out another student who is struggling with the activity and earning the points by doing that, even if helping the other student is more time consuming and not substantial enough to be listed as a service activity on a resume. In any event, this sort of additional complication hints at asking how much further the LMS needs to be modified to accommodate it or whether such accommodations can readily be accomplished by other means. I really don't know. I raise the issue here mainly to argue that there does need to be some experimentation with the receipt function before the learning technology community can come to agreement as to how it should be implemented.
The next two functions are meant for the small class setting but do assume the receipt function is already in place.
Third Suggestion: Enable different forms of grading. In particular allow for portfolio grading wherein many items under receipt receive a single qualitative grade that is based not just on the average quality of the items but also on whether later produced items show higher quality than earlier produced items. In other words, portfolio assessment is meant to track growth in student performance and to communicate the importance of measured growth as a way to indicate that students actually are learning.
Discussion: Portfolio grading is already the norm in certain disciplines, notably those that entail a studio approach, where students produce artifacts as their way of doing their course work. But portfolio grading is entirely alien in other disciplines, such as courses that have problem sets and exams. There each item is evaluated on its merits and not compared or contrasted to any other items the student has produced. The thought here is that small classes should entail at least some production of artifacts by students and that such production must be meaningful for learning. That should happen in all classes where enrollments are sufficiently low. (Let us leave the question of where to draw the line on class size for another day.)
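To make the growth idea concrete, here is one deliberately simple way a portfolio score might blend average quality with improvement over time. The weighting scheme and the 0-10 quality scale are assumptions of mine, not an established rubric; real portfolio assessment would surely involve qualitative judgment rather than a formula, but the sketch shows what rewarding measured growth could mean operationally.

```python
# Sketch: score a portfolio on both average quality and growth over time.
# Each item gets a quality rating on a hypothetical 0-10 scale.

def portfolio_score(item_qualities, growth_weight=0.3):
    """Blend the average quality with growth (later-half average minus earlier-half average)."""
    n = len(item_qualities)
    average = sum(item_qualities) / n
    earlier = item_qualities[: n // 2]
    later = item_qualities[n // 2:]
    growth = sum(later) / len(later) - sum(earlier) / len(earlier)
    # Growth counts only when positive; a student who starts strong is not punished.
    return (1 - growth_weight) * average + growth_weight * (average + max(growth, 0))

# Two students with the same average quality but different trajectories:
improving = [4, 5, 6, 8, 9]   # clear growth across the semester
flat      = [7, 6, 7, 6, 6]   # same average, no growth
print(round(portfolio_score(improving), 2))  # the improving student earns the higher score
print(round(portfolio_score(flat), 2))
```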
Some faculty will resist this, of course, because providing portfolio assessment in a serious manner is time consuming and because these instructors may have no prior experience in doing so and therefore might not be convinced of the pedagogic value in requiring such work as a significant component on which students will be evaluated. Knowing this, some institutions might not embrace a portfolio grading functionality even if it were present in the LMS and fairly easy for the instructor to use.
Now let's make some counterarguments. Teaching small classes typically involves fewer headaches and more joy for the instructor than teaching larger courses. If the two activities are to balance and give the same amount of teaching credit for instructors, the smaller class really should be taught more intensively. Smaller classes are also better environments for encouraging meaningful learning. If a student writes about a page a week, with these essays meant to tie the subject matter of the course to the student's relevant prior experiences or prior thinking on the matter, that activity would indeed promote meaningful learning. Further it can serve as fodder for in class discussion. In that sense it would be the small class analog to the content surveys that are used for Just In Time Teaching when in the large class setting.
Naturally, the faculty would have to experiment with this for themselves if they are to eventually embrace these counterarguments. Nobody should expect them to accept these arguments at face value without trying them out on their own first. But there is one further point to stress here. The administrative overhead from doing such an experiment should not be an important factor in determining whether the experiment is deemed successful or not. So the function needs to already be in the LMS to facilitate these experiments, before the function is broadly adopted by the faculty.
Last Suggestion: Embrace a soft deadline approach where there is a marked late date that precedes the deadline and where the normal expectation is that students will complete the work before the marked late date, but in extraordinary circumstances students can turn in work after the marked late date. If they don't abuse this privilege they can do so without penalty. A variety of penalty schemes can then be implemented to handle the case where students are chronically late with their work.
Discussion: A functionality of this sort might be implemented in the large class setting, in which case substantial early use of late submissions might trigger some intervention with the student, just as in other learning analytics cases. But that is not the intended purpose of this recommendation. Indeed, it is my view that in the high enrollment setting deadlines need to be hard, so students learn to get the work done ahead of time. That sort of time management skill is critical and should be learned early in the student's time on campus (when the student is apt to be taking many large classes).
Here the reason for soft deadlines is different. It is there mainly to allow the students some discretion on their time allocation and to recognize that on occasion other obligations (courses, part time jobs, extracurricular activities, social obligations, and family obligations) place strong demands on the student, and a mature student will sometimes need to balance these in a way that the student sees most appropriate. The current "solution" to this problem is for students who are under high stress to pull an all-nighter, possibly several in a row. This can lead to depression and impair student performance. The system should help the student manage this in a more sensible way and thereby help the student become more adult in making life decisions. It should be the small classes that are the first to accommodate on late submissions, because that will be less disruptive overall.
Soft deadlines in small classes, in other words, are a form of buffer or insurance against excessive student time obligations overall.
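One way such a buffer might be encoded is sketched below. The grace allowance, the penalty rate, and the function name are all hypothetical; the point is only that lateness carries no penalty until the privilege starts to be abused.

```python
from datetime import datetime

# Sketch of a soft-deadline policy: work is expected by the marked late date,
# later submissions are accepted, and a penalty applies only to chronic lateness.
# All thresholds and penalty rates here are hypothetical.

def assess_submission(submitted_at, marked_late_date, prior_late_count,
                      free_late_allowance=2, penalty_per_extra_late=0.10):
    """Return (is_late, grade_multiplier) for one submission."""
    is_late = submitted_at > marked_late_date
    if not is_late or prior_late_count < free_late_allowance:
        return is_late, 1.0  # no penalty while the privilege is not abused
    extra = prior_late_count - free_late_allowance + 1
    return is_late, max(0.0, 1.0 - penalty_per_extra_late * extra)

# Example: a third late submission, after two "free" ones, draws a 10% penalty.
print(assess_submission(datetime(2016, 10, 5), datetime(2016, 10, 3), prior_late_count=2))
```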
There are other possible benefits from soft deadlines. For example, in many places of work where email is used as the distribution vehicle, soft deadlines are the de facto business practice, dictated by the distribution medium. It would help students in preparing for that work after graduation to experience soft deadlines while still a student. Further, many students are immature about the relationship between time put into producing work and the ultimate quality of that work. I have had students tell me that procrastination is actually efficient because they really get cranking when operating near the deadline. Of course, for these students their first draft is also the final version of what they submit. Their immaturity in expressing this view is telling.
Consider how students might learn the value of prior preparation and giving work the time it needs to produce something of high quality. So ask, how would an occasional procrastinator operate under a soft deadline approach? Might that student dissipate the soft deadline buffer unnecessarily, because the student lacks the discipline to save it for when it is really needed? If so, and if the work is subsequently deemed mediocre, wouldn't that undermine the belief that procrastination is efficient? Admittedly several such experiences are likely necessary to come to that conclusion. What of the student who has had at least a few of those? Is this a lesson that can be learned from experience and evidence about perceived quality of work?
My concluding remark in this section is that now the students' internally held beliefs on these matters are largely confounded by the grade culture in which students operate, but which is not replicated in the world of work thereafter. Students need to condition their own expectations about that world of work based on their own experiences as students, to be sure, but student experiences that are more akin to that future world where they will ultimately operate would be preferable to what we have now. Soft deadlines in small classes would facilitate that process.
* * * * *
Whether the suggestions offered up in the previous section actually would lead to improvements in learning I leave for readers to determine. Here I want to conclude with some different issues. These are guided by asking the question, what would it take for the LMS vendors to implement these suggestions in their products? Others may have different ways to address that question. I'm an economist and that informs how I think about these things. The economist's core tool is supply and demand. I'm going to use that here to consider the issue of LMS vendors seriously entertaining these recommendations.
On the supply side, almost all changes in the code of the software should be viewed as modifications in fixed costs. Minor changes of the code constitute small increases in fixed cost. Big rewrites of the code constitute large increases in fixed cost. Small increases in fixed cost require only modest increases in demand to cover them. In contrast, to rationalize large increases in fixed cost, a dramatic increase in demand is required. I am not nearly knowledgeable enough about the software to say which of the suggestions can be done without increasing fixed cost in a big way. But I have deliberately tried to contain myself to what I perceive is do-able. Given that, I would hope none of the recommendations would increase fixed cost drastically. And, after all, the vendors are always changing code to keep their software up to date in subsequent releases. That modernization aspect is built into the business process and should not otherwise necessitate increasing the price of the software to cover the development costs. Only code changes in the software beyond modernization will do that.
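The underlying arithmetic is just a break-even calculation. A toy sketch, with every figure invented for illustration:

```python
# Toy break-even sketch: how much extra demand must a code change generate to pay for itself?
# All figures are invented for illustration.

def additional_licenses_needed(added_fixed_cost, margin_per_license):
    """Licenses that must be sold beyond current demand to cover the added fixed cost."""
    return added_fixed_cost / margin_per_license

print(additional_licenses_needed(50_000, margin_per_license=20))      # a minor change: 2,500 more seats
print(additional_licenses_needed(2_000_000, margin_per_license=20))   # a major rewrite: 100,000 more seats
```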
It is the demand side that is more perplexing. In the NGDLE paper (page 5) it says:
At the 2014 EDUCAUSE Annual Conference, 50 thought leaders from the higher education community came together to brainstorm NGDLE functionality. This group identified and prioritized 56 desirable NGDLE functions....
On the off chance that those 50 thought leaders read my essay, what would they make of it? Would they be appalled? Isn't specific tool modification within the LMS terribly old fashioned and not at all what we think of in considering next generation learning environments? Would it therefore mean the ideas in my essay would be rejected out of hand, never to see the light of day? I'm afraid that's exactly what would happen, absent some other sort of intervention. So consider this.
Imagine a different group composed of 50 faculty members who are dedicated teachers, identified mainly by the fact that they are regular attendees at events put on by the Center for Teaching on their respective campuses. Ask this group to read my essay. Ask them first and foremost not about the recommendations but about the issues those recommendations are meant to address. Do they buy into the distinction between meaningful learners and rote learners and do they find that too many of their own students operate near the rote learning end of the spectrum?
I'm going to assume here that the group would make that much of an identification. The next question would be to ask them what modifications they make in their own classes to address this issue. Dedicated instructors sharing tips and tricks on this matter would be a very good thing in its own right. Then, the final point of discussion for these instructors would be to develop a wish list for the online environments in which they operate that might help them better address these issues.
The results from that faculty group discussion should then be brought to the attention of the learning technology thought leaders for them to reflect on. And the main question to ask here is this. Do the NGDLE recommendations speak at all to the faculty members' concerns? What would then happen if many of the thought leaders concluded that the NGDLE recommendations at best only tangentially addressed these issues?
On page 2 of the NGDLE paper, in the section on The Learning Management System, there is a sentence that probably wasn't intended to be provocative at all but actually is in the context I've just presented.
Higher education is moving away from its traditional emphasis on the instructor, however, replacing it with a focus on learning and the learner.
If the context were one of decrying the lecture, this is not a controversial assertion. But here the context is whether learning technologists need to listen to faculty who care a great deal about their teaching. If the answer is that the learning technologists don't need to listen to these faculty, that would be provocative! It would imply, in particular, that the learning technologists are better arbiters of the learner's needs than these faculty are. Do the thought leaders actually believe this?
Of course, I don't have the evidence from that group of faculty to share. But I have participated in numerous such discussions with faculty over the years at a variety of venues, and my sense of these discussions is that the topics of conversation don't vary that much, with the possible exception that recently more will claim that the problems are getting worse.
Given that, some of these thought leaders might conclude (a) that these are indeed issues we too should be concerned with and (b) that we therefore need to think through what we can do to help address these issues. If that is the conclusion reached, this essay will have hit its aim.