Sunday, April 23, 2017

The Economic Value In Freely Available Online Content

Some actions once done can't be undone.  The costs entailed in taking such an action are referred to as sunk costs, which are costs that can't be recovered.  Economists teach that sunk costs don't matter, in the sense that they don't enter into what is termed "producer surplus" and therefore shouldn't impact decisions that aim to make producer surplus as big as possible.  I know this and I used to teach this in intermediate microeconomics when explaining the theory of the firm in the short run. Yet I'm finding that with my own content creation activities I often seem to care whether potential users access the content and also care when they do access the content how they react to it.  Is this narcissism on my part only?  Or is there some way the sunk cost metaphor is not appropriate here and my concern about user access has productive value?  These are some of the questions I want to get at in this essay.

A different set of questions comes from considering how to assess value of a public good when there are no market transactions to observe.  Is it possible to impute value to the public good?  If so, how would one go about doing that?

I want to also briefly try to tie this discussion to the issue of whether college should be free for those who attend.  This gets at who should pay for the public good (and why).  It's one of those things that needs discussion at a first principles level.  We tend to consider actual policy without knowing what first principles to appeal to when evaluating the policy.

Let me begin with a little personal history, which will explain the technology considered here.  In spring 2011 I taught for the first time since I retired the previous summer.  While I taught a regular section of intermediate microeconomics, there was a possibility I might teach a blended section in the future, so I made a lot of online micro-lecture content with that in mind.  I will get back to that in a bit.

I had previously taught intermediate microeconomics ten years earlier.  Many of the lessons I had learned the hard way from teaching it were apparently forgotten.  I made many of the same mistakes I had made when teaching intermediate micro back in the early 1980s, making the course much too difficult for students and discouraging them further in the process.  Many of the students in intermediate micro are Business majors.  For them intermediate micro plays a role similar to the one organic chemistry plays for pre-med students.  In these courses the students tend to be quite mercenary about their grades and not care much about learning the fundamentals of the subject, because they don't see the relevance.  Further, because the Econ major itself is seen largely as a proxy for Business by those students who didn't have the standardized test scores to get into the College of Business, the attitude of most Econ majors toward the class mirrors the views of the Business students.  The course is generally liked by a handful of students, 10% or so, who are either in Engineering or are Business students quite proficient in math.  These students don't find the course overly challenging on the technical front and can then make some sense of what is being taught.  The rest of the students can't see the forest for the trees.

The micro-lectures I made were screen capture movies of Excelets with my voice over.  At the time I was using Jing (now extinct) to make the captures.  Now I would use Snagit for this purpose.  Jing had a 5 minute maximum on the length of these movies.  So each video was short and to the point.  As I briefly described in my previous post, I captioned every one of these videos.  At the time I was on a campus committee for media accessibility, which partly gave me the motivation to do the captioning.  But I also reasoned that for students to get familiar with some of the jargon in the course, it would help to see the words in print while hearing them spoken.  The captioning would facilitate that.  Students were expected to watch the videos before the live class session and then we'd review them in class.  There would be several of these short videos associated with a single live-class session.

The videos themselves were posted to YouTube.  I developed a profarvan channel for that purpose.  I posted the Excel files to Google Docs (now Google Drive).  I was using the public version of Google Docs, but the campus had recently gone to Google for student email and with that the students had access to Google Apps for Education.  I don't know if this is generally true or if it is only true because of how the campus managed authentication to Google Apps, but it turned out that if students were logged into their campus Google, then that blocked access to the public Google for them.  This took a while to figure out and proved somewhat unworkable,  both because students wouldn't remember to log out of their campus account and because at that time instructors did not have access to Google Apps.  So ultimately I had to find a different solution for this.

As I've indicated, my students were quite instrumental about this online content.  They evaluated it by how well they were prepared for the exams.  They found those exams quite tough (the means were below 60%).  Given these results, the micro-lectures weren't regarded very highly.  I found this quite discouraging, on the one hand, but then I was not bothered when I later learned that I would not be teaching the blended learning version of the course, on the other hand.  Indeed, I haven't taught intermediate microeconomics since then.  But I had a pleasant surprise that made me reconsider the value of the micro-lectures, as learning artifacts that might be considered entirely separate from the course I taught.

The Analytics section in YouTube provides a map of where people are when they access the videos.  My map showed global access even though all of my students were in Illinois.  This was clear evidence that people outside the class were watching the videos.  But I couldn't tell how significant that outside-the-class use was.  There were about 8,000 views in total that semester.  I was unable to accurately separate usage between my students and people elsewhere.  The following year, however, convinced me that the external use was substantial.  There were about 33,000 views during the next year, and by then I wasn't teaching the course.  Further, I would occasionally get comments from some of these users.  I surmised that most of the viewers were students taking the course elsewhere.  Some of those comments expressed appreciation for the videos.  Many others, left where I hadn't put in the link to the Excel file, asked for access to it.  Apparently, both the videos and the Excel files were useful to them.

The experience opened my eyes to an audience that previously I didn't know existed.  In my subsequent content generation, I've been keenly aware of that audience, while making content that is also intended to be useful for the class that I am now teaching - The Economics of Organizations.  Let me describe that a little before moving on to the analysis.  The Excelets I mentioned above are reasonably good for considering the geometry of an economic model.  I have also developed homework in Excel that can be auto-graded and has some capability to test students on the algebra entailed in the models.  But Excel is not the right tool for explicating that algebra.  I have subsequently learned how to use PowerPoint for this purpose.

In the old days, when I'd do a two-hour lecture on a chalkboard while teaching graduate microeconomics, the bulk of the class session would be derivations of mathematical results.  I had a reputation then for being thorough, fully working through the thinking needed to reach the conclusion.  My belief then, which I still cling to now, is that you can only understand a result if you can reproduce the derivation, not by having memorized it but by thinking it through.  At the undergraduate level I opted for a similar approach, but modified so the arguments are mainly intuitive, which is why geometry plays such a prominent role.  The math used in the undergraduate class can't go beyond analytic geometry and algebra.  However, undergraduate students in economics are no longer used to mathematical derivations.  (Maybe they weren't used to them 30 years ago either, but then I deluded myself that it was the necessary way to teach intermediate micro.)  If you do them in the live class session these days, after only a couple of minutes the students' eyes glaze over.  As long as you are facing the blackboard, writing out the equation, you can lose yourself in that and ignore whether you are getting through.  But once you turn around to see if the students are paying attention and all you get is that glazed look, it is very discouraging.

So the micro-lectures that are online allow me to satisfy my long standing belief about how the subject should be taught while not ramming it down the students' throats.  It is opt in on their part.  Some students (actually the better ones) have told me they can complete the homework without watching the micro-lectures.  If their goal is to get through the homework and the exams, that's probably right.  I can't test properly on this stuff because the grades would be too grim if I did.  But if the student actually wants to understand what's going on, the micro-lectures are there to provide the derivations.

Let's close this section with a little more on the technology.  I ultimately put my non-video content (PowerPoint, PDF versions of those, and Excel files) in my campus account at Box.com and then made it publicly available.  Box is reasonably functional, giving the user a preview of the file before deciding whether to download or not.  For a PowerPoint or PDF file, the preview may be sufficient.  The Excel files I use for homework have to be downloaded. Box gives reasonably good access stats, if you care to look.  It also sends me an email every time a file is downloaded.  Since I do check email fairly often, this gives me some sense about how much the content is utilized.  I still use YouTube for the videos, which gives its own access stats.  Years ago I experimented with using Archive.org as a different possible host.   I put up a variety of multimedia there, both video and audio only (podcast content).  (Here is an example of a video of mine at archive.org)  In some sense it did a very nice job of rendering the content and allowing for different file formats for download, to accommodate different users.  But it generated essentially none of the serendipitous use that I've described happened for the videos at YouTube.  If users are rather important in determining the value of the online content, and indeed they are, then the creator needs to put the content in a place where the users will find it.

* * * * *  

In much of what I talk about in this section I am going to consider one particular video and the steps needed to assess its value.  The video is called the Shapiro Stiglitz Model.  It derives many of the equations found in the paper Equilibrium Unemployment as a Worker Discipline Device.   As that paper is widely available to students, one should first ask - what incremental value might the video provide?  This is quite similar to the question - if there is a good textbook on the subject, what value do lectures provide?  Can't students teach themselves the subject by reading the textbook?

There are several possible answers to these questions that give lectures value in themselves.  First, lectures may economize on student time.  If students can follow the lecture well, they can penetrate the subject faster than by slogging through the readings as the path to understanding.  Indeed, nowadays in a lecture class most students opt for the lecture as the gateway to the subject, with the textbook used as a reference and thereby playing a supporting role only, probably for this very reason.  Second, students may not have the wherewithal to penetrate the subject matter on their own.  The paper by Shapiro and Stiglitz that I linked to above was published in the American Economic Review and intended for academic economists.  To read a paper like this requires having pencil and paper on the side and deriving all the equations manually.  Students may be ill equipped to do that.  Third, while covering the same subject matter, the lecture may take a different approach than the reading materials to get at that content.  In my video I derive asset equations.  In the paper, the authors write flow versions of these same equations.  I show how to go from the asset equations to their flow versions.  This helps students understand the paper.  Fourth, the lecture approach serves as a model of the type of behavior we'd like to see in the students as they work through the paper.  If that modeling works well, perhaps the students learn to read other papers on their own, without needing a lecture.  Fifth, the lecture can provide context both for the assumptions in the paper and for related research in a similar vein.  And there may be still other reasons why the lecture creates value.

For any of these reasons, however, the lecture only has value if the students access it and make good sense of what is in it.  Like the tree that falls in the forest, there needs to be somebody present for the falling tree to make a noise.  There is no value to the online content as a thing in itself.  The value is in its use.

When making a micro-lecture such as this one, I am uncertain about how potential viewers will use the content.  I design it as best as I can to fit my conception of what would be useful to learners.  However, I don't know if it hit the mark or not.  Users may communicate that to me indirectly via their use.  (This is a revealed preference argument.)  So I learn about the effectiveness of the content by observing the patterns of use.  Comments are a more direct form of communicating this sort of information.  Typically, however, there are only a few comments.  To learn about effectiveness more broadly, one has to consider the usage patterns.

I don't need to be paid for making content like this.  I do get paid for teaching the course and I have ample retirement income.  The making of such content and ensuring it is publicly available is not a requirement of teaching.  It is something I do voluntarily as a complement to teaching.  Knowing that the content is effective encourages me to make more content in a similar style.  Conversely, if the content seems ineffective or if the content isn't accessed at all even when it could be found by potential users, that would discourage me in making additional content.  Why bother in that case?

There is also uncertainty for the potential user who finds the content.  Will the content be helpful or not in increasing their understanding?  Ahead of time, the user can't know the answer to that question.  The answer is revealed only through use.  It is this insight that explains how we might go from use data to imputing value for the online content.

The methodology is similar to the one I described in the paper The Economics of ALN: Some Issues.  In that paper, student time is identified as the primary input in instruction.  The paper then worked through an exercise to impute the value of student time.  Here, the video is freely available to users, so it is non-excludable, and it is non-rival (one person's use in no way impedes the use of others), so it fits the definition of a public good.  It is nonetheless true that the user can't consume the video without watching it.  Watching takes time.  There is thus an implicit time cost in watching.  But further, watching is a voluntary act.  So the act of continued watching must mean the user perceives the value from doing so to be at least as great as the opportunity cost of time.  (If it is strictly greater, the difference represents a surplus that accrues to the user from further watching.)  Once the perceived value of continued watching drops below the opportunity cost of time, the user will stop viewing and do something else.

With this in mind, let's consider the usage data I have.  It's far from perfect, but it is enough to make some intelligent guesses about what is going on.  The two sources of data are Box and YouTube.  I put the data into an Excel Workbook, with the first spreadsheet showing the data from Box and the second showing the data from YouTube.  The Box data show there have been 154 previews and 76 downloads of the PowerPoint file associated with the video.  In order to download the user must preview first.  So the download rate is about 50%.  As there have been 2,745 views of the video, the preview rate is about 5.6%.  In other words, the bulk of the views have no preview associated with them.  There is also location data for the last 50 "events," where an event is either a preview or a download.  I thought that information interesting.  It shows quite a lot of geographic diversity in access, with a distinct international flavor.  It's also a bit ego deflating in that there is only one entry for Urbana, Illinois, meaning only one of my students from the class last fall cared to preview the file.
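Those rates can be checked with a few lines of Python, using just the figures reported above:

```python
# Access statistics reported by Box and YouTube for this video.
previews = 154    # Box previews of the PowerPoint file
downloads = 76    # Box downloads (a download requires a preview first)
views = 2745      # YouTube views of the video

download_rate = downloads / previews   # share of previewers who then download
preview_rate = previews / views        # share of video views with a preview

print(f"download rate: {download_rate:.0%}")   # about 49%
print(f"preview rate: {preview_rate:.1%}")     # about 5.6%
```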

The data from YouTube on the second spreadsheet show the top 10 videos for the year 2016, measured by minutes viewed, from my profarvan channel.  The Shapiro Stiglitz Model came in second in this ranking.  There are also data on the number of views.  I put in the average duration per view, the duration of the full video, and the percentage duration per view.  For none of these videos does the percentage duration per view exceed 50% and in some cases it is quite low.  (These results are again somewhat ego deflating.  As the creator of the content, I'd like to see these numbers much closer to 100%.  It is enlightening, however, to see what the numbers actually look like.)  The percentage duration per view is lowest for the Shapiro Stiglitz Model, at around 15%.  Note that this video is also the longest, at more than a half hour, and regarding duration it is an outlier.  The other videos are substantially shorter.

Why would one watch a half hour lecture for only a minute or two?  One reason is to make a quick determination about whether to watch the rest of the thing, and then to decide it's not worth it.  One might think of fishing as an apt analogy.  The first fish that is caught is very small, so it is thrown back in.  But then maybe the same thing happens for the second and third fish caught, at which point a determination is made that this is not a good spot to fish.  The fisherman then packs up his gear and leaves.  A different reason comes from considering users who make repeat visits.  A return visit is an indicator that the site does provide value for the user.  But the sort of access on a return visit may differ from the first time through.  The second or third time, the user goes to a particular spot in the video, a place of importance or one that was difficult to understand the first time around.  One needs much more granular data about usage than what YouTube provides creators to sort this out well.  Nonetheless, it is helpful to keep these different alternatives in mind.

In absence of those granular data I conjecture a bi-modal distribution of users.  The first group are the serious users, who watch the video in full and may come back for repeat visits.  They account for the bulk of the minutes watched, but are comparatively small when considered from the perspective of views.  The second group are the quick hitters, who watch for a minute or two only and then don't come back.  Their brief viewing is experimental consumption, where the experiment didn't pan out.

The quick hitters will not preview the PowerPoint file, so previewing it is an indicator of being a serious user.  There may be some serious users who don't preview the PowerPoint file, as the video is sufficient for them and they prefer to do their work with pencil and paper at their side rather than on the computer.  Let's now guess our way through some plausible numbers.  If the preview rate is around 50% among serious users, then there are around 300 of them.  Their average viewing time might be something like 35 minutes, including one full viewing and some repeat viewing.  That gives a total of 10,500 minutes, or 175 hours, of viewing by serious users.  The total number of minutes viewed is 12,866, so there are about 2,300 minutes of viewing done by the quick hitters, which works out to about 1 minute per view.  Of course there is a lot of guessing here in coming up with these numbers, but something like this has to be what is going on to explain the use.
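The guesswork above can be laid out as a small calculation.  In the sketch below the totals come from the usage data, while the serious-user count and their average viewing time are the conjectured figures, not measured ones:

```python
# Reported totals from the YouTube usage data.
total_minutes = 12866
total_views = 2745

# Conjectured figures (guesses, as described above).
serious_users = 300        # 154 previews / assumed 50% preview rate among serious users
avg_serious_minutes = 35   # one full viewing plus some repeat viewing

serious_minutes = serious_users * avg_serious_minutes  # 10,500 minutes
quick_minutes = total_minutes - serious_minutes        # about 2,366 minutes
quick_views = total_views - serious_users              # about 2,445 views

print(serious_minutes, quick_minutes)
print(round(quick_minutes / quick_views, 2))   # about 0.97, i.e. roughly a minute per view
```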

The last part of the calculation converts the use by the serious users into value.  For this we need a conversion rate, something that measures the opportunity cost of time for the students.  In that JALN paper of mine from 20 years ago, where I first did this sort of imputation, I used for illustration a wage rate of $6/hour, which was then more than the minimum wage, but not by a lot.  It was what I paid my undergraduate TAs at the time.  There's been some inflation since, but to offset that the labor market is softer now and some of the serious users are from third world countries (though they are surely elite students within those countries).  The Federal minimum wage now is $7.25/hour.  With that as background, I will use $8/hour as the opportunity cost of time for the serious users.  Then 175 hours × $8/hour = $1,400.  This gives a lower bound on the benefit that the serious users get in aggregate from watching the video.  This imputation does not measure the surplus that the serious users accrue from the video.  There is no way to get at that surplus value from these usage data.  In a social welfare calculation that surplus value would need to be accounted for, as would the time value of the quick hitters, which should be subtracted off as a cost with no associated benefit.  While I have no basis for making this claim, I'm going to ignore both of them, as if the two just offset each other.
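The imputation itself is one line of arithmetic.  Here is the same calculation spelled out, with the $8/hour figure being the assumed opportunity cost of time discussed above:

```python
serious_hours = 10500 / 60    # 175 hours of viewing by serious users
wage = 8.00                   # assumed opportunity cost of time, in $/hour

value = serious_hours * wage  # lower bound on the aggregate benefit
print(f"${value:,.0f}")       # $1,400
```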

Since the video will remain online, additional serious use will contribute to further benefit generated.  I no longer can remember how long it took me to make this video and the PowerPoint file, but I'm quite confident that the benefit I calculated in the previous paragraph well exceeds my cost from making the video.  Most other creators will not go to these lengths to make this sort of imputation.  They will quickly eyeball the usage data to make their determination.  But then they likely will respond the same way I have.  They will be happy with their own efforts if those generate substantial serious usage and otherwise feel their efforts have been wasted.

* * * * *

I now want to connect this analysis with the discussion about free college education, but confine myself to a rather abstract view only.

First, let's note that the type of social welfare calculation we went through in the previous section can be done for any good-citizen voluntary act.  It doesn't require online content.  All that is required is for there to be providers and beneficiaries.  Then a cost-benefit calculation can be performed.  Indeed, some might argue that the public interest in college education is to encourage such good-citizen acts, and that the education itself should train students to be good citizens, for this very reason.  Last year on campus there was a talk by Harry Boyte that made just this point.

Second, let's note that an outsider to the system could inject funds into the system to change the fundamentals and thereby impact the social welfare calculation.  For example, the Econ department could fund a student assistant to help me in creating other online content.  If the social value of the additional content exceeds the cost of compensating the student assistant, then that enhances social welfare overall.  So it would be a good thing to do.

However, the Econ department might not care about social welfare overall.  It might care only about whether it can internalize enough of the benefit to cover its costs.  This would happen, for example, if enough of the serious users were students on campus, meaning they were current or future students in my class.  The data I showed are pretty discouraging on this point, since the bulk of the serious use is geographically dispersed.  The Econ department can't internalize this benefit.

This is not an argument that I shouldn't have a student assistant.  It is an argument that the funds for such an assistant need to come from elsewhere, possibly from a grant program that the State Department administers or a grant program that some U.N. agency administers.

Finally, the social benefit might be increased not by me having a student assistant but rather by generating a larger serious user population by increasing enrollments in the appropriate programs at institutions around the globe.  If there are limited funds to invest overall then one should ask which way of funding the activity will increase social welfare the most.  In other words, we should try to separate in our discussions of free college education, and other social policy as well, the underlying objectives from the means of achieving those objectives.  We tend to garble the two and make a muddle of things as a result.

In the case of free college possible objectives include: (1) getting students who otherwise wouldn't go to college at all to attend, (2) getting students who would have gone to a commuter school to attend a residential college instead so the educational experience is enhanced, (3) keeping the debt burden for students and their families manageable, (4) increasing voter participation among students and their families who might receive such benefits, and (5) rewarding voter participation among students and their families who might receive such benefits.  One might design quite different programs depending on how these objectives are prioritized.  Our discussion would be richer if we first considered the potential benefit from each of these.

* * * * *

Let's wrap this up.  Everywhere I turn I'm bombarded with spiels about the virtues of big data and using analytics to make sense of things.  I'm old school enough to believe that data alone are not enough.  You need some framework for understanding what the data might tell you.  I've tried to provide that sort of framework for online instructional content and then brought the data I had available to bear in that setting.  As we continue to debate social policy like free college education, we need frameworks to make sense of the issues and then look at the numbers as applied to those frameworks.  This takes some skill to do and it might take effort by readers to wade through the arguments.  We need to do that and not let our impatience for answers short circuit our thinking about the policies that we should want to see put in place.

Sunday, April 16, 2017

Automated Captioning - Is It Good Enough?

Back in 2011 I made quite a few screen capture movies with my voice over to narrate what's going on and I captioned all of them.  This amounted to:

  1. ripping the audio track out of the video,
  2. running the audio track through Dragon Naturally Speaking to produce a transcript,
  3. editing the transcript (this is the labor intensive part) to put in punctuation, correct errors, and take out the ums and ahs, and
  4. uploading the transcript into YouTube, which had automated the part about putting in the timings needed to convert the transcript to a file that can be used for captioning.
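The first step in that workflow can be scripted.  Here is a minimal sketch that builds the ffmpeg command for ripping the audio track, assuming ffmpeg is available; the file names are placeholders, and steps 2 through 4 went through Dragon Naturally Speaking and YouTube's caption uploader, which are interactive tools rather than scriptable ones:

```python
# Build the ffmpeg command that rips the audio track out of a video
# (step 1 of the workflow above).  File names are placeholders.
def audio_rip_command(video_file, audio_file):
    # -vn drops the video stream; pcm_s16le writes an uncompressed WAV,
    # a format that speech-to-text tools accept readily.
    return ["ffmpeg", "-i", video_file, "-vn",
            "-acodec", "pcm_s16le", audio_file]

# In practice one would run it with:
#   subprocess.run(audio_rip_command("lecture.mp4", "lecture.wav"), check=True)
print(audio_rip_command("lecture.mp4", "lecture.wav"))
```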

Here is an example of content produced this way.  The video will be uninteresting content-wise unless you are a student taking intermediate microeconomics.  So 30 seconds should more than suffice.  You may have to push the cc button to get the captions to appear. 




A couple of days ago Drew posted to a campus list for learning technologists that the campus video service, which utilizes Kaltura, has now enabled automated captioning by request.  (The content owner must initiate this.)  Drew duly told us that the quality of the automated captioning will vary depending on a variety of factors.  So there is an expectation that the content owner will edit the captions, the way I had done.  There are built-in tools in Kaltura for this purpose.

I have my doubts about other faculty members doing such editing, so I wanted to know how good the raw captions are.

I do make screen capture videos now, though not in as great a volume.  The recent ones have not been captioned by me.  I went back to one that I made near the end of the semester last fall.  To my surprise, it had captions.  They are delivered in kind of an eerie way, word by word in a scrolling manner.  Though there is no punctuation, I thought the quality amazingly good.


For the stuff I made back in 2011, I wouldn't have trusted anyone but me to edit the transcript file.  There were some errors that were way off.  In order to correct them you had to know what was right.  But here the errors are really modest, so a student could edit these and I believe do a tolerably good job.

In my class, students who bomb the first exam typically ask if they can do something for extra credit.  I'm not sure of the ethics here, since the extra credit thing should be educative for the student.  But given how scarce instructor time is, if this sort of thing gets the transcript edited, maybe it is the sort of deal with the devil that should happen.

I leave that to others.  I will merely note here that the quality of voice to text is getting good enough that what seemed like an impossible mandate a few years back may be getting doable now.

Wednesday, April 12, 2017

The Excess Supply of PhDs - Thought Leaders versus Intellectuals

Last week there was a piece in the Chronicle by Daniel Drezner about the difference between public intellectuals and thought leaders.  (I'm linking to a pdf of it here, since not everyone might have access to the Chronicle.)  Yesterday in the NY Times, David Brooks' column was about Drezner's new book, The Ideas Industry.  (So the Chronicle piece was Drezner promoting his book.  A week earlier there was a piece by Laura Kipnis in the Chronicle promoting her book, Unwanted Advances.  The Kipnis piece was not behind the Chronicle's pay wall, while Drezner's piece is.  I don't understand the Chronicle's approach here, but as few people access my blog these days, I think a reasonable Fair Use case can be made for my linking to the pdf of his Chronicle article.)  Drezner's hypothesis is thus getting a broad airing.  That hypothesis is that the public intellectual is on the decline and the thought leader is on the rise.  Public intellectuals are ascetics who come to their ideas with dispassion and maintain skepticism both about the ideas of others and about their own thinking.  Thought leaders are advocates who proselytize their own thinking.  In so doing thought leaders may produce fame and fortune for themselves.

One way to understand Drezner's hypothesis is to consider how research universities get funded.  We live in an era where much funding is solicited from rich donors.  It stands to reason that those donors have some expressed interest in the type of research that their money supports.  The work of a thought leader (in the proper disciplinary area and with the appropriate slant on the issues in that field) is more apt to please the donor than the work of a public intellectual, who may produce ideas that raise the donor's ire.  Similarly, to the extent that there are corporate sponsors of university research, they want good ROI.  They hunger for new ideas from which they can make a handsome profit.  They are far less interested in new ideas as things in themselves, even if those ideas might eventually provide the fodder for other ideas that they can profit from down the road.

This gives an external driver theory, based on the rise of private funding of the research activity.  I think there is also an internal driver theory.  This is based on two different ideas.  One is that departments which teach large numbers of undergraduates in lecture courses need TAs to run discussion sections.  This creates a demand for graduate students as inexpensive teaching labor, quite apart from how those same graduate students will fare after they get their PhDs.  The other idea is that research faculty prefer to teach graduate classes as compared to undergraduate classes.  The former complements their research.  The latter is service work only.  In order to satisfy this research faculty preference, there must be enough graduate students around so that the demand for graduate classes in a given discipline matches the faculty desire to offer those classes.  I taught graduate classes in Economics through the mid 1990s.  At the time the rule was you could get credit for teaching a graduate course as long as enrollment was 5 or greater.  A class of 2 or 3 would have to be offered as an independent study and wouldn't give the instructor a teaching credit.  We had a two-course load per semester then and the norm was one graduate course and one undergraduate course per term.

I should point out here that even if the TA function is absolutely necessary, having graduate students to staff that function is not.  Indeed, as a freshman at MIT in spring 1973, taking the second semester course in physics on electricity and magnetism, I had a professor as my TA/instructor.  That faculty run discussion sections instead of graduate students, or that faculty teach small class versions of a course that had previously been offered in large lecture mode, is surely possible.  Most faculty, however, would prefer to teach upper level undergraduate courses or to teach graduate courses exclusively.  We have a tendency to think something is necessary because that's the way we've always done things.

There are similar issues in STEM disciplines, particularly in those where the research is done in a laboratory.  The director of the lab needs staff to do the work.  These staff members are typically graduate students or postdocs.  They are not fellow faculty members.  Getting grants provides a measure of faculty productivity.  Working under the grant of another faculty member may not count for much, particularly if the other faculty member is not a big shot.

The undergraduate teaching, graduate teaching, and research demands for graduate students are myopic, in the sense of being in the here and now.  Those demands exist in a universe that largely is unconcerned about what the graduate students will end up doing after they have earned their PhDs.  The graduate students themselves, however, are greatly concerned with that matter, perhaps not when they enter the doctoral program and are young and idealistic, but surely at the dissertation stage when they want to graduate, not just from their academic studies but also from the frugal graduate student existence they have been living.  

If graduate student interests were put front and center, and if the number of academic jobs that graduate students could obtain is in decline, then either fewer graduate students would be admitted to PhD programs or there must be nonacademic jobs that new PhDs in the field deem worthy of all the effort they put in during graduate school.

In the Humanities there has been chronic excess supply of new PhDs for quite some time.  Rationally, a person would enter a doctoral program in the humanities only if that person believed he or she was much better than his or her classmates, and so thought there was a reasonable chance to land one of the few academic jobs still out there, or had already come to terms with holding a paying job for which the PhD provided no preparation (driving a cab or teaching junior high school, for example) while working on the manuscript at night, during so-called leisure time.  Otherwise, it would seem a certain amount of zealotry is required to persist under these sorts of conditions.  Many have noted the consequences of this zealotry, most recently here, a snippet from which is below.

Rather, a kind of intellectual intolerance, a political one-sidedness, that is the antithesis of what universities should stand for. It manifests itself in many ways: in the intellectual monocultures that have taken over certain disciplines; in the demands to disinvite speakers and outlaw groups whose views we find offensive; in constant calls for the university itself to take political stands. We decry certain news outlets as echo chambers, while we fail to notice the echo chamber we’ve built around ourselves.

What I've not yet seen others do is connect this situation of chronic excess supply of PhDs in these fields to the increase in zealotry.  There appears to be a self-reinforcing feedback loop between the two, at least as I perceive it from afar.  This gives one driver.

Monday, April 10, 2017

Socialism Reconsidered - Part 4 - Thoughts on Income Redistribution

Looking for a silver lining in the dark cloud (in this case, the dark cloud is the debacle in the House with the failed plan to replace the Affordable Care Act), one thing that the process made evident is that there are strong elements of income redistribution in the ACA.  Subsidies are provided for those who purchase their health insurance in one of the exchanges.  Those subsidies are funded by a tax on wealthy taxpayers.  That so many have come to benefit from the coverage they get through the ACA and to appreciate that benefit, in spite of the harsh rhetoric against Obamacare, suggests (at least to me) that one might consider other social policies, not in the healthcare arena, that are just as beneficial and also entail elements of income redistribution.  How intensive should such policies be?  What determines this?  Are there limits to income redistribution?  If so, what is the source of those limits - politics, economics, ethics, or perhaps something else?  Those are the sort of questions I want to take up here.

Usually in our discourse we associate income redistribution with liberalism - Social Security, Medicare, and ACA were all introduced under Presidents who were Democrats, with Congress controlled by the Democrats as well.   So one way to think about the questions here is to ask what it would take to restore such control, say in the election of 2020.  However, a different way to consider this is to ask whether some income redistribution currently not underway would be endorsed even by conservatives.  While our national politics now seems broken, there was certainly a lot of rhetoric around a national infrastructure plan after the election in November.  If implemented, such a program would provide many construction jobs, boosting the demand for blue collar workers. This, in turn, would have substantial income redistribution consequences.

Part of what needs to be thought through regards who pays the tax and whether that is done willingly, as a matter of social obligation, or instead is coerced even while being openly resisted.  Social Security, for example, is mainly income redistribution across generations of people, from (younger) current workers to (older) retirees.  As long as this appears stable and not a Ponzi scheme, current workers should contribute willingly, as they will eventually become recipients when they retire.  Paul Samuelson's paper An Exact Consumption-Loan Model.... gives the economic foundations behind this idea.  At issue now is whether demographic changes have rendered the situation unstable.  Why contribute while working if you will not be a beneficiary later?  (I should add that as a retiree within the State of Illinois, where the retirement system was designed as a substitute for Social Security, this is a big-time issue for the State, which carries a very large debt now.)  A sense that the situation is unstable (most people don't have the wherewithal to do the math needed to consider whether shoring up the system is feasible, so they rely on the hearsay of others) would then make a current worker reluctant to contribute.

This gets me to a different type of income redistribution, you might call it Robin Hood income redistribution, where the rich contribute and the poor are the beneficiaries, with no reciprocation at all.  In prior essays in this series I've referred to a Rawls Veil-of-Ignorance approach.  If ahead of time you don't know whether you will be rich or poor, how much income redistribution would you agree is needed to make the system as a whole work?  (For those who haven't read Rawls, he focused on the welfare of the worst off in society to measure how well we are doing socially.)  Related to this is an after-the-fact sense among those who are rich.  Is there a feeling of social obligation to contribute, simply as part of being a good citizen?  If there are such feelings of social obligation, is that in itself sufficient to remedy the various problems that inequality engenders?  In other words, does individual initiative suffice or is government necessary to channel those feelings of social obligation to productive use?

In trying to consider these questions, I think it useful to look at a parallel activity of social obligation, namely voting. People should be aware of the paradox of voting, which argues that voting in any election with a large number of voters is irrational, as the vote of any one person almost certainly doesn't matter to the outcome, while there are attendant costs to the activity implicit in the time it takes to vote.  With charity there is something similar.  An individual's contribution to a charity that has a broad constituency hardly matters to the well being of that constituency.  One can have charitable giving that does matter, by having it targeted narrowly.  But this puts the giver in the position of King Solomon, having to determine a worthy recipient.  Most people, and I count myself in this category, don't want to make such a choice.  This limits the extent to which they will give to charity, even if they do have a social conscience.

Partly for this reason, those among the left who think there should be much more income redistribution than there is at present focus on the benefits to the recipients and don't seem to care whether the donors give willingly or under coercion.  I haven't heard this expression in a while, but when I was a teen and on into my early twenties many a sentence began with "When the revolution comes..."  This thinking of income redistribution as a revolutionary activity is Marxist in conception.  It may be appealing from a sense of social justice.  But one wonders whether it can be expected to happen via our normal democratic processes.

Voter participation is quite low in our country.  Poor people vote less than everyone else.  A true populism that emerges from universal voting participation might be sufficient to produce a Marxist conception of economic justice, but we are nowhere close to that now. As things currently stand, money matters a lot in elections.  The monied interests that actively want to resist Robin Hood policies are winning the political game as of now, evidenced by the Republican control of Congress and the White House, as well as their control of many Governorships and State Houses.

For this reason it seems sensible to try to identify those who are well off who do have a social conscience, make an appeal to them to join cause with the rest of the population, and develop an electoral strategy and a set of policies around income redistribution that are consistent with this approach.

There are obvious political risks in doing so, as being open about raising taxes on upscale Democrats might drive such voters into the Republican fold.  However, given the present situation where the Democrats are in the minority, it seems to me this is a risk worth taking. To date, there has been little leadership on this point. Sometimes leadership requires delivering unpleasant but necessary messages.  So it seems to me that those who want to see more income redistribution should be focusing much of the message on the upscale people who will pay more in tax and make the case to them, with a lot of talk about the additional obligations they need to assume for the good of the order.  The rest of this piece tries to do just that.

* * * * *

Many of these ideas are not new for me.  I have been cooking on them for some time, driven to consider these issues by the ascendancy of the Tea Party.  I found the "taxed enough already" message offensive.  I needed to think through an alternative.  I started to do this in writing in April 2011.  I began writing a variety of posts, the very first of which was this one, a riff on a well-known Henny Youngman joke, called Raise My Taxes --- PLEASE!  That was followed by a more serious post where I gave my then-current ideas about what should happen to taxation.

As I was recently retired when I wrote that piece and my mom had outlived the estate meant to support her home care, many of the suggestions addressed those particular concerns.  But I did offer up some other ideas which I still subscribe to.  One of those is that income tax should be based not just on the previous year's income, but rather on an average over the last several years.  This gets a little closer to taxing wealth (a stock) rather than taxing income (a flow).  Further, to the extent that there are variations in income that are not predictable, some of that variation will be taken out of the equation in determining the amount of tax owed.  This other suggestion is most germane to the current discussion.  The benchmark income level in the recommendation, $100,000, comes from the income distribution tables for the year 2009.

(F) Raise marginal tax rates gradually for all households starting at the 80th percentile, $100,000 a year, in such a manner that reaching the 98th percentile you have the Obama proposal to eliminate the Bush cuts. That is the burden of tax increases should be much more broad than is being proposed at present. The message needs to be shared burden, not punitive on the rich.

Note that with this suggestion, the tax increase is entirely separated from its possible use.  This is how many people will evaluate it: whether it is a tolerable burden or too severe.  We have this peculiarity in our public policy that we do not try to allocate the existing taxes we pay to their various uses, so in general people don't expect such a linkage.  But for a fundamentally new program that has a redistributive element, the incremental taxes needed to pay for the program have to be identified.  If the voters are to approve of the new policy, they presumably compare their incremental tax to their perceived social benefit.  This is not a good way to get at income redistribution, especially when done program by program, with the programs proposed sequentially.  The voter wants to know the bottom line for the entire package of programs to be offered.  If the bottom line is acceptable, then the voter expects the programs in aggregate to be designed to balance with the incremental tax revenue collected.

I will return to this suggestion after I give a fuller accounting of my efforts to discuss income redistribution.  On tax day in 2011 I started a Facebook Group called For A More Compassionate and Saner America.  It is dormant now (so even if the ideas of the group are appealing there is no reason to join it at this point).  Back then I was more naïve and idealistic about what such a group might accomplish.  This is from the description.

The primary goals of the group are to restore responsibility and rationality in American politics. The immediate pressing issue is taxation.

By joining the group members who are in the upper tax brackets indicate their willingness to restore the tax rates which existed under President Clinton.

While the group did have interesting discussions for a while, it soon plateaued in its membership.  I am not entirely sure why, but a few months after the group started Occupy Wall Street began.  Occupy was much more visible and garnered a lot of attention.  Given that, For a More Compassionate and Saner America seemed superfluous to me.  Occupy gave us the language of the 1% and the 99%.  On the one hand, that is helpful, as there is terrible income inequality in the country.  On the other hand, it ignores that income inequality has also increased between the 50th percentile and the 99th percentile.  To see this, consider the following table, which I showed in Part 2.  The data comes from Census table H-1 for All Races.  I differenced the numbers in adjacent columns, which is what is in blue.  Those numbers measure the quintile "width."

[Table: household income quintile upper limits, with the differenced quintile widths in blue, from Census table H-1]

Consider the fourth quintile, which includes the 60th percentile through the 80th percentile.  Quintile width rose over the period from 2010 through 2015.  Total growth is more than 15%.  Inflation over the same period was less than 10%.  So the quintile got wider in real terms, meaning the 80th percentile folks got further away from the 60th percentile folks in income (and therefore further away from the median).  You can do the same sort of thing for the interval from the 90th percentile to the 95th percentile.  Its width grew even faster, more than 20% over that period.
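The deflating step in that arithmetic can be sketched as follows.  The widths and the CPI figure below are made-up placeholders for illustration, not the actual Census H-1 values.

```python
# Hypothetical nominal quintile widths (upper limit minus lower limit of
# the band) for 2010 and 2015; NOT the actual Census H-1 numbers.
width_2010 = 38_000
width_2015 = 44_000
cpi_growth = 0.08  # rough stand-in for cumulative CPI growth 2010-2015

# Nominal growth of the width, then deflated by the price-level change.
nominal_growth = width_2015 / width_2010 - 1
real_growth = (1 + nominal_growth) / (1 + cpi_growth) - 1

print(f"nominal growth {nominal_growth:.1%}, real growth {real_growth:.1%}")
# → nominal growth 15.8%, real growth 7.2%
```

The point of the division (rather than simple subtraction of 8% from 15.8%) is that deflating compounds correctly, though with growth rates this small the two give nearly the same answer.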

So Occupy made what I believe was an error: it did not require greater sacrifice from the "professional class," which I define as those people with income at the 80th percentile or above but below the upper boundary that defines the 1%.  (This page does this with Adjusted Gross Income rather than with Income, so it excludes income that might be placed in a tax-deferred saving vehicle.)  Exactly where that upper boundary is I am not sure, but I think it not bad to use the focal point of $500,000 while keeping that line fuzzy.

One big point I want to make here is that social responsibility is an ethical matter, and with such matters there is a tendency to follow the lead set by others, even as we think we are exercising our own acts of conscience.  If it is only the 1% (or even the smaller group of the uber rich, the 0.1%) who are asked to pay, that group will more actively resist suggested tax increases.  If it is the entire professional class and the 1% in addition, then acceptance of the tax increases is more likely, especially if there is the appropriate leadership to promote the idea.

Indeed on how acceptance of these ideas might happen, I wrote the following in a post specifically about salaries in Higher Education, where I described my personal fantasy that well paid people in Higher Education would voluntarily accept compression of their salaries so their pay would be closer to the mean. 

With the underlying salary mechanics understood at this basic level, another part of the fantasy is that a Gladwell-like Tipping Point mechanism emerges in service of the salary compression function idea.  It might begin with other economists, much better able to deal with the real empiricism of the situation than I am, to establish the extent and magnitude of the sector specific rents and the shape of the salary distribution function.  (Usually reported are mean salaries, perhaps sorted by academic rank, but that is really insufficient to understand the issue.  One needs to look at the entire distribution and see how that has changed over time.)  Then journalists and others spread the word about self-regulation as a possible alternative to government interference.  After that the star performers themselves would begin to embrace salary compression as the embodiment of a Ron Hunt, take-one-for-the-team approach to the hyperinflation issue.   Here I'm talking about Nobel Prize winners, MacArthur "genius" Award winners, and other illustrious scholars.  This group would form the vanguard of the movement.   In turn they would convince forward thinking high level administrators - university presidents and chancellors, that salary compression would be good for the entire sector and good for their individual institutions.  With this leadership group on board, faculty governance groups then take up the matter in earnest.  (At Illinois, this is the Faculty Senate.)  They too express their approval.

This sort of mechanism is what I have in mind, considered more broadly to sustain income redistribution in the society as a whole, funded by increases in tax on those households at the upper end of the distribution. 

Let me close this section with the salary compression function itself, which I made for illustration purposes only.  It was the simplest possible function that did what I wanted it to do.  It held harmless people at the low end of the salary distribution.  (The cutoff point, $100,000, was chosen because it is easy to remember.  But note that if in a two earner household at least one of the earners makes $100,000 or more then the household will be in the professional class.)  It respected the pecking order.  Further, it was in accord with the principle of progressive taxation so the percentage decrease in salary after compression was an increasing function of salary prior to compression.  These properties remain desirable in any "tax increase function" that we might come up with. 
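As an illustration of a function with those three properties (hold harmless below the cutoff, preserve the pecking order, progressive percentage cuts), here is one minimal sketch.  The power-law form and the gamma parameter are my own stand-ins, not the function from the original post.

```python
def compress(salary, cutoff=100_000, gamma=0.9):
    """Illustrative salary compression: the identity below the cutoff,
    a concave power law above it.  The function is strictly increasing
    in salary (pecking order preserved), continuous at the cutoff, and
    the percentage cut rises with salary (progressive)."""
    if salary <= cutoff:
        return salary  # hold harmless at the low end
    return cutoff * (salary / cutoff) ** gamma
```

For example, with these parameters a $200,000 salary compresses to about $186,600 (roughly a 6.7% cut) while a $400,000 salary compresses to about $348,200 (roughly a 12.9% cut), so the larger salary takes the larger percentage hit.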

* * * * *   

In this section I want to consider an illustration of such a tax increase function, with the illustration meant to promote further discussion about the ideas, not to be a concrete proposal in itself.  But before doing that let me briefly note that income taxes are a complicated animal.  One source of complication is differences in filing status.  Another complication is given by differences in AGI for people who have the same filing status and before-tax income, say because one household has a 401K plan while the other does not.  Then within a fixed AGI, households will differ both in the number of exemptions they claim and in the deductions they take.  Taxable income is the income left over from AGI after exemptions and deductions have been accounted for.  Within each filing status, there is a tax function of taxable income.  But there is no tax function of before-tax income.  I'm going to ignore that in what follows and act as if there is only one filing status and that before-tax income maps to taxable income in a unique way.  For that reason, when I talk about raising taxes, I don't care whether that occurs by closing loopholes or by raising rates.  In a more nuanced discussion of income taxes those distinctions matter, but I'm going to ignore the nuance here.  Likewise I'm going to ignore demographic characteristics of households - location, age of family members, and health status, just to mention a few key parameters.  These factors do matter in fact.  But bringing them into the discussion now just makes things more complicated than they need to be.

Before writing this piece I spent considerable time trying to see if I could identify some principles that taken together would determine how much more in tax a household should pay.  The underlying thought is that agreeing on the principles would be the hard work.  Once that was done, deriving the tax function from the principles would be a straightforward process.  For example, one might consider some base year where we thought income inequality was not too bad and tax rates were reasonable.  (I'm thinking that 1983 might be such a year.  The first set of tax cuts under Reagan had been put in place by then, but income inequality was still not an issue in the public consciousness at the time.)  Then a tax reform that produced a similar rate structure and similar inequality might be what we're looking for.  But before-tax incomes are definitely more unequal now.  So that would seem to require more income redistribution.  A mere return to tax rates of 1983 would not suffice.  Plus, there are many confounding factors.  To mention just one, interest rates are much lower now.  This impacts the interest deduction for the home mortgage.  Taxable income is higher because interest rates are lower.  Do you try to account for this or ignore it?

Ultimately I decided it was just too hard to appeal to first principles here, meaning I wouldn't be able to write this piece for quite a while.  So instead, I eyeballed the numbers and made up a hypothetical tax increase function off the top of my head.  As my purpose is illustration only, I hope this admission doesn't prevent readers from taking a look at what I produced. 

You can see what I did by downloading the Excel Workbook Income distribution Modified and looking at the spreadsheet called Wikipedia2014.  I copied the table for household income distribution for 2014 that is from Wikipedia, which in turn is from the Census.  That information is in columns A through G of the Excel.  I split the screen for easier viewing.  You can scroll in the lower pane with the two header rows in the upper pane intact.  Let's describe the information that is given in this table.

There are roughly 124.5 million households.  Those households vary in size, they vary in the number of earners, and they vary in the income earned.  If you do this more carefully than I will, all those variables matter.  A single person making $50,000 is not doing too badly, though that household would be below median in income.  A family of eight with $80,000 in income might be struggling to make ends meet.  Noting that, from now on I will focus on household income only.  

Household income is divided into bands of width $5,000 starting at zero and going through $200,000.  There is then one band of width $50,000, which goes from $200,000 to $250,000.  The last band has in essence infinite width.   It goes from $250,000 to whatever the top earning household in the country made (I'm guessing that is more than $1 billion but less than $10 billion).   Unfortunately, there is no band for just the 1%, so this last band includes households in the professional class as well as the 1%.

The table gives the number of households, the percentage of the population, and the mean household income in each band.  That mean has to be greater than the lower endpoint of the band and less than the upper endpoint of the band.  The percentile cumulates the percentages of all bands of lower income.  So median household income (50th percentile) is somewhat below $55,000.  The 80th percentile is just below $115,000.  Thus households at the 80th percentile have more than double median household income.  The 90th percentile is just below $160,000.  Households at the 90th percentile have not quite triple the median household income.

What I did is in columns H through M.  I started the tax increase policy in row 26, with the lowest income in that band being $115,000.  This is slightly above the 80th percentile, but given the nature of the table it is not possible to be more granular than that.  In column H the percentages are cumulated for all bands of higher income and include the percentage in the given band.  In theory, the entry in column D added to the entry in column H should equal 100%.  It's close, but not quite.  I suspect this is due to round-off error with the percentages.  So there is a lack of precision here.  But it is good enough for our purposes.

Given that there already is mean income for each band, column I gives cumulative mean income including the current band as well as all bands with higher income.  What would be nice to have is the tax already paid by these folks.  We don't have that information.  Here let me just say what I have in mind.  For my own household we pay federal income tax, state income tax, and local property tax.  There is also sales tax and various licenses and registrations, but I'm going to ignore those.  If you sum the federal and state income tax along with the property tax, you get a tax burden figure that you can compare to total income.  I'm suggesting that for the professional class and the 1% the tax burden should rise, and I'm giving suggestions as to the amount of the rise, but doing that without knowing the prior burden makes it hard to say whether the increases are too much or not enough.  What I'm hoping for here is that the reader, aware of his or her own income and tax situation, can make some determination about whether the tax increases are reasonable or not.  The mean income and cumulative mean income are intended as help in that determination.

Also, let me give a word of caution that the reader not confuse the tax bracket, which gives the marginal tax rate for the household (the amount of additional tax to be paid if there were an additional dollar of income), with the effective tax rate on taxable income or the average tax rate on AGI.  The tax bracket will be higher than the effective tax rate.  For 2014, the tax brackets can be found here.  My household was in the 28% bracket.  Our effective tax rate on taxable income was 18.25% and our average tax rate on AGI was 17.4%.  Across years, the boundaries of the brackets adjust for inflation.  If your income went up with inflation and all your deductions did likewise, your effective tax rate should not change from one year to the next.  If your income rose faster than inflation and/or your deductions didn't rise as fast as inflation, your taxable income will have grown in real terms.  Then, because the bracket is higher than your effective tax rate, your effective tax rate will be higher this year as compared to last year.
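The gap between the bracket and the effective rate can be made concrete with a small sketch.  The bracket schedule below is an illustrative stand-in with rounded numbers, not the official 2014 table.

```python
# Illustrative bracket schedule: (upper bound of bracket, marginal rate).
# Rounded stand-in numbers, NOT an official IRS table.
BRACKETS = [(18_000, 0.10), (74_000, 0.15), (149_000, 0.25),
            (227_000, 0.28), (float("inf"), 0.33)]

def tax_owed(taxable):
    """Tax computed bracket by bracket: each slice of income is taxed
    at that slice's marginal rate."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable <= lower:
            break
        owed += rate * (min(taxable, upper) - lower)
        lower = upper
    return owed

def effective_rate(taxable):
    """Average (effective) rate: total tax divided by taxable income."""
    return tax_owed(taxable) / taxable
```

With this schedule a household with $150,000 of taxable income sits in the 28% bracket, but because most of its income was taxed in the lower brackets its effective rate comes out around 19%, well below the marginal rate.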

In what follows we are talking about increases in the average tax rate on AGI.  The policy to consider is given in column J and gives the percentage increase in the average tax rate.  The policy is a step function.  There are several adjacent bands that get the same percentage increase.  Then there is a step up so that the next several bands get a higher percentage increase.  The impact of the policy can be seen in columns K and L.  In column K the increase in tax paid is for the lowest income in the band.  In column L the increase in tax paid is for the mean income in the band.  This step function approach to a policy is for simplicity.  The reader should be readily able to figure out what is going on. 

However, I should note that this step function approach violates preserving the pecking order.  (Preserving the pecking order means that if one household has larger before-tax income than another household, the first household should also have larger after-tax income.)  To preserve the pecking order there can't be any discrete jumps; the policy must be a continuous function of income.  So while looking at the policy in the spreadsheet, the reader should imagine that the percentage increase at the lowest income in the band matches the percentage increase at the highest income of the previous band and that the percentage increase rises gradually within the band.  But including that detail in the spreadsheet seemed more effort than it was worth.  One can construct a piece-wise linear and continuous policy that does respect the pecking order and generates the same revenue within each set of bands that constitutes a step.
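One such piece-wise linear, continuous policy can be sketched as follows.  The node values here are hypothetical stand-ins for the spreadsheet's steps, not the actual column J numbers.

```python
# Hypothetical nodes (income, increase in the average tax rate).  The
# increase ramps linearly between nodes, so there are no discrete jumps.
NODES = [(115_000, 0.0), (160_000, 0.0025), (200_000, 0.05), (250_000, 0.125)]

def rate_increase(income):
    """Continuous, piece-wise linear increase in the average tax rate:
    zero up to the first node, the top value beyond the last node, and
    linear interpolation between adjacent nodes in between."""
    if income <= NODES[0][0]:
        return 0.0
    if income >= NODES[-1][0]:
        return NODES[-1][1]
    for (x0, r0), (x1, r1) in zip(NODES, NODES[1:]):
        if income <= x1:
            return r0 + (r1 - r0) * (income - x0) / (x1 - x0)
```

With slopes this gentle, after-tax income, income * (1 - rate_increase(income)), remains increasing in before-tax income, so the pecking order is preserved; a step function with the same node values would violate it at each boundary.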

Now some general comments on the shape of the policy.  The steps are all upward, so in that sense the policy is progressive.  Higher incomes get higher increases in tax within a step.  Then the rate of increase in tax goes up with the next step.  The steps also increase in magnitude.  This fits my sense of taste so I want to explain why I did that.  I'd rather have fewer steps, if possible.  Too many and it gets very complicated.  But the first step is very small, only 0.25%, leading to a small increase in household tax of around $300.  Any policy should start with a very small step.  If there are only a handful of steps in total and the last one is reasonably high (I'm not going to define what that means but I'm referring to the 12.5% increase for the top band) then the steps have to increase in magnitude as we go along.  

The boundaries of the steps were done by eyeballing only.  I knew ahead of time that I wanted the penultimate step to be 5% and the ultimate step to be much higher than that.  While the reader may disagree, I felt it reasonable that a household with $200,000 of income pay an additional $10,000 in tax.  The shapes of the preceding steps were done hastily, just to get something that works.   The one place where this work doesn't agree with my own sensibilities is the 12.5% increase at $250,000, leading to an increase in tax of $31,250.  That seems too large to me.  Indeed, the highest band probably should be divided into two or three bands and a household with income $250,000 should see an average tax increase of 6% or 7% but no more than that.  It is the uber rich who should see the highest step.  But the uber rich are not singled out in this table.

Column M then gives the increase in tax revenue generated by the policy for each band of income.   For incomes below $160,000, the policy generates less than $1 billion per band, chump change by the Everett Dirksen standard.   While the bulk of the tax revenue generated comes from the top band, there is substantial revenue generated by the penultimate band and not inconsequential revenue generated by the bands between $160,000 and $200,000.  

The aggregate generated is in excess of $243 billion, a hefty amount.  Nonetheless it limits how much income redistribution can be done.  One reason for computing the per-household increase in income for households below the median is to serve as a benchmark for other income redistribution programs.  Some low income households will end up receiving more than this amount, perhaps in the form of free college tuition, or as a subsidy on health insurance, or in wages for a construction job that came about from an infrastructure program.  But that means other low income households will not get even that benefit.  So we need to prioritize the recipients of income redistribution (how to do that is something to consider elsewhere, not in this essay) as well as to prioritize which government programs should be undertaken with these revenues.  
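The benchmark is simple back-of-the-envelope arithmetic.  The $243 billion figure comes from the text; the household count is an assumption, roughly the Census figure for the period.

```python
# Back-of-the-envelope arithmetic for the per-household benchmark.
# The household count is an assumption; the revenue total is from the text.

revenue = 243e9          # aggregate revenue generated by the policy
households = 126e6       # assumed total number of U.S. households
below_median = households / 2

per_household = revenue / below_median
print(round(per_household))  # roughly $3,857 per below-median household
```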

* * * * *

In this section I want to take on various caveats and potential criticisms that can be anticipated in advance.  I don't intend to fully resolve these, but I do want to acknowledge the issues so others can reflect on them further.  

Might households or groups take actions to undo the effect of the approach to income redistribution discussed in the previous section?  If so, does that render the approach inert?

One should ask how the household pays for the tax increase.   If the household had been giving substantially to charity and reduces that giving to pay for the tax increase, the upshot is that there wouldn't be any more income redistribution.  Likewise, household members could have been doing a substantial amount of volunteer work that benefits low income people, but after the tax increase they do more paid work instead.  This too would undo the policy.  In contrast, a reduction in consumption purchases or in how much gets socked away in various savings accounts is the type of response that is hoped for.  So one might ask how to get the latter but not the former.  My belief is that if people voted for the income redistribution, that would mean they understood what was expected of them and would behave accordingly.  People who had such a tax increase imposed on them unwillingly would be more prone to undo its effect.

While I haven't said this explicitly, I've been assuming that these tax increases occur at the Federal level.  Might some of the states undo this by lowering tax rates on their wealthy citizens and reducing government services to their poorer citizens?  Just this sort of thing seems to be happening in many Midwestern states now, even without the tax increase at the Federal level.  This points to a need for coordination in electoral strategy between State contests and Federal contests.  If further income redistribution makes sense, it has to be pursued at both levels.   

One other way that the policy might be undone is if high level executives in well-to-do corporations can shift their income, which is taxed heavily, to business earnings that can be shielded from tax, while nonetheless utilizing the income for personal consumption or personal saving.  Of course, some of this is going on already.  The issue here is whether more of it would happen were the proposed household tax increases to go into effect.  Assuming that some more of it would happen, the issue then is whether it would be a big deal or not.  I don't know.  But it does seem clear that before such an income redistribution program is introduced, we need good answers here.  This leads to the next question.  

It is said that corporate America is sitting on something like $2 trillion in liquid assets, a good chunk of which is currently abroad for tax avoidance purposes.  Is that money a potential target for income redistribution?  Put more colloquially, if corporations are individuals, shouldn't they pay their fair share?

I want to respond in two distinct ways to these questions.  First, the idea that social responsibility can be expressed as paying more in taxes should be considered an innovation.  The embrace of this idea can then be thought of as adoption of the innovation.  Like any innovation, adoption will follow a diffusion curve.  Recalling Malcolm Gladwell's book The Tipping Point, which I linked to above, those who want to see substantial income redistribution should want there to be few if any impediments to reaching the tipping point.  Those who actively resist taxation might provide such impediments.  Thus, it makes sense to me to suspend going after corporate wealth until the innovation has diffused substantially.  

Second, the tax policy discussed in the previous section was a guess, nothing more.  I hope it is a good guess.  If it were to be implemented, I'd want to do implementation in two or three stages, where each stage entails a higher step function than in the previous stage.  The idea is to err on the side of making the steps too small.  Let the taxpayers respond that the increase is not overly burdensome and they can do more. That sort of response would aid the diffusion of the idea.  If enough people gave a contrary response that the tax increase is too burdensome, it would end up blocking the idea altogether. The staged approach to implementation would also allow some learning as to what the right tax increase function should be. 

Won't the approach to income redistribution impact the underlying income distribution itself?  If so, might we be shooting ourselves in the foot by implementing redistribution policies for that reason?

There are two different ways that income redistribution might impact the underlying distribution itself.  One is via the Keynesian multiplier.  The other is via incentive effects on income generation.  My own biases are that I am a Keynesian and I think the case for taxes as a (dis)incentive is way overblown.  I will briefly sketch the Keynesian story, since I believe it is valid.  I will not take on the tax as incentive issue here.  Somebody else should make that argument.  

Our economy is in slow growth mode.  It is demand constrained.  A Robin Hood income redistribution raises aggregate demand because the poor person who receives the money is more apt to spend it, while the rich person who paid the tax would more likely have saved it.  The increased aggregate demand boosts GDP.  That boost, in turn, raises aggregate demand further.  It is that second-round effect which is the multiplier.  Under this story, the income distribution is impacted in a beneficial way.
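The Keynesian story above can be made concrete with a small numerical sketch.  The marginal propensities to consume (MPCs) and the size of the transfer here are illustrative assumptions, not estimates.

```python
# A minimal sketch of the Robin Hood multiplier story.
# All numbers are illustrative assumptions.

mpc_poor = 0.9    # recipients spend most of a transfer
mpc_rich = 0.4    # taxpayers would have saved much of it
transfer = 100e9  # $100 billion redistributed, for illustration

# First-round boost to aggregate demand from the redistribution:
first_round = transfer * (mpc_poor - mpc_rich)

# Economy-wide multiplier, using an assumed average MPC for later rounds:
mpc_avg = 0.6
multiplier = 1 / (1 - mpc_avg)

boost_to_gdp = first_round * multiplier
print(boost_to_gdp / 1e9)  # 125.0 (billion dollars)
```

The point of the sketch is only that the transfer itself nets out to zero, yet the difference in spending propensities, amplified by the multiplier, raises GDP.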

* * * * *

Let's wrap this up.  There may be psychological reasons why we focus on the needy when considering income redistribution.  The giver then gets assurance that the gift goes to a worthy cause.  But I believe that a rational approach to income redistribution needs to consider the giver in much more detail and ask what makes giving a willing thing.  Unlike the Libertarian perspective that conceives of government as Leviathan and taxation as an act of coercion, my view is that government is a necessary construct both to provide public goods and to help us engage our social conscience.  We want to help others less well off than ourselves.  Some of the taxes we pay are how we who are fortunate provide assistance to others who are not doing as well.  This view may seem laughably naïve, but it makes sense to me.  I have tried to give it voice in this piece and to sketch how it might be operationalized. But really, I only meant this to start the conversation.  I hope others will chime in to extend and critique what is here in this post.

Wednesday, April 05, 2017

Logrolling, the Repeated Prisoner's Dilemma, and the Gorsuch Nomination

Reading stuff on this and participating in some online conversations on this, I haven't seen the following discussed yet, so I thought I'd get it out there.

The debacle in the House with the Republicans' health plan suggests one of two possibilities.  Either there will be gridlock for the foreseeable future, because the Republicans themselves can't craft legislation that gets enough votes, or there will be a change in approach to break the gridlock that brings in enough Democrats as partners to create a majority.  For the latter, however, the legislation that goes through must represent real compromise.   Is the White House willing to go for that?  I don't know, but let's say it is possible.

Now the game theory part a la Prisoner's Dilemma.  A good strategy is tit-for-tat, which means this time around do what your rival did the last time around.  If the rival cheated the last time, you cheat this time.  If the rival cooperated last time, you cooperate this time.  This simple approach rewards good behavior and punishes bad behavior.
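Tit-for-tat is easy to sketch in code.  The payoff values below are the standard illustrative ones (temptation > reward > punishment > sucker), not anything from the text.

```python
# A minimal sketch of tit-for-tat in a repeated Prisoner's Dilemma.
# "C" is cooperate, "D" is defect; payoff values are illustrative.

PAYOFFS = {  # (my move, rival's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history: list[str]) -> str:
    """Cooperate first; thereafter copy the rival's previous move."""
    return "C" if not history else history[-1]

def play(strategy_a, strategy_b, rounds: int) -> tuple[int, int]:
    hist_a, hist_b = [], []   # each side's record of the *rival's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

always_defect = lambda history: "D"
print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect, 10))  # (9, 14): one sucker payoff, then mutual punishment
```

The two runs show the reward-and-punish logic: against a cooperator, tit-for-tat cooperates forever; against a cheater, it gets burned once and then punishes every round after.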

In politics, tit-for-tat can appear as logrolling.  Reciprocation of favors is how things get done.  When I was first taught about logrolling in social studies during high school, it was presented as sleazy behavior.  That sleaziness is relative to an ideal where each issue gets debated on the merits and the better ideas that emerge from the debate then prevail.  In that ideal world issues aren't linked for strategic reasons.  But that simple world can be gamed for individual or small group advantage.  And it has been gamed - quite a lot, actually.  From the point of view that such gaming is inevitable, logrolling can look better than the alternative.  The alternative is not the ideal (which is not feasible).  The alternative is gridlock.

If the White House actually wants to bring in Democrats as partners to get something done, how does one get there from here?  The above suggests there is a need for some up front show of cooperation on the part of the Republicans.   Yet in the Senate, as distinct from the White House, it seems evident that Gorsuch will now get through but with fewer than 60 votes, perhaps even requiring Vice President Pence to break the tie.

It appears too late at this point to put forward a different candidate, one who is more moderate and hence could garner 60 votes.  But the President proposed Gorsuch before the health care debacle in the House, under the thinking at the time that he could have it all his way.  Surely that thinking no longer holds sway.  Indeed, the President (and/or his close advisers) may now regret having nominated Gorsuch.

If there is such regret it would be evidence that the White House recognizes there is a trade-off here.  The choice is either gridlock for some time to come or pull Gorsuch as a first step towards cooperation.  In saying this, one should recognize a different possibility, bringing the Freedom Caucus on board.   But that bridge was apparently burned by having a rush job on health care.  This other bridge hasn't been burned just yet.

If I were betting I'd put my money on gridlock from here on out.  But until the nuclear option gets exercised and Gorsuch gets approved, one can still hope that the improbable will happen.

Tuesday, March 21, 2017

When We Lost Our Mojo

In my course on The Economics of Organizations I do a section on personal reputations.  As we use student blogs to ready the class for discussion on the issues, it may be interesting to consider the prompt I gave students on this topic.

The topic is personal reputations and their role in influencing behavior. Describe some domain where you have a strong reputation with others (it could be with friends, it could be with your family, it could be at some place you worked, etc). Then discuss how your reputation developed. Consider what you do to keep your reputation intact or enhance it further. Finally, reflect on whether there are occasions where you'd like to stray from the behavior suggested by your reputation and what you do on those occasions. Have you ever "cashed it in" by which I mean you abandon your reputation altogether in favor of some immediate gain?

There is an economics theory of reputations based on play in a repeated Prisoner's Dilemma.  Without getting into technicalities, what the theory shows is that if the future looks sufficiently optimistic then players don't cash it in.  In contrast, a more pessimistic forecast leads to doing what is myopically optimal, disregarding its impact on the future.
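Without getting into the technicalities the paragraph above alludes to, the core condition can be sketched numerically.  This assumes a grim-trigger punishment (once you cash it in, the future reverts to mutual defection) and illustrative payoff values; it is a sketch of the standard textbook condition, not a claim about any particular model.

```python
# Sketch of the "don't cash it in" condition in a repeated Prisoner's
# Dilemma with grim-trigger punishment.  Payoff values are illustrative.

T, R, P = 5, 3, 1   # temptation, reward, punishment payoffs

def keeps_reputation(delta: float) -> bool:
    """Cooperating forever beats defecting once and being punished iff
    R / (1 - delta) >= T + delta * P / (1 - delta)."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

# The threshold discount factor, from rearranging the inequality:
critical_delta = (T - R) / (T - P)
print(critical_delta)            # 0.5
print(keeps_reputation(0.9))     # True: the future weighs heavily, so cooperate
print(keeps_reputation(0.2))     # False: a pessimistic forecast, so cash it in
```

The discount factor plays the role of the optimistic or pessimistic forecast in the text: when the future matters enough, the reputation is maintained.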

I'm now going to switch from the language of economics to the language used in considering diffusion of innovation.  Early adopters of an innovation tend to be optimistic.  They embrace the new thing because of the possibilities it enables.  Those possibilities suffice for them.  High likelihood of success is not necessary.  Their enthusiasm combined with their experimental play with the innovation ends up being the driver of success, as much or more so than the innovation itself.  In this sense, adoption is itself a creative act rather than a mere flip of a switch.  Adoption entails fitting the innovation to purposes that adopters see, which were not apparent to the inventor of the innovation.

I am specifically thinking of teaching online in considering the above, but I think it applies in many other areas as well.  Examples include Muhammad Yunus' development of micro credit and Atul Gawande's championing of hand washing to prevent the spread of infection in hospitals.  In each case, apparently successful adoption encourages spreading the innovation further, till it is embraced broadly.

In contrast, later adopters of an innovation may do so for defensive reasons and/or to create more immediate gain for themselves.  In the online learning arena, one finding from more than ten years ago is that many instructors used the learning management system in a dull way, primarily to share files, and completely eschewed the possibility of experimenting with their teaching with the technology as an enabler of such experimentation.  In the business arena, one might consider subprime lending with essentially no underwriting standards (loans equal to 100% of the value of the house offered to any borrower whatsoever, regardless of ability to pay off the loan).  In retrospect, it is difficult to fathom how this could have occurred.

It is worth noting that Gawande's piece on hand washing appeared at roughly the same time that Countrywide was making all those inappropriate subprime loans.   Indeed, at any one time we are likely to see early adopters doing creative things that improve matters and later adopters of some other innovation creating harm via a cashing-in approach.  For example, the late 1980s to early 1990s can be generally characterized by a sense of optimism for a variety of reasons, one being that the PC revolution was well underway.  Yet during that time my parents, who were retired and then snowbirds living in Florida in the winter and in New York the rest of the time, were scammed by their financial advisor at Prudential Bache.  Less than a decade later, there was the well known Enron debacle, much of which occurred during an even more optimistic period, now referred to as the dot.com bubble, with Enron's bankruptcy an emblem of the bursting of that bubble.

Optimism and pessimism vary over the business cycle - bulls charge while bears shy away.  But what we seem to have now is a very broad malaise, even while the stock market itself has been faring pretty well. 

I won't offer up a precise time for when we lost our mojo, though I suspect in retrospect it will seem earlier than it did in prospect.  I will also try to do this both for me as an individual and for the country as a whole.

I switched careers, from doing economics full time to becoming an administrator in online learning, in that same optimistic period we associate with the dot.com bubble.  I was caught up in the enthusiasm, not for financial reasons, but for the potential that the technology seemed to unleash for learning.  But the job gradually changed, moving from supporting experimentation in a variety of ways to offering production services for ordinary faculty.  In spring/summer 1999, when I became the director of a new campus Center for Educational Technologies, there was still a soft-money organization, SCALE, that was giving us a good chunk of our funding and enabled the experimental approach.  We had a chance to extend SCALE's funding, with the Mellon Foundation replacing the Sloan Foundation as the outside funder.  But that didn't pan out.  Thereafter, our funding was mainly internal.  That had a consequence I didn't fully appreciate at the time but which seems quite clear looking back at it now.

Six years later we were in the midst of the full campus rollout of our enterprise learning management system, which was branded on campus as Illinois Compass.  It was a disaster.   The service had to be taken offline for the better part of a week.  After that we threw money at the problem and did a variety of reorganizations to shore up the service.  Thereafter things were better for Illinois Compass, but I'm somebody who believes you can and should solve problems in prospect.  In this case I was unable to address these matters, and this was to be my big moment.  It was all very deflating.

A year later I had a horrific fall, severing all the tendons on my left leg between the quadriceps and the knee. This piece is from four weeks after the surgery.  During this time I was between jobs, moving from the campus job I had to the College of Business.  There was potential that the new work would revive my enthusiasm.  But I found some blocks to that in how the College was structured that I hadn't anticipated when I applied for the position.  It's not that there weren't some successes and forward motion.  There were several of those.  But there was no home run, no fully online program.  The College has that now.  It wasn't ready for it then, though I only realized that in retrospect.

Let me switch to the country as a whole and draw some temporal parallels to my personal experience.  The aftermath of Hurricane Katrina happened at roughly the same time as the Illinois Compass debacle.  It seemed the Federal Government was incompetent and uncaring.   How the airlines responded after 9/11 gives a different look: it was the private sector as much as the government that was incompetent and uncaring.  This piece is talking about a time around when I had my leg accident.

The U.S. Commercial Air Transportation Analysis concluded in 2006 that despite advancements in technology, the overall customer flying experience was going down.  Cuts in food, customer service, capacity, onboard conditions were just some of the reasons given by the report, which came to this conclusion: "Virtually all travelers would likely say that travel through the aviation system today is less rewarding and more onerous than it was 5 years ago."

A year later there was the Surge in the Iraq War, a war that had been ongoing for five years.  People disagree about the efficacy of the surge, but it is clear that the American public was wearing down from the ongoing involvement with no obvious win to claim for the effort.  Surely that was demoralizing.    And this was still a year before the burst of the housing bubble.

Let me close by moving earlier in time, to the Bush Tax Cuts.  These were initially passed in the aftermath of 9/11, with the Republicans in control of both the White House and Congress, as is the case now.  Piketty's much discussed book Capital in the Twenty-First Century has a 2014 copyright.  Looking at the Bush Tax Cuts from the perspective of Piketty, what were we thinking?  When we willfully make such bad choices we eventually have to pay the piper.

And we're still paying the piper now.

Monday, March 20, 2017

Sweet Little Sixteen

This post is a diversion, allowing me to dovetail two different "sweet sixteens," each of which is timely.  The first is the Chuck Berry classic song from 1958.  The tune and lyric are both so familiar.  Also, it is kind of amazing to see a young Johnny Carson with Dick Clark before the song starts.


The other Sweet Sixteen is in reference to the NCAA Men's basketball tournament.  Yesterday, they completed the round of 32.  I put the remaining teams in Excel, along with their seed and their conference.  Then I sorted the information, once by conference, then again by seed.  The results are below.  A bit of analysis follows.
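The sorting exercise described above can also be done outside of Excel.  The teams below are a small illustrative subset of the 2017 Sweet Sixteen, not the full field.

```python
# Sketch of the Excel sorting exercise: sort remaining teams by
# conference, then again by seed.  Only a subset of the field is shown.

teams = [  # (team, seed, conference)
    ("Gonzaga", 1, "WCC"),
    ("Xavier", 11, "Big East"),
    ("Kansas", 1, "Big 12"),
    ("Wisconsin", 8, "Big Ten"),
    ("North Carolina", 1, "ACC"),
    ("Butler", 4, "Big East"),
]

by_conference = sorted(teams, key=lambda t: t[2])
by_seed = sorted(teams, key=lambda t: t[1])

for name, seed, conf in by_seed:
    print(f"{seed:>2}  {name:<16} {conf}")
```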

[Table: the sixteen remaining teams with seed and conference, sorted once by conference and once by seed.]
After the round of 64 had concluded, there was some criticism of the seedings, particularly that Wisconsin and Wichita State were under seeded.  One factor the selection committee doesn't use is prior tournament experience.  For both of those teams that may have mattered, perhaps a lot.  Yet at this juncture the seeding looks pretty good.  Of the top sixteen seeded teams (those seeded 1, 2, 3, or 4), twelve remain.  All sixteen won their games in the round of 64, and the twelve that remain also won their games in the round of 32, so collectively they went 28-4.  That seems pretty good work to me.

Three teams seeded 1, two teams seeded 2, three teams seeded 3, and all four teams seeded 4 remain.    Only one team seeded above 8 remains, Xavier, an 11 seed.

Regarding conferences, I read yesterday some stuff about the ACC being overrated.  A couple of points should be made on that score.  First, the sample size is small, and odd things can happen then.  Second, an injury that doesn't get reported in the press can matter, but we fans are unaware and so think the team we are rooting for is in a funk.  Third, some teams find a rhythm near the end of the season, so their level of play then is higher than it was earlier in the year.  It is hard to sort out the importance of the various effects - talent, chemistry, and experience.  This is what makes watching fun.  There is unpredictability in it.  See my post from several years ago, Small Samples, Hot Hands, and Flow.

Finally, you can look here to see how all the conferences fared since the round of 32 was completed.  With the exception of Gonzaga, all the remaining teams are from power conferences.  The results don't show that any one conference is better than the others.  But they do seem to indicate that schools from non-power conferences may be at a disadvantage, especially those that haven't previously broken into the upper echelon.  Middle Tennessee State put on a good showing as did Rhode Island.  They are the exceptions that prove the rule.