Sunday, December 27, 2020

Freedom of Blather

The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.
F. Scott Fitzgerald

I have been slowly reading through the pieces David Brooks recommends in his latest Sidney Awards piece.  He begins with this interview, How To Tell If You're Being Canceled.  To motivate that selection, Brooks precedes it with a discussion of the experience of James Flynn, who wrote a book defending free speech, only to have his publisher pull it.  I am going to take on these pieces in this post.

As a basis for my critique, I'll make reference to a post from a few years ago, The Demagoguery of the Reasonable Conservative Commentator, which does a rather thorough deconstruction of a column by Bret Stephens.  In that post, I refer to Stephens' column as a hatchet job, mainly because of what he omits from his argument, things I thought essential to include.  I often have this feeling of reading a hatchet job when I read pieces by Conservative authors (which I do less frequently as of late).  So, here I will supply the missing pieces.

First, let me consider self-publishing, such as in this blog post.  It is an option available to anyone with an Internet connection.  The host will have some terms of service that must be abided by, but otherwise the author can say whatever he wants - complete freedom.  This freedom doesn't imply there will be any readers.  But the message can certainly get out.  For somebody like me who has been at it for some time, perhaps the quality of past posts provides some indicator of the quality of the current post.  But maybe not.  The term blather, as I'm using it in my title, refers to writing or speech that has no mechanism of quality assurance to accompany it.  Freedom of blather is a condition of the world we live in.  (For those in a university setting where there are codes of conduct, which may even govern blather on platforms not associated with the university, one can post under an alias and bypass the regulation that way.)

Is blather, as I've described it in the previous paragraph, sufficient when considering speech?  If it is, then all of this is much ado about nothing.  Stop complaining.  Freedom of speech is alive and well. 

Second, let us consider the examples that were presented in these pieces.  The examples feature an organization with a strong reputation of its own (the New York Times is one, Middlebury College is another) giving its imprimatur to a piece of writing by publishing it, or likewise giving its imprimatur to a speaker by inviting that person to give a campus lecture.  The imprimatur is an implicit statement about quality assurance.  In each case that implicit statement is made to the organization's constituency.  For the New York Times, that is readers.  For Middlebury College, that is students, faculty and staff, and alumni.

In both cases, it seems that the organization has an obligation to provide the constituency with well-argued ideas that the constituency would tend to resist.  If most readers of the New York Times are Liberal, these would be pieces written by Conservative authors, and likewise for students at Middlebury.  In doing this, the role is something akin to a parent making the kids eat their vegetables at dinner.  It's good for them, even if they don't like the taste.  Does paternalism of this sort survive, even when the constituency is itself comprised of adults?  (I will leave the question of whether college students are children or adults unanswered here.)  The quote that I began this piece with provides a basis for arguing that the type of adults we aspire for in these constituencies should be able to handle the variety without undue difficulty.

In the economic analysis of quality assurance of this sort, as described in Klein and Leffler's article, The Role of Market Forces in Assuring Contractual Performance, it is the repeat purchases of customers that provide the incentive for a firm to produce high quality of an experience good (a good where the customer can't tell the quality at the time of purchase), even when there is a one-shot gain for the firm from producing low quality, because doing that is cheaper.  In that event we say the firm "cashes in" on its reputation.  The customers then "punish" the firm by shopping elsewhere in the future.  Likewise, what Jonathan Rauch refers to as the Cancel Culture can be thought of as institutions not honoring their trust relationship with their constituencies and then being punished for that.
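The Klein-Leffler logic can be sketched with a small back-of-the-envelope calculation.  The numbers below are illustrative assumptions of mine, not figures from the article; the idea is simply that the stream of premium profits from staying honest can outweigh the one-shot gain from cashing in.

```python
# A minimal numerical sketch of the Klein-Leffler repeat-purchase logic.
# All numbers are illustrative assumptions, not figures from the article.

def pv_high_quality(price, cost_high, discount, periods=1000):
    """Present value of producing high quality every period,
    which preserves the firm's reputation and its price premium."""
    per_period_profit = price - cost_high
    return sum(per_period_profit * discount**t for t in range(periods))

def pv_cash_in(price, cost_low):
    """One-shot gain from secretly producing low quality once.
    Customers then punish the firm by shopping elsewhere,
    so all future profits are zero."""
    return price - cost_low

honest = pv_high_quality(price=10, cost_high=8, discount=0.95)  # ~40
cheat = pv_cash_in(price=10, cost_low=2)                        # 8
print(honest > cheat)  # with a big enough premium, honesty wins
```

The comparison flips when the discount factor is low or the premium is thin, which is precisely when reputation fails to assure quality.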

So, what I find missing in the Rauch interview (I suspect it is missing in Flynn's book as well, and it is certainly missing in the webpage that Brooks linked to) is a look at what the implicit promise is between such a high profile organization and its constituency and what would constitute a breach of this promise.  Not everyone gets an Op-Ed in the New York Times and not everyone gets to give a public lecture at Middlebury.  So there surely are criteria for ruling people out as speakers or ruling their written work out for publication.  Were those criteria applied well in the cases cited or not?  Rauch bypasses this entirely:

The notion here is that emotional injury is a kind of harm like physical injury, and because it's a kind of harm it's a rights violation. The problem is this is a completely subjective standard, and it makes any form of criticism potentially subject to censorship and cancellation and lumps science into a human rights violation.

Once emotional injury is dismissed, it appears the organizations are never in a position to violate the trust.  The blame can then fall only on those doing the canceling.  In my view, that is a poor analysis of the situation.

Let me make specific comments about the cases cited.  Students at Middlebury could very well learn about the ideas of Charles Murray by reading The Bell Curve or by watching one of the many videos of him on YouTube.  It is a fatuous argument to say he has to be invited to campus for students to hear that view.  In the case of the New York Times publishing the Op-Ed by Tom Cotton, a quite different argument can be made for not doing it, which does not deal with emotional injury.  The Times, like it or not, is part of the political landscape.  When a well known Conservative political figure publishes a piece in the Times, it lends legitimacy to the argument being made.  We are living in a wackadoodle world now where Trump does crazy things that should be strongly resisted.  Publishing Cotton's piece was a mild form of embrace of the Trump view.  I would hope that in the future we can return to where this sort of consideration is not relevant.  But it would be naive to think it is not relevant now. 

Third, the interview with Rauch makes no mention of sexual harassment or of misogynistic speech.  I can't tell whether the interviewer, Nick Gillespie, deliberately steered the discussion away from that example or not.  Had it been included, there would be a clearer linkage between emotional harm and physical harm.  There is no doubt #MeToo is a form of the Cancel Culture.  But if you interviewed women, many would say that it has had a material impact on reducing sexual harassment.  Including this example would weaken Rauch's argument.  Its omission is bothersome, for that reason.

Let me close by noting that over the years I've written a handful of posts that focus on freedom of speech.  In case the current post stimulates thinking in the reader, its status as blather notwithstanding, the reader might find some of these others interesting as well. 

Thursday, December 24, 2020

What Small Non-Profit Organizations Want From Santa

A few days ago a friend from junior high and high school posted a link to this piece about the charitable giving of MacKenzie Scott, the ex-wife of Jeff Bezos.  She is an incredibly wealthy woman.  Her charitable giving is unlike yours and mine, even if you are quite generous in this regard.  Our giving is a drop in the bucket, one that we hope fills up with the collective giving of others.  Her giving matters in itself.  It is perhaps 5 orders of magnitude greater than what we give.  The piece, as I read it, was meant at least in part as a critique of how most foundations give out their money, particularly in that Scott doesn't require grant proposals and she gives out large amounts with no restrictions on how the money is spent.  The recipients are identified with verifiable information about prior performance.  (More on that idea below.)  These are organizations that have proven themselves trustworthy.

Linked within that piece is an essay that MacKenzie Scott wrote, which describes the giving and provides a list of the recipient organizations that have received funds this year.  In this post I'm going to offer a critique of that essay.  Before I do, however, I want to note that it is her money she is giving away.  She can do with it what she'd like, with the proviso that for tax purposes the recipients must have the right status with the IRS to receive charitable donations.  So my critique can be thought of as imagining a hypothetical where the funds were public monies instead, or a different hypothetical where I'm offering consulting advice to MacKenzie Scott, advice she doesn't seem to be getting yet from her team of advisers, for how she might modify the giving in the future.

For those who don't already know me, I'm retired now but was an academic economist for about half my career at the University of Illinois.  I made a switch in the mid 1990s to educational technology and was a high-level administrator on campus in that area until I retired.  I stopped publishing in economics journals at around the time of the career switch, but I continue to write on economics and sometimes on economics and higher education, such as this recent very long blog post.  (It is not peer reviewed.  It reads as something of a combination of an opinion piece and an economic analysis, with its own hypothetical to consider.)  Also, when I led the SCALE project on campus, from summer 1996 to summer 2000, I ran an internal grant program to support novel ed tech projects, and I was also able to make "officer grants" that were smaller but could be given without the approval of our review committee.  This is clearly not the same thing as what MacKenzie Scott is doing, but it does suggest I have some experience that may be relevant in considering the perspective of the donor.

Let me make one other caveat before beginning, by focusing on the various food banks that are on MacKenzie Scott's list.  Recently I've been making donations to a food bank in my area, which did not make the list.  Indeed, I searched the list for organizations from Illinois.  There were 5 of them, but no food banks.  I then searched Google Maps for food banks in Illinois.  There are a multitude of them and it seems by their names that there are several different providers.  I began to ask myself the question, should this sort of service be provided by a charity or should it instead be provided by government?  Is there a way that economic theory can answer this question or is it simply a matter of individual preference?  Let me note that the University of Illinois, considered a public university, gets its funding from a variety of sources.  When I started here back in 1980, the bulk of the funding came out of State of Illinois tax dollars.  That is no longer true.  So one might want to consider the historical record on food banks and their equivalent.  I have a vague notion (one I haven't tracked down) that soup kitchens during the Great Depression were funded, at least in part, by the New Deal.  And the War on Poverty from when LBJ was President expanded these sorts of services - via Food Stamps, for example.  Then, since Reagan, much of this was undone.  So one might argue that Liberals have one answer to this question while Conservatives/Libertarians have a different answer.  On the supply of funds for such services, maybe that is as far as we can get.

But there is a different argument for the Liberal side that the social safety net is needed in a just society a la Rawls, and should be considered a public good.  Public goods should be provided by government.  In the absence of government provision, private charities can take up some of the slack.  But they may end up excluding a good segment of the population.  There is also a supply side argument which, frankly, I don't know enough about to argue except by waving my hands.  This is in regard to the geographic distribution of the food banks and their respective supply chains.  Is that, perhaps, inefficient because of wasteful duplication?  I want to observe there are two such food banks in the Champaign-Urbana area, which is what suggested the question to me.  After all, we are a college town, not a major urban area.  But maybe we are a large enough MSA to warrant two food banks.  (Wikipedia lists us as ranked 202 in the U.S.)  In any event, if MacKenzie Scott will be funding food banks in the future, some of them not yet on the list and perhaps repeat funding for some that already are, then it behooves us to understand not just the good functioning of an individual food bank, but the functioning of the entire system.  Could things be made more efficient?  Could hungry people who find access to be difficult at present be granted access by some other approach?  These questions deserve an answer from looking at the system as a whole.

Now let me turn to my critique.  I will begin with a brief mention of some small non-profits I'm aware of, either because I've made donations to them at the urging of a friend, or because I've had some other interaction with them.

  • Montgomery's Kids - helping kids who live in foster homes in Montgomery County, Maryland.
  • Imerman Angels - helping those with cancer via personal connections to others who have been through the experience. 
  • Fit to Recover - helping former addicts lead a healthy lifestyle through exercise.
  • Spirit In Action - A micro-grant organization that makes some of its grants internationally.
  • Universal Love Alliance Foundation - A U.S. charity that takes donations and funnels them to Universal Love Alliance, a human rights organization that focuses on the rights of marginalized people in Uganda.  (I work for both ULAF and ULA and will elaborate about that some below.)
  • BlaqOut - An organization to support and advocate for Gay Black males. 

To complete the picture here, the first three on the list I know through friends.  Universal Love Alliance has received several grants from Spirit In Action.  BlaqOut provided funding for ULA to do a session about HIV prevention, testing, and treatment held at the ULA office.  Also, none of the organizations listed above made MacKenzie Scott's list.  Do they have even an epsilon probability of getting funded by her in the future?  Or is that completely out of the question?  Finally, I want to note that I tried to order the organizations by "edginess," perhaps not a well defined concept, though I hope the reader gets the point.  Would edginess matter in whether an organization receives funding or not?

There are two criteria that I want to challenge.  One is how MacKenzie Scott is managing risk, trying to make sure that each grant has a high chance of success.  For someone with limited funding, this may be right.  But Scott isn't in this situation.  If there is some positive correlation between risk and return, then minimizing risk does not produce the socially optimal solution.  Consider this piece from more than a decade ago about cancer research during the previous 40 years.  It argues that while progress has been made, the big breakthrough we've been hoping for has eluded us, because individual researchers have been too risk averse in their approach.  But taking on risk means failure is more likely, and in the presence of failure there are apt to be Monday morning quarterbacks who challenge the entire approach.  In order to take on such risk, the funders must be secure from such external pressure.  Turning away from cancer research and to organizations that MacKenzie Scott might support, edgy organizations that pioneer new ways of doing things, and do so at small scale in some particular locale, might produce a replicable model in the event they succeed.  In that case the benefit would extend far beyond the benefit created in their locale.  This sort of thing should be encouraged, even if there are substantial risks at the get-go.
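To illustrate the risk-return point with a toy calculation (the probabilities and benefit figures below are made-up assumptions, not estimates): an edgy pilot that usually fails can still have a higher expected social benefit than a safe grant, once you account for the value of a replicable model.

```python
# A toy expected-benefit comparison between a "safe" grant and an
# "edgy" pilot.  Probabilities and benefit figures are made-up
# assumptions for illustration only.

def expected_benefit(p_success, local_benefit, replication_multiplier=1.0):
    """Expected social benefit of funding one grant.  A successful
    edgy pilot yields a replicable model, so its benefit is scaled
    by a multiplier representing adoption in other locales."""
    return p_success * local_benefit * replication_multiplier

safe = expected_benefit(p_success=0.95, local_benefit=1.0)
edgy = expected_benefit(p_success=0.20, local_benefit=1.0,
                        replication_multiplier=25.0)
print(edgy > safe)  # the risky pilot dominates under these assumptions
```

Of course, the conclusion depends on the multiplier; the argument is only that a funder with very deep pockets can afford to hold the whole portfolio and so should care about expected benefit, not the failure rate of any single grant.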

The other criterion I want to consider is focusing exclusively on American organizations whose beneficiaries are also in the U.S. or Puerto Rico.  Why not fund internationally, and particularly to organizations in Third-World countries, perhaps through U.S. based organizations that already do that, like Spirit In Action (or ULAF)?  Is this too a matter of managing risk?  Or is there some other consideration?

Having encountered some scams in my ULA/ULAF work, I will concede that the risk is there and perhaps is larger than with domestic non-profit organizations.  But the need is enormous.  Further, there are large social/political factors that matter.  Taking Uganda as but one example, it is worthwhile to watch the documentary, God Loves Uganda, which provides depth on American Evangelical Christians coming to Uganda, largely to spread homophobia.  This needs a credible counter force.  Where will that come from?  Further, the U.S. Department of Defense has given military aid to many African countries, Uganda included.  The funds are diverted from their intended purpose by President Museveni and then used to quash the political opposition.  So, even if one's concerns are purely humanitarian, there is a need to show presence in Uganda, so that ordinary people there can have realistic hope of a better future.

Now I want to turn to the human rights work that ULA does.  In 2018 ULA received a grant from the U.S. Embassy in Kampala to do workshops for religious leaders, teaching them about tolerance and acceptance of LGBTI people.  While this might seem a strange thing from a U.S. perspective, Uganda is a very religious country and these religious leaders have important positions in their respective communities, as they guide the views and beliefs of other community members.  The workshops were transformational and highly regarded by the Embassy members who sponsored the grant, some of whom attended for a day or two.  Near the end of the first workshop, video interviews were done with some of the participants; each interview has just one participant, who described what was learned.  These videos are compelling to watch and speak to the strong impact of the workshop.  Unfortunately, they can't be publicly shared.  Doing so might put those who attended the workshop in danger.  So we have credible information about workshop effectiveness, but we can't broadcast it.  The best we can do is to rely on word of mouth, via the endorsement of sponsors.  And, I'm not sure why this is, but Embassy personnel seem to turn over fairly frequently, as these people are assigned to other U.S. Embassies elsewhere around the globe.  Based on this experience, my expectation is that others who do edgy work often can't share information about the effectiveness of the work, for similar reasons.  This makes it harder to get future funding.  Maybe the funders, such as MacKenzie Scott, should go looking for such projects, to counter these difficulties.  Or they should go looking for organizations like Spirit In Action, which through their own funding have identified useful projects and organizations that would otherwise defy identification.

I want to make one other observation.  I am the ghostwriter for ULA.  I help them with grant proposals, training documents, and correspondence.  I described what I do at some length in this blog post.  The question for MacKenzie Scott is this:  Can the ghostwriting function be replicated, performed by others and incorporated by different organizations?  I'm going to guess at the answer here.  It is not immediately replicable, but we can learn how to replicate it, so the service is provided by others and made available to organizations that have neither the skill nor the resources to devote to writing grants themselves.  One impediment that surely must be overcome arises when the ghostwriter has substantially more formal education than the leaders of the organization that employs the ghostwriter, which seems likely to me.  The ghostwriter must be a partner along with those leaders; the ghostwriter can't act as a boss without the relationship breaking down.

If that guess is right, then MacKenzie Scott could fund a variety of pilots that would be aimed at replication.  If some of those pilots are promising, the next step would be to scale up the approach, then "make a market" between the ghostwriters and organizations they support, by covering the full cost of the activity and developing a matching process that is effective.  Viewed in its entirety, this would be a kind of hedge.  It may be that other large foundations abandon their grant funding approach and embrace, instead, the approach MacKenzie Scott has already taken.  But if there is a lot of inertia, the old ways will stick.  The hedge then would accommodate that and yet make many organizations credible for grants from these foundations where in the past these organizations wouldn't have bothered to apply. 

Let me close.  As the Treasurer for ULAF, I dream of a very generous donor, or a small number of regular donors who make substantial contributions.  Now it's a constant struggle to assure adequate funding for ULA to operate, especially given all the uncertainty in Uganda at present.  On the other hand, I don't expect Santa to fulfill this wish soon.  But I hope he can leave a note - keep at it, good things come to those who wait.

Friday, December 18, 2020

Doing What Is Right - Following An Ethical Code Or Figuring It Out By Situational Analysis?

I accidentally deleted this post.  I found the editor version and it is republished below.  But the url of the post is different now.  Sorry about that.

-----

Fiction, whether a short story or a novel, a movie and a TV show as well, offers entertainment, certainly, but also gives the author's point of view, sometimes on issues where the characters must make ethical choices.  This author point of view stands in contrast to the point of view of the reader, who may not have thought about the underlying issues at all, or considered them only from a substantially different perspective.  We readers thus get a bonus from having a go at a work of fiction.  We get pleasure from the story, but we also get moral instruction of a kind that might actually reach us, as long as we embrace the story and consider it in reflection as well as reading it the first time through. 

As I've been a John le Carré fan for quite some time, typically reading one of his novels during the winter holidays, I can report that I was first drawn to them purely for the entertainment, but in the more recent ones I've read the ethical components were more evident to me.  Perhaps I've reached the age where I'm ready for these ethical lessons, or especially want something of this sort as a contrast to the tenor of our time, a way to keep a bit of idealism alive inside me when it is so easy to become completely jaded.  Given le Carré's recent passing, in the next day or two I do plan to read one of his older books for a second time, one that has gathered dust on my bookshelf.  And I do want to note that the ethical education in the stories is not about purity in behavior, but rather a way to navigate the turbulence of actual life.

“Thematically, le Carré’s true subject is not spying,” Timothy Garton Ash wrote in The New Yorker in 1999. “It is the endlessly deceptive maze of human relations: the betrayal that is a kind of love, the lie that is a sort of truth, good men serving bad causes and bad men serving good.”

However, rather than use le Carré as my focus for discussing the question in the title of this post, I'm going to rely instead on Game of Thrones, which I've been watching recently as I do the treadmill in our basement.  Indeed, I've learned to turn on captions while doing this so the noise of the treadmill doesn't block the message that I should be hearing.  I recently finished going through all 8 seasons and am now going through the episodes again, to make further identification with the characters and the story line.  I actually want to focus only on one little bit of the plot.  Jon Snow's true identity is revealed near the end of season 7.  When Jon learns this he feels obligated to tell others, first Daenerys, then his "sisters," Sansa and Arya.  Daenerys pleads with Jon not to tell them, but he does what he feels he must.  He swears them to secrecy.  Yet Sansa doesn't keep the secret, and that leads to many adverse consequences.  This makes much of season 8 a tragedy.  So, one wonders, could the tragedy have been avoided if Jon had exercised discretion rather than feeling bound by some code?

Before trying to answer this question, I'd like to make several asides.  First, I've been noodling on this post for several weeks, not fully satisfied with what I had come up with.  I did come to an overall conclusion, based in part on my son Nathan, who had read all the books by George R.R. Martin, telling me that seasons 7 and 8 of Game of Thrones went beyond the books.  David Benioff and D.B. Weiss, who wrote the screenplays for the TV series based on the books in the first six seasons, were the authors of that part of the story.  The conclusion is that they should be encouraged to write a few different versions of the end of season 7 and season 8, keeping the war with the Night King intact, but varying the story by whether Jon Snow ever learns his true identity or, if he does, whether he keeps it secret.  I'm guessing there would be quite an audience for these alternative endings, even if they never were made into a TV show.

Second, in the middle of the noodling it occurred to me that some of my old economics research is actually somewhat relevant to the question in the title. Thirty years ago I wrote a paper called Flexibility Versus Commitment in Strategic Trade Policy Under Uncertainty: A Model of Endogenous Policy Leadership.  (The Working Paper is dated April 1990 while the published version appeared in the Journal of International Economics the following year.)  Indeed, in the second quarter of my first year of graduate school, we learned about the debate in macroeconomics between those who favored rules for fiscal and monetary policy (Monetarists) and those who favored discretion (Keynesians).  So in other contexts, where the concern is not fundamentally ethical, though you could take economic performance as a kind of ethical imperative, I've been exposed to this type of question for quite a long time and find myself engaged with it.  I'm not sure whether others who watched Game of Thrones would be likewise engaged, but perhaps they would.  

Third, this bit about Jon Snow's true identity exposes a variety of potential inconsistencies in the story.  I'll mention only a couple here.  There may be others.  Daenerys views Jon Snow's true identity as giving him more of a legitimate claim to the Iron Throne than she has, because he is a male heir in the Targaryen bloodline and the society is quite sexist that way.  Yet Ned Stark doesn't consider Jon Snow as a possible successor to Robert Baratheon at all.  Indeed, in retrospect it seems Jon was sent to join the Night's Watch so the question would never come up.  But shouldn't Ned Stark have seen Jon as the rightful successor rather than Stannis Baratheon?  The other inconsistency is that Catelyn Stark never questioned the story that Jon Snow was Ned's bastard son.  But Ned was otherwise such a good and upstanding person.  Did he whore around nonetheless?  And Ned's sister is Jon's true mother.  Catelyn must have known approximately when Ned's sister died, though she wouldn't have known that the death came from complications in childbirth.  But might Catelyn have put two and two together, given what she did know?  For the story to work as it was told, it required that Catelyn, for whatever reason, didn't do this.

The last aside comes from a lesson I learned the last time I taught a class for the Campus Honors Program, back in fall 2009.  The CHP students are among the best we have on campus.  Yet they asked me pointedly, more than once, to be very direct with my instructions and to avoid subtlety.  I gathered from this experience that they felt they weren't very good at reading between the lines and/or they had been badly burned from having made what they considered a minor mistake of this sort.  This point actually manifests in Game of Thrones repeatedly.  Once he becomes the King's Hand, Ned Stark is terrible at playing the game of palace intrigue.  He doesn't understand fully what is going on and he doesn't seem to care to devote his attention and energy to figuring it out.  The characters who are good at this game in the first season - Cersei, Lord Varys, Littlefinger, and Tyrion - are each ethically challenged, though some more than others.  The story makes it seem that one truly understands what's going on only if one wants to practice deception.  I don't think that is true, but in the world created in Game of Thrones, the ones who practice deception surely need to have a good understanding of what's actually happening.

Now I want to briefly sketch how a "what if" analysis might be done by the character of Jon Snow on whether to tell others about his true identity.  First, one might imagine playing out the story as if Bran never learned about Jon's identity nor confirmed with Samwell that the birth was in wedlock, so that Jon wasn't a bastard.  Without this as part of the plot line, the story still holds interest because of the evident rivalry between Sansa and Daenerys.  Would Jon have been able to resolve this in a way that was tolerable, if not amicable?

Here is an interesting wrinkle that some viewers might have liked to see.  Sansa became good at the palace intrigue game by carefully observing others who were good at the game and, of course, by being the victim of many of those decisions without having the power to undo them herself.  This provided the strong motivation for her learning.  Jon recognized this in Sansa early on after his return to Winterfell.  Could he have had Sansa design an arrangement that Dany would accept and which would also work for Sansa?  In this arrangement Jon would somehow be released from being King in the North, or be allowed to hold that title in absentia, while living with Dany in King's Landing.  Sansa would be the Lady of Winterfell and the surrogate ruler of the North (or some other title to that effect).  The bending of the knee part might be moot because of the great distance between Winterfell and King's Landing.  One would think a peace along these lines would be possible.

There is a different issue in this scenario that would have to be confronted.  Daenerys' downfall was an example of power corrupting and absolute power corrupting absolutely.  In this case, substitute feeling betrayed for corrupting.  Of course, everyone was betrayed by Cersei, who didn't send her troops to fight the Night King.  That sense of betrayal was still there.  But Daenerys' sense of betrayal was so much stronger in the version that aired, because after Jon Snow's secret was made known to certain characters, they betrayed Daenerys: Lord Varys, certainly; Tyrion, for telling Lord Varys; and Jon Snow too, both for telling Sansa though Dany had asked him not to, and for no longer loving her physically, as it became apparent that Dany was Jon's aunt and intimate relations between close relatives seemed unseemly.  With all this betrayal, Daenerys felt a rage that couldn't be contained and that blocked her good judgement.  It made her a fierce opponent in fighting the war against Cersei's forces, but it made her unsuitable to rule.  If, instead, none of the other betrayals had occurred, would Daenerys then have been able to thread the needle, having enough rage to still win the war but then, with that accomplished, being able to pull back and show compassion for the vanquished as long as they would lay down their swords?  Two possible story lines emerge depending on whether the needle gets threaded or not.

In the next scenario, Jon does learn about his identity, but he decides to sit on this information rather than tell those people he felt obligated to tell in the show as it aired.  This would keep the secret, as Bran and Samwell wouldn't tell anyone else.  In some ways, it would play out the same as in the case where there was no secret to reveal. The big issue in this case is Dany being Jon's aunt, and how he would deal with that fact.  There are two alternatives.  The first is that once Jon verified Dany couldn't bear children, her being his aunt pretty much becomes immaterial, as inbreeding wouldn't happen. The other is that the possibility of a child can't be ruled out, so Jon figures out a way to avoid sleeping with Dany thereafter.  As King in the North, he would return to Winterfell after the war while Dany would stay at King's Landing.  Could this be pulled off so that he avoided situations where he might sleep with Dany, both before the big battle and after?  Again there are two possible story lines.  In the one where it doesn't work out well, Dany starts to question whether Jon loves her, but she won't have a good reason for why he doesn't. This could lead to a personal tragedy for them, without it leading to the larger tragedy for all the residents of King's Landing.

The last scenario I'll consider here is where Jon obeys Dany's wish and tells nobody but her.  They then would have it out about the aunt thing.  (It is odd to me that it was never discussed between them and didn't seem to bother Dany at all.) Somehow they work through to a mutual understanding, even if that is laden with tears and emotional pain.  

I do have this feeling that the bending of the knee, as a symbol, could be modified if Jon was at Dany's side.  She could be more generous then and treat those in leadership of the other kingdoms as partners rather than as subordinates. The show was long on tragic outcomes, which for pure entertainment might be right.  But as an approach to good management, the purely traditional approach was lacking.  Further, the external threats to the seven kingdoms seemed to have been eradicated, at least for the time being.  Once the war with Cersei was won, this appeared to be a time when all should get along.  If that was obvious to the viewers, why wasn't it obvious to the characters in the show?

Monday, November 23, 2020

Tough Love for Republican Enablers of Trump Who Will Still Be Serving in Government in 2021

As it now appears that Trump's refusal to recognize the electoral victory of President Elect Biden is likely more grifting and not a true play to stay in office, one wonders why elected Republicans have been so slow to recognize the election results and, moreover, why they didn't immediately repudiate the behavior as a threat to democracy.  I confess that for a while I was quite preoccupied with the possibility that this was a real threat and I'm still having trouble letting go of the idea entirely.  But at least now I can think forward to other considerations about our national politics. 

So, let's begin with a series of questions where we can't know the answer but we can guess at the likelihood of the answer going one way or the other.  

1. If there had been no pandemic, would Trump have won reelection?  

2. If the Republicans in the Senate had a crystal ball at the time of the Senate Trial after Trump was Impeached in the House about all the events that would follow, not just the pandemic but also how Trump mismanaged it, would they then have taken the trial seriously, allowed additional evidence to be presented, and found Trump guilty, in which case Pence would have become President?  

3.  Is Trump the likely front runner for the Republican nomination for President in 2024 or will he be out of the picture by then?  If not Trump, is Pence the front runner?

4.  Will the Republicans become the minority party for some time to come until they find a new message and way to appeal to voters that stands in contrast to the approach taken by the Democrats?

These questions are about the future.  I now want to talk about the past and the unholy alliance between Republicans in Congress and the Trump administration.   There was an implicit contract of the form:  

a) The Republicans in Congress would unequivocally support the President as long as he continued to deliver on his side of the bargain. 

b) President Trump, on his part, would deliver the goods - tax cut legislation that benefited the very rich and nominating very Conservative Justices and Judges. 

As long as this implicit contract continued to hold, each party to the contract could otherwise behave as they might.  The Republicans in Congress, as Trump's loyal supporters, felt secure in appealing to his base and could count on that support in elections.  That the House flipped in the midterm election of 2018 showed there was perhaps some weakness in the underlying assumption that held the contract together.  Trump, for his part, could make Twitter posts that were utter fabrications, but in this way speak directly to his base.  He could do likewise with his public utterances.  The vast majority of the Republicans in Congress acted as if this Orwell-speak was perfectly normal.  As most of those messages were meant to fire up the base, this actually served to cement the deal. 

In other words, the implicit contract meant there would be no Republican Congressional oversight of President Trump, apart from those items in the contract.  Thus, Republicans in Congress share a good deal of the blame for Trump's mismanagement of the pandemic.  Of course, there were other enablers.  Attorney General Barr is probably the most prominent, especially his turning the Mueller report into a dead letter. So, Republicans in Congress don't bear all the blame.  But many of them will continue to be in Congress in 2021.  Barr and other enablers currently in the Executive Branch will not. 

Then, considering judicial appointments, the stark difference in circumstances between not taking up President Obama's nomination of Merrick Garland to the Supreme Court, which happened on March 16, 2016, while taking up President Trump's nomination of Amy Coney Barrett, on September 26, 2020, shows there was absolutely no principle that guided these two decisions, at least no principle that Democrats could respect.  Instead, this was done for pure expediency.  Further, the first of these almost surely had material impact on the Presidential election in 2016, as religious voters who supported Trump turned out in great numbers, because it seemed a possible path to repealing Roe.   This has left an incredibly bad taste in the mouths of many Democrats, one that won't improve unless drastic steps are taken.  The Democrats demand retribution for these bald acts of hypocrisy.  But what drastic steps are there?  I wrote about this not too long ago in a post called Honor Among Thieves and Supreme Court Justices.  The best possible solution would be for at least one of the three Justices appointed under Trump to step down from their position and do so of their own accord, for the greater good and to unify the country.  I don't expect that to happen, but it would work wonders if it did.  Absent that, I will speculate on other possible drastic steps below.

* * * * * 

I've always hated it when Conservative pundits would write a piece about what the Democrats should do, as if they knew better.  I'm saying that here because I'm going to indulge a little in the other direction, a Liberal telling Republicans what they should do.  But, mainly, the message is to simply tell the truth, where before there has been nothing but denial.  Now I want to note that this can be difficult psychologically for the recipients of the message.  So, before doing anything else, I would recommend a collective viewing of The Ox-Bow Incident. This should be coupled with a discussion after the movie.  How would they (the movie viewers) have reacted if they had been for hanging the men who ended up being hanged in the movie, only to find out later that those men were innocent? Frankly, I don't know that such a discussion would be sufficient preparation and, further, it may seem rather quaint to rely on a movie from 1943 to get the message across.  But the reality is that Republican voters have been played and have come to believe things that are patently false.  It's easy enough to see this as an outsider.  It is no small task to have this change in perspective happen when one is a true believer.

The elements of what is going on can be considered, by analogy, from the film The Insider, which itself is not about politics.  It is about Big Tobacco scamming the public, particularly teenagers who would be the next generation of smokers, by putting additional addictive drugs inside cigarettes, so the effect was much stronger than just the nicotine from the tobacco. Making customers addicted was part of the business model for Big Tobacco.  The main character is the chemist who supervised this program and then severed ties with the company, an act of conscience; afterwards he violated the non-disclosure clause of his separation agreement by bringing the story to Sixty Minutes, where Big Tobacco was a sponsor.  It's a good movie, so I won't give away more of the plot, but even this little bit does foreshadow much of the kinds of things the Republican party has been doing.

For the Republicans, the addiction comes in the form of messaging. Perhaps Rush Limbaugh is a good example as his syndicated radio program started in 1988 and is still popular today, so he spans from the end of the Reagan administration to now. There are, of course, other messengers, Trump himself with his Tweets and public announcements, Fox News, etc.

It is not, however, just the messaging.  There is something else, in addition.  The metaphor is kind of insulting, I admit. But it is worth considering.  The recipients of the messaging regard it as truthful, even though it frequently is not and is instead designed to stoke their anger.  We should ask why that is.  For those long in the Republican fold, this can be explained by confirmation bias - the message is in accord with the recipient's prior held worldview.  But for those who weren't always in the Republican fold (or their parents weren't) one should consider the first causes that made allegiances change.  I'll mention a few here.  There may be others.  

The decline in manufacturing and the concomitant weakening of private sector unions is a biggie, from my perspective.  The reality is that a blue collar worker has much worse economic opportunities now and there is a strong sense that these people were abandoned by the Democratic party.  A different but related explanation comes from Robert Putnam in his book Bowling Alone.  Our views about the world were once normalized by the social organizations we belonged to.  (Being in a bowling league is an example, but only an example.) As membership in these social organizations declined (because economic opportunity declined) it created a space for messaging that was harsher and angrier.  There was receptivity to this messaging to fill the void.  And the lack of economic opportunity meant there was already a level of anger in the recipient, so the message would be favorably received.  

Let's consider a couple of the bigger lies.  One is the anti-science view, particularly that global warming is a hoax.  Of course, among the Evangelicals, the anti-science attitude goes back far longer, to viewing Darwin's Theory of Evolution as a threat to the literal meaning of Holy Scripture.  But Evangelicals are not the majority of the Republican base.  What explains the anti-science view of the others?  I conjecture that it is two factors working in concert.   One is a lack of critical thinking, in this case the inability to ask the question - who benefits when enough people believe global warming is a hoax?   The evident answer is the big energy companies - oil and natural gas - and other producers who don't need to invest in technologies that reduce carbon emissions. Are the CEOs and high-level owners of these companies Democrats or Republicans?  A way to investigate whether the Republican base is aware of this argument is to survey them on this simple question: Are they aware of the Koch family?  And then one should follow up by asking whether they are aware that much of the messaging that global warming is a hoax is supported by "research" that the Koch family finances.  

The anti-science bias among the Republican base is surely partly responsible for why so many have refused to wear a mask during the pandemic.  I'm not sure there was anyone who benefited from this anti-social behavior the way the Kochs have benefited from global warming being viewed as a hoax.  But it is clear that Trump played this card, based on a delusion that the economy would perform better if people were out and about rather than hunkered down in their own dwellings, even if that spread Covid-19 at a much greater rate.   Will the Republican base see that in retrospect?

The other big lie concerns racism and anti-immigration views.  The Republicans have no plans to substantially raise incomes for blue collar workers and provide decent healthcare for them.  The attack on Obamacare without a viable alternative makes it clear what's going on.  The very rich Republicans don't want to pay the tax that generates the subsidies in Obamacare to make it affordable for ordinary Americans.  Being racist and anti-immigration is a way to refocus attention away from the bread and butter issues and use these as a means for generating anger as a visceral reaction.  It is a bait and switch, which it seems the Republican base has not caught onto.

This next bit might be entirely wishful thinking on my part.  I'm envisaging a post-Trump Republican party, that remains Conservative at its core but embraces honest messaging.  It has younger champions, who are devoted to making the lives of those in the Republican base much better, but if that can be done by cooperating with the Democrats that would be much preferred to giving them the finger.  And there must be some agreement that we're all in this together.  Republicans should not be in a civil war with Democrats.  

To get to this, there must be some reckoning with the Trump years, pre-pandemic and then after as well.  From a bread and butter point of view, how did members of the base do?  From a messaging point of view, did these members realize that Trump lied, early and often?  Would they prefer lying in the future or do they now want to hear the truth, even if a good deal of it is grim?   And to the extent that the lying goes well beyond Trump, with much of it originating in the media, there needs to be some way to hold a mirror up to the media: where before they were agents of propaganda, moving forward they too must embrace honest messaging, which sometimes means being self-critical within the Republican fold.  Drawing a connection between Joseph McCarthy and Donald Trump might facilitate this. 

Would the Democrats welcome this change in the Republicans?  Tactically, it is unclear to me.  The Democrats are certainly not a monolith.  There is one piece after another to read about divides in the party between the Center and the Left. A Conservative Republican party that had none of the thuggish tendencies of Trump might peel off quite a few Centrist voters among Democrats and Independents as well.  But it does seem clear that this would be much better for the country.  The parties would be far more representative of the population as a whole.  The uber rich would be consigned to a back seat, where they belong.

* * * * *

I want to get to the title of this post.  President Elect Biden has already announced a series of Executive Orders that he will put in place as soon as he takes office. He can do this quickly, without Congressional approval.  In contrast, for legislation that gets through Congress, all eyes are on the runoff elections in Georgia, to see who has the majority in the Senate.  If the Republicans retain control there, the assumption is that mostly it will be gridlock.  Compromise between the two parties to produce legislation that both sides can live with now seems a historical artifact, nothing more.  In this sense Congress is broken and it has been broken for some time.  Consider this piece by Evan Bayh from ten years ago. And then consider the high number of those not seeking reelection in the House in 2018.  Serving one's country as a representative of the people isn't what it used to be.  Indeed, this morning while my wife was tuned into MSNBC, I heard a commentator explain that the Republicans who've been reelected for 2021 and who haven't yet recognized Biden's victory are still frightened of Trump, even though he is a lame duck.  Being afraid of the backlash from disobeying the current party leader apparently dominates doing the right thing.  

Evidently, Trump will want to keep pulling the strings, even after he's out of office.  What will stop that from happening?  For the sake of argument, let's say the Biden-Harris team has the ability to force Republicans in office out of the current stasis, and to act more as the opposition party has done historically.  Further, imagine that the new regime is successful - in ending the pandemic, in reviving the economy, and in creating a sense for all Americans that we're in this together.  In this case, surely Trump's power would wane, and the seeds would be set for a new Republican party, as sketched above. 

If this actually were possible, a game theoretic approach would describe the situation as a coordination problem.  Currently, we're in a low level equilibrium where things are a mess.  But a high level equilibrium is possible, if only the players (in this case Republicans in government) came to believe that's where we're heading.  The tough love in the title then should be considered a kind of shock treatment to get those beliefs to change.  It's not merely punishment for past injustices rendered.  Although, let's face it, the Biden-Harris team has a real reason to bear a grudge given the present circumstances.  
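The coordination-problem framing can be made concrete with a stylized 2x2 game.  The payoff numbers below are invented purely for illustration (my own sketch, not drawn from any study of this situation); what matters is the structure: two pure-strategy equilibria exist, one low level (mutual obstruction) and one high level (mutual cooperation), and which one we land in depends entirely on beliefs about what the other side will do.

```python
# A stylized 2x2 coordination game.  Each side chooses to Obstruct
# (low-level play) or Cooperate (high-level play).  Payoff numbers are
# invented for illustration only: (row = Republicans, column = Democrats).
payoffs = {
    ("Obstruct", "Obstruct"): (1, 1),    # low-level equilibrium: gridlock
    ("Obstruct", "Cooperate"): (2, 0),
    ("Cooperate", "Obstruct"): (0, 2),
    ("Cooperate", "Cooperate"): (3, 3),  # high-level equilibrium: both better off
}

def pure_nash(payoffs, actions=("Obstruct", "Cooperate")):
    """Return all strategy profiles where neither player gains by deviating."""
    equilibria = []
    for r in actions:
        for c in actions:
            u_r, u_c = payoffs[(r, c)]
            best_r = all(u_r >= payoffs[(r2, c)][0] for r2 in actions)
            best_c = all(u_c >= payoffs[(r, c2)][1] for c2 in actions)
            if best_r and best_c:
                equilibria.append((r, c))
    return equilibria

print(pure_nash(payoffs))
# Both (Obstruct, Obstruct) and (Cooperate, Cooperate) are equilibria;
# the "shock treatment" in the text is about shifting beliefs from one to the other.
```

Since both profiles are self-reinforcing, neither side unilaterally switches; that is exactly why the text argues some outside pressure on beliefs is needed.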

If this makes sense, so far, then the next step is to agree that the tough love would first be applied to individual Republicans, for the purpose of demonstration.  And that demonstration must make other Republicans serving in government fearful that they'll soon be the recipients of the same sort of tough love.  Idealists, and I normally consider myself in that category, would surely prefer to appeal to the better angels of these government officials.  But that clearly won't work.  Even before Trump, Republicans in Congress largely acted as in the low level equilibrium.  The piece by Evan Bayh linked above argues as much.  So there needs to be pressure applied, not just to get out from under Trump's thumb, but also to stop the old form of obstruction and replace it with a collegial opposition party. 

I'm sure the reader wants to know what sort of actual punishment will serve as this tough love.  I want to know too.  The ideal punishment would achieve the goals but then not be recklessly used by future generations, when there is no need for it.  I'm not fully conversant with the powers the Commander in Chief can wield, but this much I would be willing to offer.  Suppose the then President Biden aggressively exercises some power and perhaps abuses power to a certain extent.  (Following Trump would seem to give some license to doing that.)  Congress would then want to curb that power and Biden would agree to legislation that does that.  Passing the legislation would rein in future abuses, one would hope.  Until the legislation is passed, the power would be used quite aggressively. 

By means of illustration, suppose the President asserts that for National Security reasons, Senator X must be detained indefinitely and while in detention the Senator will be incommunicado, even with family. Likewise, this will happen for Representative Y, Federal Judge Z, and even one of the Supreme Court Justices appointed under Trump.  President Biden will say that others who might potentially receive such punishment can avoid it either by resigning or, in the case of elected officials, by stopping the obstruction and playing the game as it should be played. Then, let this sink in for a while, become the new normal for a certain amount of time, and let the frustration on the Republican side build, while the cooperative behavior in Congress also increases. 

* * * * *

The Republicans have treated the Democrats as if they're wimps.  And perhaps they have been wimps in the past, while operating under more normal circumstances.  But we're in a crisis now and we're bordering on a coup d'etat that has weakened us a great deal.  A wounded animal, facing the possibility of death, fights aggressively for survival.  That's what's needed here.

Friday, November 06, 2020

Repricing - in Higher Education and in the Economy as a Whole

With everyone edgy and tired from the pandemic, people are hoping desperately for a return to normal.  But if you are a planner/architect for Higher Education, some other sector of society, or society as a whole, you will invariably ask - what will normal look like when the pandemic is over?  This will be followed up by the question - how might that normal be redesigned so it is better than things were before the pandemic? I'm going to do some of that sort of thinking in this post.  But first we need to get at how the pandemic broke things and what should be done to fix  those things.  Coping during the pandemic has produced a variety of patchwork solutions.  I consider them as band-aids, in that they may have allowed universities to operate, but in a rather constrained way that is not sustainable if and when an effective vaccine for Covid is found.   

It is also evident that some things will change permanently, not because we design for those changes, but rather because there have been some lessons learned from coping with the pandemic, where it is individuals who make their own adjustments.  This includes the likely much greater reliance on working remotely and also that much instruction will be at a distance.  There is the further likelihood of substantial population migration - away from large cities - because of the fear of future pandemics, and away from areas that present severe environmental risk as a consequence of global warming.  So, for example, we currently have large populations living in drought areas that are exposed to a risk of massive fires.  And people living in certain coastal areas are at risk of severe hurricanes.  If property losses from such environmental events become uninsurable and/or the risk of loss of life becomes too great, people will move to where things are safer.  I'm noting this here because surely some of it will happen, but the extent to which it does happen is much harder to predict. 

Now let's get to the economics.  In as simple a way as possible, most people perceive a college degree as a passport to a good job and thereafter to a comfortable life (income-wise).  However, the pandemic has disrupted the labor market and good jobs are now much harder to come by.  In this, the current situation shares the general pattern which followed the burst of the housing bubble that brought on the Great Recession in 2008.  However, that crisis had purely financial causes and the government enacted reasonably effective counter measures to right the ship.  The labor market for college students and new graduates rebounded, though it took several years for there to be a full recovery that way.  Now, of course, while we are again in recession, we also have a botched public policy in managing the pandemic and that adds another dimension of uncertainty to the labor market softness.  Who knows when the pandemic will end?  Who knows how bad things will get until we have an effective vaccine?   It seems that the recession will continue and possibly get worse until we have good answers to these questions.  

Under the circumstances, the willingness to pay for a college education, viewed purely as a financial decision, has declined.  Further, as much instruction has moved online, there is a perception that the quality of that education has dropped as a consequence.  This means fewer students will enroll or continue to be enrolled.  In response, many colleges have had to reduce their tuition to retain the students they expected to have on board. The Chronicle of Higher Education reports that most colleges expect declines in tuition revenue.  That captures the situation at present.  I would term this a short-run view.  The rest of this piece takes a long-run view.  In so doing, it is a sequel to my earlier post, America's Caste System - My Take, which argued that we are income-stratified in this country in a way that is deleterious to the overall well being of the country.  In the next section I will review briefly some of the history on that and then make a connection between income and wealth inequality and the role college plays in society. 

* * * * *

Here we will consider a variety of looks at then and now from the point of view of both income and wealth, then repeat the exercise on tuition and expenditure at the university (where I will give an incomplete view since I don't have data to give a fuller picture).   In my earlier piece there was a link to graphs provided by the Pew Research Center, which show income distribution by race for a base year, 1970, and then again for something closer to the present, 2016.  The data are inflation controlled and consider per capita household income, to control for size of household.  There is some skew in the data in both years, as there is a large upper tail.  But the results are reasonably bunched around the mode in 1970, reflective of a middle class society.  The results are much more spread in 2016, with a larger upper tail and less of the population near the mode.   A comparison of the income distributions between then and now graphically supports the notion referred to as the hollowing out of the middle class.  

A contention I want to make in this piece is that a middle class society, even when there is some spread in incomes, helps to make everyone in the country feel we're all in this together.  In contrast, a society with stratified incomes makes us feel we're more part of the stratum (or caste) where we are located and then we are far less concerned with everyone else.  This is on account of what behavioral economists refer to as the availability heuristic, where people's worldview reflects what comes readily to mind and doesn't account for information that might be attained but not immediately.  If the vast majority of interactions are within our own stratum, we then think the rest of the world is also that way or we fail to think about the rest of the world at all.

While the Pew graphs are very interesting, there are some limits to the information provided that we should be aware of.  First, the data are right censored at $200,000 per person. What is shown roughly coincides with the lower 99% of the income distribution.  One would also like to know what is happening with the upper 1% of the distribution.  Also, the information is about then and now, but not about what happened in between.  It would be good to be aware of the entire time path within that duration.  Then, too, while I did mention the very soft labor market now, the stock market hasn't fared nearly as poorly, which is largely due to injections from the Federal Reserve to shore up the market. Thus, it would be useful to look at the wealth distribution and how it varies over time to complement the information in the Pew graphs about income distribution.  

I've got some of that information here with time series of wealth by category within the wealth distribution.  These data come from the Distributional Financial Accounts maintained by the Federal Reserve. They provide tools to customize the time interval considered, though the farthest back you can go is 1989, after the Reagan Presidency had ended.  It is interesting to consider the categories they use to cluster the information.  There is the top 1% (the rich).  Next is the 90-99th percentile (elsewhere I've referred to this group as the professional class).   Then there is the 50-89th percentile, which might count as middle class.  And there is the rest, below the 50th percentile.  The first three groups have (very) roughly the same amount of wealth, though of course wealth per household declines going from one group to the next.  The last category, with half the households, has very little wealth in aggregate, so almost no wealth per household.  This shows substantial wealth inequality in the economy overall, without needing to get technical and talk about Gini coefficients and the like.  The following table, which shows the wealth shares at the start and the end of the time interval, makes it clear that wealth inequality has been increasing over time.   


Total wealth in 1989:Q3 was $20.82 trillion according to the Fed data, with 92.83 million households, giving a mean wealth per household of $224.28 thousand.  (The number of households is given here.)   Getting much closer to now, total wealth in 2020:Q2 was $112.05 trillion and with a guesstimate of 130 million households (the available data only go to 2019) we get that mean wealth per household is $861.92 thousand.  If you are like me, you'll be surprised by how high that number is.  Yet so many people have hardly any wealth at all.  In other words, we are a very rich country, but are doing quite poorly by the criteria given in Justice as Fairness, by John Rawls. 
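The mean-wealth arithmetic above is easy to reproduce.  The sketch below uses only the figures quoted in this paragraph; note that the 130 million household count for 2020 is the guesstimate from the text, since the available household data only go to 2019.

```python
# Back-of-the-envelope mean wealth per household, reproducing the
# figures quoted in the text (Fed Distributional Financial Accounts
# totals; household counts as stated, with 130 million a guesstimate).
def mean_wealth_thousands(total_wealth_trillions, households_millions):
    """Mean wealth per household, in thousands of dollars."""
    return total_wealth_trillions * 1e12 / (households_millions * 1e6) / 1e3

w_1989 = mean_wealth_thousands(20.82, 92.83)   # 1989:Q3
w_2020 = mean_wealth_thousands(112.05, 130.0)  # 2020:Q2 (guesstimated households)
print(round(w_1989, 2))  # ~224.28
print(round(w_2020, 2))  # ~861.92
```

A mean this high alongside a bottom half with almost no wealth is the point: the mean says little about the typical household when the distribution is this skewed.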

Now I'd like to give a few caveats to consider about income distribution and wealth distribution.  One regards geographical differences.  We know that cost of living is higher in big cities than it is in smaller communities.  A big component of that is cost of housing.  Further, one might reasonably expect that housing values will rise strongly over time in gentrified urban areas, while in contrast, in Champaign, Illinois, where I live, housing values tend to be flat over time. Those who work in big cities need to be paid more, to compensate for that higher cost of living.  So, one might want to compute "real income" and "real wealth" to normalize for purchasing power.  No attempt has been made to provide such a normalization here.  Another caveat concerns income and wealth variation over the life cycle.  It is natural to expect income to rise over time as the person accumulates human capital while working.  Also, some payment schemes include rewards for seniority or, alternatively stated, some early compensation is deferred till later.  This way the employer can reduce the likelihood of turnover among those employees who have a high amount of firm-specific human capital (so are hard to replace).  

Early in the life cycle the person may borrow to enhance consumption or use some income to pay down debts (perhaps on college student loans).  So there may be little to no saving early and if the person is renting rather than owning the place where they live, no saving via that route as well.  Saving and wealth accumulation happens later in the life cycle. So, the age of the household members will correlate with household income and household wealth, at least till retirement.  There is then a tendency to decumulate wealth after retirement.

We also need to account for inflation.  While the Pew graphs controlled for inflation, the Federal Reserve data do not.  A quick and dirty way to account for inflation is to use an inflation calculator.  Based on the results found there, between 1989 and 2020 the cumulative inflation rate was 110%.  Knowing the inflation rate helps us connect growth in income per capita (when the economy is at full employment this is attributed to growth in productivity) to growth in wealth.  The graph below shows GDP per capita over the relevant time interval and comes from data provided here.   I will have more to say about inflation later in this essay.
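The quick-and-dirty adjustment described above can be sketched as follows.  With cumulative inflation of 110%, a dollar in 1989 buys what $2.10 buys in 2020; applying that deflator to the mean-wealth figures computed earlier in the essay (my own back-of-the-envelope application, not the Fed's) shows that real mean wealth per household still grew substantially.

```python
# Deflating a 2020 dollar amount back to 1989 purchasing power, using
# the 110% cumulative inflation figure quoted in the text.
CUMULATIVE_INFLATION = 1.10  # 110% between 1989 and 2020, per the calculator cited

def to_1989_dollars(amount_2020):
    """Convert a 2020 dollar amount into 1989 dollars."""
    return amount_2020 / (1 + CUMULATIVE_INFLATION)

# Mean wealth per household in 2020 was roughly $861.92 thousand (from the
# Fed figures discussed above); in 1989 dollars that is about $410 thousand,
# still well above the 1989 mean of roughly $224.28 thousand.
real_mean_wealth_2020 = to_1989_dollars(861.92)
print(round(real_mean_wealth_2020, 2))  # ~410.44
```

So even after the inflation adjustment, mean wealth per household nearly doubled in real terms over the interval, which makes the stagnation of the bottom half all the more striking.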


Let me conclude these caveats with a mention of possible mismeasurement, of both income and wealth.  For example, consider this essay that you are reading.  It is being rendered on a blog available to anyone who can get online.  In that sense, it is a pure public good.  If those people who read it get some value from the experience, then the essay has asset value (others might get the reader benefit as well), but that value is not being measured.  Other examples are assets such as works of art that are not traded frequently, but are sometimes appraised (for insurance purposes or other reasons).  Appraisal in the absence of market transactions is more art than science.  It is quite easy to overvalue an asset in this manner.  In this sense income data have a firmer foundation than wealth data, because income is measured from actual transactions.  For wealth that happens only some of the time.  Yet some spending, for example on antivirus software for your computer, doesn't raise welfare at all, at least compared to the case where the threat of viruses is nil.  So GDP spending doesn't distinguish preventing threats to people in the economy from providing benefit for those people.  In that sense it too is an imperfect measure.

It is useful to have these facts and caveats in mind about the overall economy when considering Higher Education and how it prices.  While I don't have data on institutions other than my own, the University of Illinois at Urbana-Champaign, I want to suggest that this information is typical of public higher education in general, research universities like UIUC and other universities too.  But, particularly on what faculty were paid then and are paid now, there is a need for much more data to confirm the ideas suggested here.  

At the link is an Excel Workbook with three different worksheets.  Each shows time series data about UIUC.  The third worksheet might be looked at first.  (You can view it in preview mode, but you will have to increase the view size to make sense of what you are looking at.)  It gives time series information about base tuition and other costs that students incur, but not about college-specific surcharges, which didn't exist in the "then" part of the table yet are an important factor to consider in the "now" part of the table.  Also, these are not inflation adjusted data.  To make better sense of this information, note that through the mid 1990s, U.S. News and World Report considered the U of I a best buy.  About 10 years later, or so, the U of I became the highest priced public university in the Big Ten.  There is a complication here to consider that is not in the table: the fraction of in-state students versus out-of-state students.  When I started, in 1980, fewer than 8% of undergraduates were from out of state.  Things changed rather dramatically some years later, when there were many international undergraduate students, so much so that five and a half years ago Inside Higher Ed ran a piece called, The University of China at Illinois.  (International tuition is roughly three times in-state tuition.)

The first worksheet speaks to a different question - who is doing the teaching?  When I started at Illinois virtually all faculty in the Economics Department were tenured or on the tenure track.  And then the teaching load was two courses per semester, with the expectation that one would be a graduate class and the other would be an undergraduate class.  (Faculty had a preference for graduate teaching as they could then tie the teaching to their research.)  But that became an expensive solution.  So at the undergraduate level, a new category of instructor came into being - specialized faculty.  These faculty were allocated to teaching 100%.  The table shows that they are increasingly performing the undergraduate teaching.  I will also add here that at least in Economics, the teaching load for tenured faculty dropped (after I retired).  It has since become three courses a year, and in many cases all the courses are at the grad level.  This was a response to the labor market for Econ faculty, not something that originated here.  Whether that will be sustained after the pandemic, or needs to be adjusted based on available resources, I don't know.  Here I want to take it as an indicator of how teaching loads are determined across higher ed, nothing more.  

The middle worksheet gives salary information for assistant professors and for specialized faculty in the Economics Department.  It would be very good to have such data for all departments across campus, and then for all universities in the Big Ten, if not for all research universities nationally.  Two thoughts should be conveyed here.  First, specialized faculty are paid less than assistant professors.  No surprise there.  Second, I posted my salary from 1980, when I was hired, both the 9-month salary and the 11-month salary as well.  In general, tenure track faculty have a 9-month salary, which allows them to earn income from other sources in the summer, such as a grant or summer teaching.  If summer money is paid in addition, then earnings are higher in the summer.  Part of this arrangement is to encourage faculty to be entrepreneurial, to find sources for summer funding.  But a different part is to give the float to the State of Illinois.  The 9-month salary itself was paid over 12 months, so the state could defer payment of some of the salary to the following summer.  Getting back to the information on the worksheet, junior faculty salaries were substantially higher in real terms in 2018 than they were in 1980, and even specialized faculty salary in Economics in 2018 was higher than tenure track salary in 1980, after controlling for inflation.  

I don't have data for the Economics department for faculty other than me in 1980 (I was the only new assistant professor that year), nor do I have data for faculty in other departments at that time to compare to now.  So as a proxy for that I'm going to wave my hands and suggest the pattern was the same in many other social science departments.  In STEM departments, however, the pattern might have been different, reflected not so much in salaries as in the number of people who were hired as part of the startup package for a new tenure-track assistant professor.  And the startup package might include a variety of non-personnel costs, which would not have been included then but are relevant now.

The first worksheet shows, in general, an increased reliance on specialized faculty in undergraduate instruction over time and a gradual increase in undergraduate IUs (the product of credit hours and enrollments in a course).  To the extent that specialized faculty are substituting for graduate assistants, it is not obvious that this change is being done for cost reasons.  But when the substitution is for regular faculty, that is almost surely being done to economize on instructional cost.  We should ask what the impact is on quality of instruction.  I will comment on that briefly in the next section.  I want to wind up this section by observing that the increase in IUs over time reflects both that the undergraduate student population had been increasing (these data are from before the pandemic) and it may be that students are taking more courses recently, as a way to graduate earlier and save on tuition expense that way.  

There is a factor that cuts the other way, which I know from my former students is a serious matter but which I have no data to indicate how extensive a matter it is.  Courses have capacity limits.  Students get on a wait list for a course that is already fully subscribed, hoping they will still be able to take the course after some currently enrolled students drop it. But in some cases the excess demand is so great that they can't take the course that semester.  Then they have to put off taking it till sometime in the future or abandon their plan to take that course entirely.  This can delay the time of graduation.  I'm guessing this happens much more at public universities than at private ones, especially public universities where enrollments have been rising.  Conceivably, online offerings can mitigate the problem, as classroom space limits wouldn't be an issue then. 

* * * * *

School has always had the potential to address two different purposes.  One is to promote learning for learning's sake - help to satisfy the student's curiosity, give the student modes for self-expression, and let the student learn to like these things so as to want more of them.  This is the idealistic view of school.  The other view is school as preparation for what comes next.  That sort of preparation comes in two different flavors.  In economics we refer to the first flavor as acquiring human capital and to the second flavor as signaling, such as indicating to a recruiter that the student is an attractive candidate for a job or internship.  While in some areas human capital can be observed or demonstrated (a company considering a student for a programming job might give the student a programming task and see what code the student produces), in many areas human capital is really an experience good, which the potential future employer can only learn about by hiring the student and seeing how the then employee performs on the job.  When human capital is an experience good, the signaling function takes on added prominence.

It is conceivable that the same behaviors at school can be done for both purposes at once. 

You've achieved success in your field when you don't know whether what you're doing is work or play.
Warren Beatty

Much of school was like this for me, though the first quarter of graduate school definitely was a grind.  However, I think it fair to say that nowadays many students rarely if ever find learning for learning's sake by the time they reach college.  For these students, school becomes a choice between acquiring human capital and producing a good resume (signaling).  Now I want to make a point that I haven't seen others make before, though it is pretty obvious in this context.  As income/wealth inequality has increased in the society overall, the pecuniary rewards from producing a good signal have gotten greater.  Consequently, the conscientious student is apt to spend more effort on producing a signal and less on generating human capital that can't be readily observed.  For such a student, school has become a form of economic rent seeking.  

Such behavior is rational in an economic sense.  But it is alienating for the students as it does not address the student's emotional needs and needs for personal growth.  Those who get through the experience, land a good job and find themselves on a good career path, are apt to subscribe to the Just World Hypothesis and thereby feel entitled to be extremely selfish and consequently are unaware of the hardships that others may be facing.  If it were known that this is happening with a significant fraction of a university's recent graduates, would that be an object of pride for the university or something the university feels it should remedy? 

We should also consider the consequences on students still in college and, indeed, those still in high school. Some may become excessively cynical and quite bitter.  They've been playing a game where essentially all academic actions are instrumental for a high income future; none of these actions produce emotional or intellectual rewards in the present.  And they are interacting socially with other students who are in the same boat.  Those interactions may be painful, even destructive.  The mental health crisis among undergraduates surely is somehow tied to this, and arose before the pandemic became our preoccupation.  I view that crisis as a canary in a coalmine.  Things need to change so more of the college experience nurtures the student in the here and now. But this can only happen if the college degree as passport to the good life depreciates in value.

Then we should look at how this behavior impacts instruction.  About a decade ago I wrote a post called Excise the Textbook, which gave an argument for abandoning the "transmission model of instruction" to instead embrace inquiry methods, and with that select readings from a variety of sources that support the inquiry.  But the transmission model remains.  In this model, the textbook serves as oracle, the instructor relays the wisdom from the oracle, and the students absorb this wisdom verbatim.  The exams measure the ability to regurgitate this wisdom.  If students get good grades on the exams (which then helps them to produce a good signal for their resume) they are satisfied and indicate this on their course evaluation questionnaires.  As specialized faculty don't have tenure and are concerned for their own job security, they need to receive decent course evaluations.  The approach is thus self-sustaining.  But it is mainly a charade.  For students who perceive this and are self-critical, it contributes to their alienation.

The story above pertains to all students, regardless of family income.  It is a story that should make sense to the students themselves and to their instructors.  But to outsiders, a different story has captured the attention, concerning college student loans.  So I'd like to incorporate that into what's already been said here. 

One consideration which I'm aware of, but don't have sufficient data to consider all the implications, is that nowadays many students follow a 2 + 2 model in college.  They start in community college for the first two years, live at home then, and only after they have earned their Associate's Degree do they transfer to the university for their junior and senior years in college.  This path clearly lowers the cost of going to college for the student and the student's family.  Is it mostly low income students who follow this path?  I don't know.  Do these students perform as well when in their junior and senior years as other students who got need based scholarships for the first two years?  Again, I don't know.  What seems evident is that students need a group of friends to socialize with to feel comfortable in college.  It may take a while for that to develop and/or students rely on friends from high school for this. The transfer student may be at a disadvantage this way.  

Another consideration is the general out migration from the humanities as a major, which surely has been influenced by the economic issues considered in this essay, where the perception is that a degree in a humanities field is poorer preparation for a career (or at least for an entry level position) than a degree in a STEM or business discipline.  Does this choice correlate with family income?  I don't know.  More generally, one might expect experimentation via taking elective courses outside the major to be correlated with family income, with the idea that students need to learn about their own interests as much as to prepare themselves for life after graduation.  But such experimentation is a luxury, and possibly could mean longer time to degree.  However, having AP credit as an incoming first-year student surely is correlated with income, and that credit would otherwise speed up time to degree, so the experimentation isn't as costly for such students.

Then, too, a related issue to family income is whether previous generations of the family also attended college or if the student is the first in the family to attend college.  Expectations of family members on the student are quite different in the two cases.  First generation students find the first year of college especially challenging and dropout rates are higher for them as a consequence.   I believe that attending a year of college (or less) is of no benefit in the labor market, compared with not attending at all.  

Let me get to the purely financial issues with college student loans.  If the family covers the loan, rather than the student, so the student doesn't have a pile of debt upon graduation, then this must come from family savings that were intended for the parents' retirement or some other rainy day emergency.  In this case the cost of college is interfering with the life cycle view about wealth accumulation and has disrupted the middle class style of life as a consequence.  If the student after graduation becomes a big success, the parents can be repaid, and it works out for everyone then.  But this is far from a certainty.  Counting on it then begins to look like a Ponzi scheme.  

If the student bears all the debt, the student then is under enormous financial pressure from the get-go upon entering the labor market.  The issue ahead of time, when the student is just beginning college, is who should bear the risk regarding how successful the student will be after graduation.  In general, it is efficient to shift risk away from people who can't diversify the risk to others who can.  This is the principle behind insurance markets.  The "Free College" movement is on the right track on this issue about who should bear the risk.  Yet the quality of the offering that is free should be something to care about.  Is free and yet high quality possible?  Or is that a pipe dream?  In a prior post, I wrote the following, which I believe is a good way to think about the issue. 

......A similar argument could be made that the Federal government pay for college and then have the graduate face higher taxes over a period of time.  If the student could commit to living within the state after graduation, it could be state government that does this.  The difference between this approach and the current approach with student loans is that the amount actually paid back would vary with future income, not just with the amount of tuition paid.  This strikes me as the right way to manage the issue, but might never happen because there is no champion to advocate for it and it could end up a political football.
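The quoted scheme can be sketched numerically.  Every figure below (loan balance, interest rate, income threshold, surtax rate, sample incomes) is hypothetical, chosen only to show how an income-contingent payment scales with earnings while a conventional loan payment does not.

```python
# Compare a conventional amortized student loan payment with an
# income-contingent payment of the sort described above.  All
# parameter values are hypothetical.

def fixed_loan_payment(balance, annual_rate, years):
    """Standard amortized annual payment; independent of income."""
    r = annual_rate
    return balance * r / (1 - (1 + r) ** -years)

def income_contingent_payment(income, threshold=30_000, surtax=0.05):
    """A surtax applied only to income above a threshold."""
    return max(0.0, (income - threshold) * surtax)

# Two graduates with the same $40,000 debt but different earnings.
fixed = fixed_loan_payment(40_000, 0.05, 10)
for income in (35_000, 120_000):
    print(income, round(fixed), round(income_contingent_payment(income)))
```

The low earner owes the same roughly $5,200 a year under the conventional loan but only a few hundred under the income-contingent scheme; the high earner pays back more, which is exactly the risk shifting the essay argues for.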

* * * * *

Let's turn now to rent seeking by elite colleges and universities.  Some of this, of course, happens in pursuit of alumni contributions, particularly very wealthy potential donors who make the big gift, to fund a new building for example.  I don't want to minimize the importance of this form of rent seeking.  Indeed, executive officers - department heads, deans, and provosts - are evaluated at least in part on how well they do as fundraisers.  So the activity is built into the business model of the university.  But here I want to focus on admissions and tuition, as that best fits in with what was discussed previously. 

Let's begin by giving a novel way to think about how tuition is determined at elite private universities and liberal arts colleges.  If the degree offers a lifetime economic rent to the graduate, then tuition serves as partial rent extraction.  The size of the entering class is determined to maximize the economic rents of the graduates, thereby maximizing the total tuition that can be collected by the academic institution.  The calculation is very much like the theory of monopoly pricing, which we teach in intermediate microeconomics.  As is well understood, there is deadweight loss as a consequence of such pricing - the volume is too low compared to the socially efficient volume.  So too, with admission to these elite institutions, as I point out in the piece linked above.  But there is an added wrinkle with admissions that isn't in the basic monopoly model.  The lifetime economic rent obtained by the graduate likely varies with characteristics of the students.  There is then an incentive to admit those students who have the highest expected economic rents.  
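For readers who want the intermediate-micro mechanics spelled out, here is a minimal sketch of that monopoly calculation with linear demand, where "price" stands in for the lifetime rent the marginal admit would pay as tuition.  The parameter values are illustrative, not estimates of any actual market.

```python
# Monopoly pricing with linear inverse demand P = a - b*Q and
# constant marginal cost c.  Illustrative numbers only.
a, b, c = 100.0, 1.0, 20.0

q_efficient = (a - c) / b        # efficient scale: price = marginal cost
q_monopoly = (a - c) / (2 * b)   # monopoly scale: MR = a - 2*b*Q = c
p_monopoly = a - b * q_monopoly  # price charged at the monopoly quantity

# Deadweight loss: the triangle between demand and marginal cost
# over the seats the institution withholds.
dwl = 0.5 * (q_efficient - q_monopoly) * (p_monopoly - c)

print(q_efficient, q_monopoly, p_monopoly, dwl)  # → 80.0 40.0 60.0 800.0
```

With linear demand the monopolist serves exactly half the efficient volume, which is the sense in which admissions are "too low"; the student-specific variation in rents then adds a selection margin on top of this baseline.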

One might reasonably conjecture that this correlates with family wealth, perhaps giving a reason for legacy admits.  This reason might not be attractive to the reader.  Indeed, this explanation of tuition setting and size of the entering class might seem excessively cynical.  And surely there will be notable exceptions, students who get need-based scholarships and who have obvious rare talents or who are admitted to diversify the entering class.  Imperfect explanations still can have value if they explain an important chunk of what is going on.  I will get to the evidence in a bit.  First, let's turn to public universities and in-state tuition.

The notion of in-state tuition is a holdover from those days when state government paid the bulk of the costs at public universities.  It still may make sense at regional public universities, as a way to make college affordable.  It is far less obvious that it continues to make sense at the flagship universities of major state systems.  (Those are typically the campuses with big-time football teams.)  If you ask students at Illinois what their next best choice was when they signed the acceptance letter to attend Illinois, they will likely say either attending another public university - Wisconsin, Michigan, and Indiana are the most popular choices - and paying out-of-state tuition there, or attending a private university - Northwestern, U Chicago, and Notre Dame probably top the list.  Only very rarely might they identify another university in state - Illinois State, SIU, etc.  So with in-state tuition most of the students are getting a huge financial surplus.

This leads to the following question.  How is in-state tuition determined?  It is a political animal, as it is set by the Board of Trustees of the University.  I don't know this for a fact, but I suspect it is set at a fraction of the tuition at these next best alternatives.  When I started back in 1980, I believe Illinois tuition was about 1/6 of Northwestern tuition.  Nowadays, especially if you include the fees and college-specific surcharges, the fraction is more like 1/4 or 1/3.  It's still a great deal financially, compared to the alternatives.  So, the pricing of the alternatives is like an umbrella under which in-state tuition is determined.  I do want to note that the university increased undergraduate enrollments, mainly with international students who pay much higher tuition, to bring in more revenue via the tuition route.  The pandemic coupled with immigration restrictions imposed by the Trump administration may make this much harder to do in the future.  If international students become hard to recruit, in-state tuition will have to rise to offset the revenue loss.  How that will play out, I don't know.  I'm not so concerned with the immediate future in this piece. 

Let's consider the evidence.  The New York Times had a feature a few years ago that took the highlights from an NBER study, Mobility Report Cards: The Role of Colleges in Intergenerational Mobility.  The results show that students from the bottom half of the income distribution are less likely as a group to attend an elite college than students from the top 1% of the distribution.  Yet the earnings of such students don't vary that much after graduation.  (The paper focused on a cohort of former students who were 30-32 at the time the study was conducted.  Conceivably, earnings could have varied more as these former students got older.)  For my teaching I did the report card for my campus that typifies the results.  (There is some gobbledygook in the html on this page that makes it a little hard to read.)  

A different representation is for my class in 2017.  I plotted the home addresses of students as they appeared in Banner (town, not street address) and did this for all the domestic students in the class, so there is a flag for each address in Google Maps.  (There were two or three international students who don't appear in this representation.)  If you zoom in around Chicago you can find most of the flags and see that Chicago proper is under represented.  Most of the students are from the suburbs, north and west of Chicago.  If you zoom out you can see the few flags that are from outside of Illinois, but still in the U.S.  You can also see there are no flags in Illinois west of Peoria nor south of Champaign.  Of course, this was just one smallish upper level undergraduate class.  It would be good to have such a representation for the university as a whole.  My guess is that the results would be similar, but at present that is just a guess.  

While there are some higher education institutions that seem a pathway to high incomes for students from low income families (Cal State schools and CUNY schools were cited in this regard) it appears that much of elite higher education reinforces the current income distribution.  Attendance in such colleges, then, can be considered part of the bequest well off parents make for their children, while the parents are still alive. 

While the parental motive is understandable, one wonders whether it gives them tunnel vision regarding what is happening overall. If so, this tunnel vision of well off parents seems to be what drives the elite universities along their paths regarding admissions and tuition. 

* * * * *

The high concentration of wealth might not be so bad, if the uber rich carried with them a modern day equivalent of noblesse oblige.  But we're not seeing that.  The Republicans' intense desire to do away with Obamacare, a notable example, relates to the need for insurance premiums to be subsidized, with the subsidies paid by high income people.  The tax cut that happened under Trump, which mainly benefited the rich, was a way to solidify this group as Trump supporters.  I have to say, on one level I don't get it.  If you are already very rich, why do you care about getting a tax cut?  And, of course, Warren Buffett has said those taxes shouldn't have been cut, so I'm not alone in my not getting it.  But I do accept that it is current reality - very wealthy Republican supporters don't like paying taxes.  

I found this piece an interesting read, The President Has Made Selfishness Our National Credo. What I wanted to know after reading is whether those very wealthy Republican supporters are the same that way - entirely selfish, the Libertarian facade enabling a complete lack of social responsibility.  That's possible, but I've found at least a little evidence that it might be otherwise, which I wrote about in a post called Mattering Bias.  In that post I bring up the example of The Giving Pledge, where wealthy people make a commitment to give away a substantial part of their wealth in a philanthropic effort.  Among the people who have taken the pledge, there are well known conservatives who are strongly anti-tax.  How can that be?  In that post I conjectured that wealthy people care how their money is spent and want to know it makes a difference.  In contrast, you and I, when we pay taxes or when we give to charity, contribute such a small amount that it is merely a drop in the bucket.  But, I want to note the mattering bias argument is just a conjecture.  A very selfish person might take The Giving Pledge merely as a cover, to avoid unwanted criticism about the person's selfishness.  I really don't know which is the right explanation. 

This gets me back to the distribution of wealth itself and a desire for some substantial wealth redistribution from the rich to the poor, to make the system seem fairer and to try to restore E Pluribus Unum as the National Motto.  The question is how to get this done, via an increase in taxes on the wealthy or some other way.  If Biden wins, he has promised a tax increase on the top 1% but no change in taxes on the bottom 99%, and if the Democrats also take back the Senate, then this should happen.  But we will still be in a pandemic economy for the indefinite future and managing the near term may not be a good guide for how to do things if and when we do return to normal because essentially everyone in the population has immunity to Covid (from a vaccine that is effective and widely distributed).  Now we can discuss how these accommodations might be done so that if and when the normal takes hold, we're ready to take action.  

The reason to not rely on taxes entirely to do all the heavy lifting regarding redistribution comes from considering our recent history, where the Democrats and Republicans have rotated through who is in control and where there have also been episodes of divided government, which as of late are apt to produce gridlock rather than reasonable compromise. So, instead, I'm going to imagine that such redistribution happens sector by sector in the economy and happens slowly over time even within a single sector.  I will make the case that Higher Education is the sector which should go first and take a leadership position in doing so. 

* * * * *

To consider the issues with income redistribution in Higher Education, I will first look at my own campus as means of illustration.  About 15 years ago the campus began a modest program called Illinois Promise, offering free tuition and other support to students whose families lived below the Poverty Line.  The program did succeed in getting low income students through to a degree from Illinois, but its scale was such as to be largely invisible to those not directly involved with it.  The small scale is the reason the program could exist without explicitly identifying revenue sources to support it and/or internal budget cuts that would be needed to finance it. 

More recently the campus has introduced a new program called Illinois Commitment, where the family income bar is much higher than it is in Illinois Promise.  The income bar is the median family income for the State of Illinois.  I can only guess why the campus started this new program.  I suspect it was a response to Bernie Sanders' appeal in 2016 with his campaign of free tuition for all, and/or New York State embracing a broad free tuition program.  Do note that there are various stipulations which must be satisfied for the student to qualify for Illinois Commitment.  Nevertheless, you'd think that with a higher income bar many more students would be admitted under the program.  So, as before, we should ask how the campus can afford to do that.  Here are a few different possible answers.  

One is that, in fact, not that many students would be admitted or, if admitted, would decide not to attend Illinois.  For the former, I wonder what the data are about such students in years prior to the program.  Are such students less likely to be admitted than students from wealthier families (perhaps because their standardized test scores are lower)?  I believe that county of residence within Illinois is a factor taken into account for admission.  But still, my impression is that most of the kids come from the suburbs of Chicago and for the vast majority of those, family income is well above this bar.  For the latter, I'm reminded of the story where Larry Bird first attended Indiana University to play basketball for Bob Knight. Bird soon transferred to Indiana State, which on pure basketball grounds didn't seem to make a lot of sense.  But Bird reported feeling uncomfortable at Indiana - his clothes weren't up to snuff and he just didn't fit in.  I believe that factor would be at play in this case as well.  Perhaps there are things the campus can do as an offset to make these students more comfortable, thus wanting to attend, though I'm not sure what those things would be. 

For the sake of argument, let's say all of that can be solved and a substantial number of students come in under the Illinois Commitment program.  How, then, will the campus pay for it?  I'm going to do some guesswork here to try to answer that question.  Five years ago, it seemed the campus was booming with international students, particularly students from China.  They were paying the international tuition rate, and the surplus from that over the in-state rate seemed a life-saver for U of I budgeting.  This particular solution was not available to many other campuses around the country, but Illinois has a very strong reputation in China, based primarily on the reputation of the College of Engineering.  Yet non-engineering students were coming in great numbers as well.  Given limited funding from the State of Illinois, their attendance was a godsend.

Then Trump became President and imposed various restrictions on people who wanted to come into the country.  This was a threat to the revenue stream.  The pandemic, which restricted international travel much further, was still another threat to the revenue stream.  If things return to normal once an effective vaccine is found and widely distributed, will that revenue stream return to what it was before?  I don't know, but for the sake of argument let's say it won't.  Then there needs to be another way to fund the Illinois Commitment program.  The most likely candidate is cost cutting.

To be clear, most campuses have already taken cost cutting measures as a result of the pandemic and are likely planning more for the very near future as tuition revenues are down.  Are these cuts temporary? The hypothetical scenario we're operating under is that things have returned to normal, but when this happens has not been specified.  If this normal happens before fall 2021, I'd hope the cuts were temporary, but it might take a while for them to be restored.  In this case, it's best to think of our hypothetical happening two or three years later, so there is ample time to understand what the new normal looks like. In the intervening years, perhaps the volume of Illinois Commitment students won't be so great or the campus taps into endowment funds to cover the costs in that time period.

So, the cost cutting I'm going to consider here is after normal operation has returned and any temporary cuts that occurred during the pandemic are restored.  Further, the anticipation is that cost cutting will be permanent, so the Illinois Commitment can be an ongoing program. In addition, let us suppose that the cost cutting happens via salary compression, so comparatively low wage employees are exempt and the act of cost cutting itself reduces income inequality on campus.  It doesn't make sense, at least to me, to aim to reduce income inequality socially by giving students from low income families free tuition, if that is financed by laying off low wage employees or cutting their pay.  

However, there is an important issue with salary compression as described in that post, when it is only one campus that does it.  It will induce some well-paid faculty and staff to find work elsewhere and then quit.  Further, these quits won't be randomly distributed.  They will be concentrated among the very best, as these are the ones who are apt to have strong external alternatives.  This is an example of Akerlof's Lemons Principle.  When star performers leave an academic department, that will hurt it in the national ratings.  Department members will be angry about this, and their anger will extend to the salary compression program and, indirectly, to the Illinois Commitment program.  Is there a way out of this?

I believe there is, thanks to Duesenberry's Relative Income Hypothesis, which says that people's consumption patterns and sense of well-being are set relative to a peer group that they designate.  Keeping up with the Joneses is in the spirit of this theory.  The Relative Income Hypothesis makes use of sociology as well as economics, which lends it realism in my view, but which made the theory less prominent among economists when it came out.  And now we have the first real reason for why elite Higher Education should take the lead in doing sector-specific income redistribution.  I'm confident that the hypothesis holds true for virtually all faculty as well as for most campus administrators and senior staff.  The turnover can be prevented if the same salary compression happens at all the peer campuses around the country.  

Would other campuses buy into this general idea - have a program of free tuition for students from low-income families financed by salary compression (and related expenditure cuts that don't impact the bottom of the wage distribution on campus)?  Or is this impossible to achieve because leadership at the most elite campuses feel they are so advantaged by the current approach (which above I called rent seeking) that they see little reason to abandon what they are currently doing for what still appears a vague social good?  This framing of the questions makes the situation look like a multi-dimensional Prisoner's Dilemma.  That was deliberate on my part.  There is a huge amount of theory about how one might obtain cooperation in the Prisoner's Dilemma, as well as a host of experimental evidence that supports this theory.  
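To make the Prisoner's Dilemma framing concrete, here is a minimal two-campus sketch.  The payoff numbers are entirely hypothetical - I've chosen them only to exhibit the structure of the dilemma, not to estimate any real magnitudes.

```python
# A hypothetical two-campus Prisoner's Dilemma.
# Strategies: "compress"   = fund free tuition via salary compression
#             "status_quo" = keep current salaries, skip the program
# Payoffs (row campus, column campus); numbers are illustrative only.
PAYOFFS = {
    ("compress",   "compress"):   (3, 3),  # both fund; no star turnover since peers match
    ("compress",   "status_quo"): (0, 4),  # lone compressor loses its stars to the other
    ("status_quo", "compress"):   (4, 0),
    ("status_quo", "status_quo"): (1, 1),  # inequality persists; social good unrealized
}

def best_response(opponent_move):
    """Return the move that maximizes a campus's own payoff,
    given what the other campus does."""
    return max(["compress", "status_quo"],
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Whatever the other campus does, each campus's own best response is
# status_quo, so (status_quo, status_quo) is the unique equilibrium,
# even though (compress, compress) would leave both better off.
for opp in ("compress", "status_quo"):
    print(f"opponent plays {opp} -> best response: {best_response(opp)}")
```

That is exactly the bind described above: the cooperative outcome dominates the equilibrium, so cooperation has to be sustained by something outside the one-shot payoffs - repeated interaction, shared norms, or the change in ethos discussed next.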

While I'm not up to speed on current developments about this theory, I suspect the following has not yet been well modeled and thought through.  This has to do with how our recent social experiences, when intense and profound, shape our attitudes about what is right and what activities we should undertake (impacting how we should play the game).  I can imagine that a common belief has emerged on these campuses to combat bald selfishness, by which I mean selfish behavior that simply ignores the consequences for others.  (Not wearing a mask while in a public space is the current leading example.)  Of course, some of this behavior has happened on their own campuses, with immature students who want to have some fun taking undue risks with their own health and the health of others, their friends included.  As educators we might ask: what would get these students to overcome their immaturity and make an evident sacrifice in their immediate well-being for the longer-term benefit of all?  It seems to me that campuses making clear financial sacrifices on behalf of needy students is one way to model this more mature behavior, and thus one reason why the ideas being presented here might be broadly embraced within elite higher education. 

* * * * *

Having given this sketch of the ideas, let us note that the devil is in the details and that until those are fully fleshed out, one should keep a skeptical view about whether implementation is possible.  This is especially true for campus leaders who have been operating with rent seeking as their primary driver.  It may be that for public relations reasons they need to market their campus as promoting upward mobility for students from low-income families.  But this has been more talking the talk than walking the walk.  

How might that change?  Thinking about the change as diffusion of an innovation brings to mind Malcolm Gladwell's book, The Tipping Point.  Who will serve as the Connectors? the Mavens?  Let me bring in some factors outside Gladwell's framework to reckon with, before returning to Gladwell.  First, coping with the pandemic is really straining folks inside campus administration.  It is more than full-time work and it is exhausting.  While people might relish extensive thinking and planning about the future as a break from the intensity of now, do they really have the time for it?  Second, the biggest uncertainty about the future after the pandemic has ended will surely be the state of the macro-economy.  One might expect to proceed differently depending on whether we soon return to a full employment economy or instead linger in a recession for quite some time.  Third, campus administrators must navigate three distinct constituencies: (a) faculty, staff, and students; (b) members of the governing board of the university; and (c) the general public.  It would be much easier to give the go-ahead to the innovation if it played well with each audience.  Last, there may be certain disciplines that wouldn't generate much faculty turnover at all from salary compression, so unit administrators in those disciplines might be much more willing to go along.  However, there will be other disciplines where turnover is likely unless the relative compensation among faculty at peer institutions is managed well, which must include some rules about the salaries under which new hires are brought in.  As not all new hires beyond entry-level positions result from disaffection at one's previous institution, these rules need to manage both situations equally well.  The same can be said for senior staff and administrative positions.

Let me return to Gladwell and speculate on how this will play out.  Those on campus who are very committed to social justice issues in society as a whole will surely realize that with persistent income inequality they will be stymied in achieving their goal.  They should readily embrace the ideas.  Nevertheless, that is far from sufficient for diffusion and, if my experience is typical of those who use social media, there is a tendency to preach to the choir there.  That will not do.  The Connectors will be those who can bring together people who have unlike views.  How they will do this I don't know.  We should expect that in some instances neither side is persuaded, and the participants leave the conversation with the same views they had entering it.  In other instances, we hope, their views will converge to the position that income redistribution within Higher Ed is a good thing to encourage.  The Mavens will study the arguments that bring this outcome and understand their common elements.  In other words, the Mavens will perform applied research and then publish about it in scholarly journals as well as in more popular outlets.  For their results to be found credible, the Mavens themselves must be from Higher Education.  This is another reason why Higher Education should take the lead in sector-specific income redistribution.

I don't claim to be a Maven, but my current conjecture about what they will learn is that the message must be a mixture of principle and some pragmatic thoughts about implementation.  On principle, my guess is that the ideas of Harry Boyte on teaching students to be good citizens would have appeal.  On pragmatics, my current thought is to market this as a slow-and-steady-wins-the-race approach - a little bit will be done in year one, a little bit more in year two, etc.  If snags emerge that require adjusting the approach, a gradual rollout will allow for that.  This must somehow be combined with the observation that Higher Education is but one sector of the economy.  For this effort to be truly successful, it must eventually be mimicked elsewhere in the economy.  (Healthcare might be the natural successor.)  So effective messaging needs to encourage that. 

Inflation Is Our Friend In This Case

We are all taught that inflation is evil, so saying otherwise sounds sacrilegious.  Let me note two ways inflation might be useful specifically for reducing income/wealth inequality.  First, low income people tend to be debtors while high income people tend to be creditors.  (Donald Trump is a notable exception.)  Most debt is not inflation indexed.  Inflation lowers the size of the debt in real terms.  This is a benefit to the debtor and a cost to the creditor.  For income/wealth redistribution, that's exactly what you want.  If the Federal Reserve can create inflation (which currently is against its mandate), that would help to reduce inequality in our society.  

Likewise, if you want to cut somebody's real pay during inflation, simply have them receive no pay increase, or only a modest increase that is less than the requisite cost of living adjustment.  Nominal pay doesn't have to be cut.  Way back when I was an assistant professor, and the campus was going through some hard times financially due to the recession that Paul Volcker induced (to beat inflation down), a question emerged about whether it was legal to cut the nominal pay of tenured faculty members.  I don't recall what answer we came up with then, but it conceivably could be an issue with the proposal here.  Without inflation, and if it is illegal to impose nominal cuts in pay, the only alternative left would be for the faculty to gift some of their salary to the university, as they do with other charitable giving that the university coordinates, though the charities themselves are not part of the university.  My suspicion is that under a gifting approach, many would opt out entirely and others would gift less than is really needed to make the program work.  Inflation gets around this issue.
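The two channels just described are simple real-versus-nominal arithmetic.  Here is a back-of-the-envelope sketch; the 3% inflation rate and the dollar figures are hypothetical, chosen only to show the mechanics.

```python
# Hypothetical figures: 3% annual inflation over a 5-year horizon.
inflation = 0.03
years = 5
deflator = (1 + inflation) ** years  # cumulative price-level growth

# Channel 1: inflation erodes the real value of non-indexed debt.
debt = 100_000.0                     # nominal debt, not inflation indexed
real_debt = debt / deflator          # debtor's burden shrinks in real terms

# Channel 2: a nominal pay freeze is a real pay cut.
salary = 150_000.0                   # nominal salary, frozen for 5 years
real_salary = salary / deflator      # purchasing power falls with no nominal cut

print(f"real value of $100,000 debt after {years} years:   ${real_debt:,.0f}")
print(f"real value of frozen $150,000 salary after {years} years: ${real_salary:,.0f}")
```

Both quantities lose roughly 14% of their real value over the five years, which is the point: the debtor and the compression program both benefit without any nominal amount ever being cut.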

Extensions Of The Main Idea

One might consider the main goal of the proposal here to change the ethos, so that social responsibility becomes the norm and people who are financially well off don't maintain such an insular and selfish mindset.  If that can happen in higher education, can it also happen in other sectors of the economy?  

The discussion about healthcare among liberals has focused on access.  What about the cost of providing that healthcare?  Having had a serious health episode back in 2018, I got exposed to the billing my health provider sent to my insurance company.  I thought it was outrageous (though I will admit this was just a gut reaction and not an object of study).  I surmise that as an economic system healthcare in the U.S. is broken, in the sense of way too much rent seeking.  Might healthcare pricing get renewed attention as a consequence of this proposal?  If so, it will have been worth it.

It is also understood that elsewhere in the private sector the share of labor in company costs has declined while CEO pay is enormous and the share of "profit" that goes to those who own stock has also increased.  Boosting labor's share back to 70%, its historic level, or perhaps even higher, would be a wonderful possibility.  The ethos would have to change dramatically for this to happen, but one can hope that is possible. 

Then, let's consider taxpayers' attitudes toward paying income tax, in particular, and whether wealthy taxpayers embrace progressive taxation or not.  Negative attitudes about taxation among those who are well off financially seem to me tied to bald selfishness.  I don't expect that John Birch Society types will change this way.  But many who otherwise claim to be Liberal and who are well off, in the top 10% of the income distribution, are nonetheless rather conservative when it comes to taxes.  (My conjecture is that they've been co-opted by the Republican approach to taxation.)  Might they come to see it as their social responsibility to pay more in taxes?  On this one, in particular, I would expect Democratic politicians to assume the role of educators.  As I noted earlier, for this election Biden's proposal leaves the taxes of those households making less than $400,000 unchanged.  Only the 1% would then see their taxes increase.  This probably made sense for the current election.  The difference between Center and Left may hinge on this issue and I don't want to challenge the thinking for the here and now.  However, if the ethos does change, I would imagine that proposals regarding income taxation in subsequent Presidential elections will also change and many in the professional class will favor having their own taxes raised.

If We Do Return To A Middle Class Society

People will want to know whether there are jobs that sustain this lifestyle.  I will note that I've been retired for more than 10 years, spend much idle time (some but not all of that time is needed as preparation for writing a piece like this), and get my income as a pension from the State of Illinois.  Bertrand Russell, in his 1932 essay In Praise of Idleness, argues that this arrangement could be applied broadly, not just to retirees, but to all adults in society.  It is not necessary that people "work for a living."  We produce enough so people could work much less, as long as the product of that work is shared broadly. 

Yet, I don't think we're ready for this just yet, even as automation and artificial intelligence make Russell's argument all the stronger today.  Instead, I want to note a paradox of work today, based on those I know at the university.  The workload is enormous.  People are fatigued because of it.  The pandemic and working from home has exacerbated this.  There is one meeting right after another with no downtime in which to gather one's wits.  How can it be that good jobs are scarce yet those who have them are overwhelmed by the workload?

The economics answer is that some embodied attribute, which you might call firm-specific human capital or firm-specific social capital, creates a very strong bias toward those who already have the good jobs and away from those who have good resumes but lack this firm-specific asset.  In other words, there is a kind of market failure where we don't grow those specific assets enough in the people we employ who are further down the food chain.  On the one hand, this is why those who are higher up are well paid.  On the other hand, if there were a less steep earnings profile, it might encourage us to address this scarcity problem, and thereby employ more people without overwhelming them with work.  

I think the answer for people without a college degree is different.  The decline of manufacturing did horrible things to working class people, as described in this piece by Angus Deaton.  Instead of manufacturing jobs returning - highly unlikely in my view - we need to create other blue collar jobs that pay decent wages.  Construction would seem the best alternative, and the Green New Deal as a latter-day WPA probably is where we are headed on this, though I'd like to note that old fashioned infrastructure is also in dramatic need of reinvestment.  Perhaps if the country had the will to do these activities for a period of 10-20 years, it would address the employment issue for working class people sufficiently during that time, after which the Bertrand Russell ideas could take hold. 

* * * * *

Writing a piece like this, I sometimes think of what I'm doing as trying to craft a proof of a theorem in math and then explain the argument to a non-math person.  This well-known New Yorker cartoon came to mind several times in writing this piece.  Is it simply an elaborate pipe dream?  Or might it actually be possible?  I have no way to answer these questions other than by getting those who've read the piece to express their views on the matter.  

So, as with many of my posts, I don't view this one as a blueprint or a call to action.  Instead, I hope it provokes some thinking in the reader.  It seems we're so locked into the events of the present that we do far less thinking about the future than we should.  And, as I've written elsewhere, our politics doesn't help with this.  Candidates introduce proposals but don't give sufficient background to consider them properly.  Understanding the background is important, especially if we can agree on it, even if we disagree on which proposal is best.  I want to conclude here by observing that I'm not arguing for the specifics I described.  Instead, I wanted to ask whether income redistribution at the sectoral level can happen without government interference.  And if it can happen, should it happen?  Those are issues to ponder.