Tuesday, March 21, 2017

When We Lost Our Mojo

In my course on The Economics of Organizations, I do a section on personal reputations.  Since we use student blogs to ready the class for discussion of the issues, it may be interesting to consider the prompt I gave students on this topic.

The topic is personal reputations and their role in influencing behavior. Describe some domain where you have a strong reputation with others (it could be with friends, it could be with your family, it could be at some place you worked, etc). Then discuss how your reputation developed. Consider what you do to keep your reputation intact or enhance it further. Finally, reflect on whether there are occasions where you'd like to stray from the behavior suggested by your reputation and what you do on those occasions. Have you ever "cashed it in" by which I mean you abandon your reputation altogether in favor of some immediate gain?

There is an economics theory of reputations based on play in a repeated Prisoner's Dilemma.  Without getting into technicalities, what the theory shows is that if players value the future enough (in the jargon, if the discount factor is sufficiently high) then they don't cash it in.  In contrast, a more pessimistic forecast leads to doing what is myopically optimal, disregarding the impact on the future.
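To make the theory concrete, here is a minimal sketch in Python.  The grim-trigger threshold formula is the standard one from the theory; the specific payoff numbers are my own hypothetical choices, not from any particular paper.

```python
# Repeated Prisoner's Dilemma sketch.  The payoff numbers are hypothetical:
# R = mutual cooperation, T = temptation to defect, P = mutual defection.
R, T, P = 3.0, 5.0, 1.0

def cooperation_pays(delta):
    """Under a grim-trigger strategy, cooperating forever beats defecting once
    and being punished forever iff R/(1-delta) >= T + delta*P/(1-delta)."""
    coop_value = R / (1 - delta)
    defect_value = T + delta * P / (1 - delta)
    return coop_value >= defect_value

# Solving R/(1-d) = T + d*P/(1-d) for d gives the threshold (T-R)/(T-P).
threshold = (T - R) / (T - P)
print(threshold)              # 0.5 with these payoffs
print(cooperation_pays(0.9))  # True: the future matters a lot, don't cash in
print(cooperation_pays(0.2))  # False: myopia wins, cash it in
```

The discount factor delta plays the role of optimism about the future: above the threshold, maintaining the reputation is worth more than any one-shot gain.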

I'm now going to switch from the language of economics to the language used in considering diffusion of innovation.  Early adopters of an innovation tend to be optimistic.  They embrace the new thing because of the possibilities it enables.  Those possibilities suffice for them.  High likelihood of success is not necessary.  Their enthusiasm combined with their experimental play with the innovation ends up being the driver of success, as much or more so than the innovation itself.  In this sense, adoption is itself a creative act rather than a mere flip of a switch.  Adoption entails fitting the innovation to purposes that adopters see, which were not apparent to the inventor of the innovation.

I am specifically thinking of teaching online in considering the above, but I think it applies in many other areas as well.  Examples include Muhammad Yunus' development of micro credit and Atul Gawande's championing of hand washing to prevent the spread of infection in hospitals.  In each case, apparently successful adoption encourages spreading the innovation further, till it is embraced broadly.

In contrast, later adopters of an innovation may do so for defensive reasons and/or for more immediate gain.  In the online learning arena, one finding from more than ten years ago is that many instructors used the learning management system in a dull way, primarily to share files, and completely eschewed the possibility of experimenting with their teaching, with the technology as an enabler of such experimentation.  In the business arena, one might consider subprime lending with essentially no underwriting standards (loans equal to 100% of the value of the house, offered to any borrower whatsoever regardless of ability to pay off the loan).  In retrospect, it is difficult to fathom how this could have occurred.

It is worth noting that Gawande's piece on hand washing appeared at roughly the same time that Countrywide was making all those inappropriate subprime loans.  Indeed, at any one time we are likely to see early adopters doing creative things that improve matters and later adopters of some other innovation creating harm via some cashing-in approach.  For example, the late 1980s to early 1990s were a period generally characterized by a sense of optimism, for a variety of reasons, one being that the PC revolution was well underway.  Yet during that time my parents, who were retired and then snowbirds living in Florida in the winter and in New York the rest of the time, were scammed by their financial advisor at Prudential-Bache.  Less than a decade later, there was the well known Enron debacle, much of which occurred during an even more optimistic period, now referred to as the dot-com bubble, with Enron's bankruptcy an emblem of the bursting of that bubble.

Optimism and pessimism vary over the business cycle - bulls charge while bears shy away.  But what we seem to have now is a very broad malaise, even while the stock market itself has been faring pretty well. 

So when did we lose our mojo?  I won't offer up a precise time, though I suspect in retrospect it will seem earlier than it did in prospect.  I will also try to do this both for me as an individual and for the country as a whole.

I switched careers, from doing economics full time to becoming an administrator in online learning, in that same optimistic period we associate with the dot-com bubble.  I was caught up in the enthusiasm, not for financial reasons, but for the potential that the technology seemed to unleash for learning.  But the job gradually changed, moving from supporting experimentation in a variety of ways to offering production services for ordinary faculty.  In spring/summer 1999, when I became the director of a new campus Center for Educational Technologies, there still was a soft-money organization, SCALE, that was giving us a good chunk of our funding and enabled the experimental approach.  We had a chance to extend SCALE's funding, with the Mellon Foundation replacing the Sloan Foundation as the outside funder.  But that didn't pan out.  Thereafter, our funding was mainly internal.  That had a consequence I didn't fully appreciate at the time, but it seems quite clear looking back now.

Six years later we were in the midst of the full campus rollout of our enterprise learning management system, branded on campus as Illinois Compass.  It was a disaster.  The service had to be taken offline for the better part of a week.  After that we threw money at the problem and did a variety of reorganizations to shore up the service.  Thereafter things were better for Illinois Compass, but I'm somebody who believes you can and should solve problems in prospect.  In this case I was unable to do so, and this was to be my big moment.  It was all very deflating.

A year later I had a horrific fall, severing the tendons connecting the quadriceps to the knee in my left leg.  This piece is from four weeks after the surgery.  During this time I was between jobs, moving from the campus job I had to the College of Business.  There was potential that the new work would revive my enthusiasm.  But I found some blocks to that in how the College was structured, blocks I hadn't anticipated when I applied for the position.  It's not that there weren't some successes and forward motion.  There were several of those.  But there was no home run, no fully online program.  The College has that now.  It wasn't ready for it then, though I only realized that in retrospect.

Let me switch to the country as a whole and draw some temporal parallels to my personal experience.  The aftermath of Hurricane Katrina happened at roughly the same time as the Illinois Compass debacle.  It seemed the Federal Government was incompetent and uncaring.  How the airlines responded to 9/11 gives a different look: the private sector could be just as incompetent and uncaring as the government.  This piece is about the period around when I had my leg accident.

The U.S. Commercial Air Transportation Analysis concluded in 2006 that despite advancements in technology, the overall customer flying experience was going down.  Cuts in food, customer service, capacity, onboard conditions were just some of the reasons given by the report, which came to this conclusion: "Virtually all travelers would likely say that travel through the aviation system today is less rewarding and more onerous than it was 5 years ago."

A year later there was the Surge in the Iraq War, a war that had been ongoing for five years.  People disagree about the efficacy of the Surge, but it is clear that the American public was wearing down from the ongoing involvement, with no obvious win to claim for the effort.  Surely that was demoralizing.  And this was still a year before the bursting of the housing bubble.

Let me close by moving earlier in time, to the Bush Tax Cuts.  These were initially passed in the aftermath of 9/11, with the Republicans in control of both the White House and Congress, as is the case now.  Piketty's much discussed book Capital in the Twenty-First Century has a 2014 copyright.  Looking at the Bush Tax Cuts from the perspective of Piketty, what were we thinking?  When we willfully make such bad choices we eventually have to pay the piper.

And we're still paying the piper now.

Monday, March 20, 2017

Sweet Little Sixteen

This post is a diversion, allowing me to dovetail two different "sweet sixteens," each of which is timely.  The first is the Chuck Berry classic song from 1958.  The tune and lyric are both so familiar.  Also, it is kind of amazing to see a young Johnny Carson with Dick Clark before the song starts.

The other Sweet Sixteen is in reference to the NCAA Men's basketball tournament.  Yesterday, they completed the round of 32.  I put the remaining teams in Excel, along with their seed and their conference.  Then I sorted the information, once by conference, then again by seed.  The results are below.  A bit of analysis follows.

[Table: the sixteen remaining teams, with seed and conference, sorted once by conference and again by seed]

After the round of 64 had concluded, there was some criticism of the seedings, particularly that Wisconsin and Wichita State had been under-seeded.  One factor the selection committee doesn't use is prior tournament experience.  For both of those teams that may have mattered, perhaps a lot.  Yet at this juncture the seeding looks pretty good.  Of the top sixteen seeded teams (those seeded 1, 2, 3, or 4), twelve remain.  All sixteen won their games in the round of 64.  The twelve that remain also won their games in the round of 32.  So they collectively went 28-4.  That seems like pretty good work to me.

Three teams seeded 1, two teams seeded 2, three teams seeded 3, and all four teams seeded 4 remain.  Only one team seeded worse than 8 remains: Xavier, an 11 seed.

Regarding conferences, I read some commentary yesterday about the ACC being overrated.  A couple of points should be made on that score.  First, the sample size is small, and odd things can happen with small samples.  Second, an injury that doesn't get reported in the press can matter, but we fans are unaware of it and so think the team we are rooting for is in a funk.  Third, some teams find a rhythm near the end of the season, so their level of play then is higher than it was earlier in the year.  It is hard to sort out the importance of the various effects - talent, chemistry, and experience.  This is what makes watching fun.  There is unpredictability in it.  See my post from several years ago, Small Samples, Hot Hands, and Flow.

Finally, you can look here to see how all the conferences fared since the round of 32 was completed.  With the exception of Gonzaga, all the remaining teams are from power conferences.  The results don't show that any one conference is better than the others.  But they do seem to indicate that schools from non-power conferences may be at a disadvantage, especially those that haven't previously broken into the upper echelon.  Middle Tennessee State put on a good showing as did Rhode Island.  They are the exceptions that prove the rule.

Wednesday, March 15, 2017

Blurting

There is a notion in economics called skill-biased technical change, where an innovation favors certain forms of labor (those who have the required skills) and is pernicious to other forms (those who lack the skills).  Economics itself may be biased in that innovations of this sort are viewed as productive overall (GDP goes up as a consequence of the innovation).  Not considered are innovations that are like addictive drugs, with wide uptake and deleterious consequences.  Of course, any such innovation will also have its productive uses.  So it is hard to have a critical discussion about such innovations, as the defenders will focus on the productive uses only, while the critics will limit attention to the deleterious consequences.

I can't claim neutrality on this.  Having lived my adulthood as an academic, I would find it desirable if the rest of the population embraced the mindset of academics to some degree.  I described this mindset at some length in a post from a couple of years ago called The Professor Mind.  I began with the definition and expanded on the theme from there.

Blurting, the word in my title, is antithetical to the professor mind.  Little kids are prone to do it.  One of the earliest lessons in school is that kids must raise their hands and wait for the teacher to call on them.  Doing that rather than simply shouting out whatever is on the kid's mind requires some discipline and self-restraint.  Blurting, in contrast, is immediate and instinctual.

In the language of Daniel Kahneman in his book Thinking, Fast and Slow, the professor mind is about thinking slow.  I like to say it is about producing a narrative.  That is an iterative process.  One tries a little bit of the story and sees if it makes sense and fits the situation.  When that fails, one has to go back and try something else.  Sometimes one doesn't see the failure until different bits of the story are put together.  It requires patience to develop a full and coherent narrative.  Part of the professor mind is developing that sense of patience.

Most of the research that Kahneman relies on in making his argument was gathered before smart phones, social networking, and micro blogging sites such as Twitter came into wide usage.  That people will think fast some of the time is human nature.  The issue is the balance between thinking fast and thinking slow.  Has the technology changed that balance toward the thinking fast side?  Has it reduced our impulse control?  Has it made us less critical in our thinking by not perceiving the need to think slow?

While the answers to these questions may seem obvious (yes to each one of them), I want to make a case that the technology is neutral in this regard.  To make that case, let me begin with this humorous story about Adlai Stevenson, the presidential candidate.

A supporter once called out, "Governor Stevenson, all thinking people are for you!" And Adlai Stevenson answered, "That's not enough. I need a majority." 

Some years later, then-President Nixon popularized the term "silent majority."  At the time, people's opinions were available only to friends and family, either via face-to-face discussion or by correspondence that was closed to the rest of the world.  The silent majority probably wasn't totally silent.  But outsiders couldn't learn their views in a direct manner.

The technology-is-neutral argument, then, is that most of what we are observing takes the form of "composition effects" produced by that majority who used to be invisible but no longer are.
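A toy calculation shows how a pure composition effect works.  All the numbers below are invented for illustration: even if nobody's views or habits coarsen, the observed mix of speech changes once the formerly silent group starts posting.

```python
# Toy "composition effect": two groups whose behavior never changes.
# Group sizes and "deliberate" shares are invented illustrative numbers.
groups = {
    "formerly_vocal":  {"size": 0.25, "deliberate_share": 0.8},
    "formerly_silent": {"size": 0.75, "deliberate_share": 0.4},
}

def observed_deliberate_share(visibility):
    """visibility maps group -> fraction of that group's speech we can see."""
    seen = {g: groups[g]["size"] * visibility[g] for g in groups}
    total = sum(seen.values())
    return sum(seen[g] * groups[g]["deliberate_share"] for g in groups) / total

# Before social media: only the vocal minority's views were observable.
before = observed_deliberate_share({"formerly_vocal": 1.0, "formerly_silent": 0.0})
# After: everyone is visible; nobody's behavior changed, only the mix we see.
after = observed_deliberate_share({"formerly_vocal": 1.0, "formerly_silent": 1.0})
print(round(before, 3))  # 0.8
print(round(after, 3))   # 0.5
```

Public discourse *looks* less deliberate, from 0.8 down to 0.5 in this made-up example, even though no individual changed how they think or talk.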

Let me say here that I don't buy this argument as the whole story, but I think it is worth advancing because it surely is some of the story.  However, there is a different part of the story where those with the professor mind nonetheless resort to blurting as a consequence of social media.  One prominent example, a case that roiled my campus, is the Steven Salaita story.

In his formal statement at the news conference, Salaita said his “deep dismay” at the number of people killed had fueled tweets he described as “passionate and unfiltered”.

It is not clear from the above whether Salaita would agree that he shouldn't have made those tweets.   One senses some remorse here, but just how far that goes I wouldn't want to judge.  However, I will say that 20 years earlier during the SCALE project we learned that some people would say things in an online forum that they never would say in a face to face setting.  The technology takes away a layer of inhibition.  One has to be schooled again, to add that layer back in and avoid making posts when in a highly emotional state.  Not everyone learns that lesson.  And those who do typically learn it the hard way.

But there is a different argument, of the cognitive rather than the emotional kind, that should get the most attention for the topic at hand.  This is about the always-on nature of living online and the endless multiprocessing.  Each thread can get only a little attention, nothing more.  Blurting works in this context.  Nothing else will.  Here the requirement for what works is that it produces a response.  There is no requirement that the response be thoughtful.  Metaphorically speaking, then, the technology makes us all zombies.  Multiprocessing is the drug that induces our impulses to overtake our judgment.

What then is a solution, even if it is only a partial solution?  I really don't know what will work for others.  I will content myself here to consider my own situation.

Back when I started this blog there was no Facebook, nor was there Twitter.  For nine months or more I was able to generate a post a day, on the order of 1500 words per post.  I did a lot of thinking slow then.  And I was quite busy with my campus job.  That blog writing was a respite from the remaining hours in the day, some of which were frustrating or inane.  Blog writing was something that I looked forward to doing.

I now use Facebook a lot, and I tweet a daily rhyme.  I've learned, on occasion, to use the Like button rather than write a lengthy response in a comment.  I still aim for substance in these outlets, but now brevity is much more of an imperative.  And, without a doubt, sometimes pith gives way to mere blurting.  Actually, often I can't tell the difference.

Yet I also cling to the longer form that this post exemplifies.  How much longer into the future I will do that, I don't know, but for now it seems necessary.  I do this entirely for me, not for the audience.  Indeed, I can't tell whether there is any audience for this stuff at this point.  For myself, however, I still feel it imperative to produce the narrative, and I wish others felt this obligation as well.

In longer form pieces one now regularly sees the practice where certain tweets are cited.  Maybe that's an error, giving positive reinforcement to blurting when it actually should be discouraged.

Is there anything we might do that would encourage more deliberate thought and its expression in writing?  Being made aware of some such practice that was effective would make me enormously happy.  Lacking that, maybe it's time to reread Fahrenheit 451.  

Saturday, March 11, 2017

Automation and Taxation

I am quite happy to have capital substitute for labor, especially when the labor is my own and the capital is inexpensive to purchase.  Examples abound.  I can't remember when I last did my taxes by hand rather than on the computer.  My dad did them by hand.  But that was 20 years ago.  Likewise, I can't remember the last time I contacted a travel agent to book a trip or the last time I hand wrote out a piece for some secretary to type. All of that seems like activities from a different era.

I am also not averse to using my (financial) capital to get somebody else to do the physical labor that I once did.  A few years ago we switched from me mowing the lawn to having a service do it.  Part of that is that they do a more professional job, having tools to aerate the soil, putting down seed when needed, and in general keeping up with the yard maintenance in addition to the mowing.  (We live in a neighborhood where having a good lawn matters.)  The other part is my arthritis and that the mowing sometimes got to be a burden rather than a form of relaxation.  For both reasons, this type of substitution makes sense to me.

On a personal level, this sort of capital-for-labor substitution should have two benefits.  One is to allow me to devote my (no longer so scarce) time to the uses I want to put it to, rather than putting a lot of time into things that must be done but which provide no personal satisfaction in the doing.  (I count doing the taxes in this category.)  The other is to provide a potential health benefit when the labor that is done away with had some health risk attached to it.  To give a mild example, consider using the dishwasher versus washing the dishes by hand.  Perhaps there is no health risk if you wear rubber gloves when you wash the dishes, but I don't like those gloves and tend not to use them.  I suffer from eczema, which the dish washing mildly exacerbates.  The more sorts of labor that are somewhat detrimental health-wise that can be avoided, the better.

I lead off this piece with my personal views of this trade-off, because so many people are talking about automation and the elimination of jobs, as if that is pure evil.  What is different for the economy as a whole than in my own situation in this regard?  Let's focus on two points of difference.

The first is that I never experienced technology making my own efforts obsolete - quite the contrary, actually.  In the mid 1990s the Internet enabled a new career for me in online learning.  The time was ripe for a change.  (I turned 40 in 1995, our younger son was born the year before, and the economic theory I had been doing until then started to seem less compelling to me intellectually.)  It would be quite different now, especially if I were an adjunct not on the tenure track, teaching high enrollment introductory courses that might be outsourced as MOOCs or taught in some other way.  I would have no job security whatsoever and might become alienated from the work.  The lack of job security and alienation from work are much more common occurrences now.  The second half of the 1990s is the last time I can recall a really tight labor market.  Real wages were rising then.  Opportunities seemed to abound.

The second is whether other people have things to devote their freed-up time to, things that they love to do but don't do enough of because they have to bring home the bacon first and foremost.  For me, writing is this type of thing.  It has never been a livelihood in itself for me.  But I have learned over the years that I need to have a mode of self-expression, a way to bring my internal reflections outside of myself, if for no other reason than that it enables me to move on to something else.  It hasn't always been this way for me.  Particularly before I started this blog, the job itself was the thing.  I found self-expression through work.  If people live for the job and the job is then taken away, they can lose their sense of purpose.

Let me note that while at the individual level each of these may seem like problems without a solution, at the societal level the first one definitely can be solved.  For example, many are now calling for a guaranteed minimum income.  That is one way to address the first issue.  I am going to leave the second issue unaddressed in this post.  I think I have a reasonable answer to it for people who, like me, were engaged in knowledge work.  I don't have an answer more broadly, so I will let others ponder the question and leave it to be taken up later.  It is now time to turn to the topic in the subject line of this post.

* * * * *

Yesterday or the day before, I became aware that Bill Gates has called for the income that robots generate to be taxed.  If robots replace workers, those workers previously earned income that was taxed.  After this capital-for-labor substitution, that labor income is lost, and with it the tax revenue that accompanied it.  Gates is calling for a tax on the machines to make up for that lost revenue.  I found this piece, which describes Gates' ideas on the matter.

“Right now if a human worker does you know, $50,000 worth of work in a factory, that income is taxed. If a robot comes in to do the same thing, you’d think we would tax the robot at a similar level.”
Bill Gates, in an interview with Quartz

I found this suggestion more than a little amusing.  Indeed, one of the reasons I went through my personal experience at the beginning of this piece is to consider non-factory work that has become obsolete as a consequence of technological improvement.  Stenotypists hardly exist anymore; word processing eliminated much of their function, and Microsoft obviously played a big role in that.  Is Gates proposing to pay back taxes on all those lost earnings due to widespread adoption of Microsoft Office?

I assume he is not.  So I want to work through the economics of this a little more, and then try to identify where I agree with Gates and where I think what he says requires some modification.

First, it should be noted that while until this point in the piece I've talked about capital as a substitute for labor, it might instead be that capital is a complement to labor, meaning the labor is more productive, in quality or quantity, after the capital has been installed.  A simple example is given by the online learning tools we now use in teaching, which we didn't have access to before the mid 1990s.  The effect of these tools is to expedite communication between students and their instructors, to allow instructors to become more aware of the work that students produce, because that work is readily accessible online, and to enable students to access much more content that might be relevant to them in learning the subject matter.  The Internet is a fantastic library.  These online tools have not done away with the need for good and devoted instructors (my earlier comment about MOOCs was about a brave new world that we really haven't reached and likely won't, particularly in regard to undergraduate education).  These tools have improved the experience for students and instructors alike.

As long as wages relate to productivity in a direct way, capital as a complement to labor should raise wage income.  In this case there is actually an increase in tax revenue from the installation of capital.  So Gates' point is not relevant here.
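A back-of-the-envelope comparison makes the substitutes/complements contrast for tax revenue explicit.  All the numbers below (firm size, wage, tax rate, wage raise) are invented for illustration.

```python
# Hypothetical firm: 100 workers at $50,000 each; wage income taxed at a flat 25%.
WAGE_TAX = 0.25
baseline_revenue = WAGE_TAX * (100 * 50_000)    # $1,250,000 in wage-tax revenue

# Substitution: machines replace 40 workers, so the wage-tax base shrinks.
substitute_revenue = WAGE_TAX * (60 * 50_000)   # $750,000

# Complementarity: same 100 workers, but new tools raise each wage by $7,500.
complement_revenue = WAGE_TAX * (100 * 57_500)  # $1,437,500

print(substitute_revenue - baseline_revenue)    # -500000.0 (Gates' worry)
print(complement_revenue - baseline_revenue)    # 187500.0 (revenue rises)
```

Only the substitution case produces the revenue hole that motivates a robot tax; the complement case raises wage-tax revenue on its own.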

Second, there is the issue of whether the IRS or state taxing authority can distinguish the substitutes case from the complements case.  If not, so that a general tax increase on capital would be put into place, let us note that this would create some bias against the use of capital when it is complementary to labor.

Third, one should recall the example of Steve Jobs, Bill Gates' friend and rival, who famously took $1 as his pay for being CEO of Apple.  He took the rest of his compensation as stock and stock options.  This was partly a marketing ploy.  But there was a real reason for doing it as well - tax avoidance.  The dividends and capital gains Jobs received were taxed at a lower rate.  Earned income rates, which determine what most of us pay in income taxes, are higher than unearned income rates.  Further, there is additional tax avoidance with capital income, as financial capital is highly mobile and can readily be moved to low-tax (or no-tax) jurisdictions.

Fourth, capital's share of GDP has been rising (so labor's share has been falling).  There are multiple possible explanations for this.  I prefer the market power explanation.  Labor's market power has fallen while management's market power has risen.

If you couple the third and fourth reasons, you get that tax revenue as a share of GDP has been declining and will continue to decline further in the future.  This is happening without any adjustment in tax rates.  This is where I agree with Gates.  It is a distinct issue, one that must be addressed.

I like to push arguments to logical extremes to see what that does to things.  Consider a world where there is no paid work at all.  Machines do it all.   Individuals do receive a guaranteed minimum income.  They also still consume government provided goods and services.  They drive on roads the government pays for.  They travel through airports that are likewise publicly provided.  Robots now do all the police work.  Other robots teach our children.  Water and sewage systems also are publicly provided and get regular capital upgrades.  And on and on.  All the goods and services provided by government must be paid for out of capital income.  In this world there has to be balance between tax revenue generated out of that capital income and government expenditure on the goods and services that people need and want.
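The arithmetic of that extreme case can be sketched directly.  The GDP shares below are invented round numbers, used only to show how the budget-balance condition pins down the tax rate on capital income.

```python
# Budget balance when all government spending must come out of capital income:
#   tax_rate * (capital's share of GDP) = (government spending's share of GDP)
# Both shares below are hypothetical illustrative numbers.
capital_share = 1.0     # machines earn all market income in this thought experiment
spending_share = 0.35   # assume government provides 35% of GDP

required_rate = spending_share / capital_share
print(required_rate)    # 0.35

# A less extreme economy: capital earns half of GDP but must still fund
# all of the government's 35% share on its own.
partial_rate = spending_share / 0.5
print(partial_rate)     # 0.7
```

The smaller capital's share of GDP relative to what government provides, the higher the average tax rate on capital income has to be, which is the balance the thought experiment demands.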

How do we get there from here?

Thursday, March 09, 2017

On learning to argue with people where we disagree - what's possible and what isn't.

I am reacting to this segment from the Charlie Rose show, which featured Frank Bruni of the New York Times and Jonathan Haidt of NYU.  It is quite interesting viewing, so I encourage you to watch it if you haven't already.  Nonetheless, I was not happy with some of the conclusions, so I puzzled over it for a while.  It occurred to me that some of what was being said echoed Allan Bloom's The Closing of the American Mind, a book from 30 years ago.  (I read it many years later.)  I found this more recent piece about Bloom, which was illuminating and shows he was much more complex than those in the Reagan fold (William Bennett comes to mind) who embraced what Bloom was saying.  The connection to Bloom shows these issues have been with us for quite a while.  In spite of that, they haven't come close to being resolved.

My core hypothesis is based on the following propositions:

(1) Most people don't know how to argue, including many who are members of the professoriate.  So there are not a lot of good models to emulate for the learner.  Further, it may be that acquisition of the skill is difficult and arduous, or if not that, then it is painful.  There needs to be some limiting factor that explains why the skill is not more widespread.

(2) Experts, those who have completely mastered a set of interrelated skills, are often not good teachers for novices, especially when the experts themselves were precocious learners in their formative days.  Good teachers have empathy for the learner who stumbles or finds himself blocked.  The good teacher offers helpful suggestions to get the learner back on the learning path.  The expert may not perceive the need for doing that or not know how to do that.   While it is said that you really learn a subject when you teach it, already knowing the subject doesn't mean you can teach it.

(3) Putting (1) and (2) together, experts at argument often are too normative in their approach and simply assume that learning will happen if the environment shows what the end goal looks like.  College campuses maintain they are for the free exchange of ideas.  That is the end goal.  So students should learn to argue in the college environment.  Metaphorically, this is like throwing the novice swimmer into the deep end of the pool.

(4) We need to work through a good pedagogic approach to learning to argue.  In that learning would proceed in stages.  Learning happens if the learner is open and thus somewhat vulnerable.  Having too harsh an environment doesn't promote learning at all.  It instead leads to self-protection, where people tend to close up.  In turn, that blocks further learning.

(5) What is referred to as argument comes in two forms.  One is argument that seeks the truth.  The other is argument where each participant's sole goal is for their prior view to prevail, that is, to win.  Idealists, and I consider myself idealistic in this dimension, much prefer the first form.  In that form the participants have some humility and embrace the fencing term touché.  In other words, they acknowledge when the other person has made a good point.  In the second form, that never happens.  One reason people shy away from argument is that they may want the first form but don't know whether the other person is of the same mind.  Nobody likes the hard sell on things they don't already want.

Now I want to comment on my own learning about argument and to use that to then cycle back to some points that Jonathan Haidt made that I found interesting.  For much of the time growing up I had friends who were not friends with each other.  So I had separate interactions on more than one dimension.  This started in elementary school - second grade.  My best friend, David, who lived across the street from us, went to a different elementary school.  (I described some of my relationship to David in this post called Slapball.)  This continued on into junior high but expanded some.  David had a friend, Jimmy M, who also became my friend.  But neither of them were in an SP class.  I was.  So we interacted before and after school but not during.  I developed a whole bunch of new friends from the class I was in. Among them, Steven G. was my best friend.

Through junior high school, friendships were pretty much based on play of one sort or another.  In high school, some of the friendships had an intellectual aspect.  I would have interesting and intensive discussions on a variety of topics.  I did this a lot with Lenny, Jimmy K, and Michael.  (They were each on the math team, as was I.)  They all knew each other, but my discussions with them were one-on-one.  Mostly it wasn't about politics.  But once in a while it was.  Michael was much more conservative than the rest of us.  At the time the fashion was for kids to have long hair.  Michael kept his closely cropped.  So he was different that way, but we got along fine because we did have other common elements.  I want to note that I remained friends with David and Jimmy M during this time.

There were some cliques in high school, which I think was a social thing rather than a political thing.  I'm not sure why others became part of a clique, but I did not, in the sense that I had different friends who didn't overlap.

In college, particularly my last two years at Cornell where I lived at 509 Wyckoff Road, I found myself in a near ideal situation for my own learning.  Social life and intellectual discussion blended.  The space was very safe.  Those of us who lived there were extremely open with each other, but guarded about bringing in people who did not.  Particularly interesting to me were Sue and Joyce, who shared a room on the third floor.  Sue was in the Hotel School, and I sense that in the rest of her existence she behaved quite differently, putting on a performance in those settings.  She really seemed to enjoy our group at Wyckoff, where the act wasn't needed and she could be herself.  Joyce was in food science and didn't lead this sort of dual existence, but I believe she also got something out of our interactions because she didn't get anything similar elsewhere.  There were also grad students on the floor.  Ed was in Physics.  Jane was in Human Ecology.

One of the big points I want to make here is that we never talked about what we were studying.  So we were all novices in the conversations, which contributed to them being freer and more open.  I would not describe us as a clique at all.  We were simply people brought together by a common living arrangement, and we learned to enjoy each other's company very much.  There were other people in the house, who didn't live on the third floor, whom we also got along with quite well.

My conclusion is that to the extent these experiences taught me how to argue, it happened under conditions of safety, in conversation among people I was already friendly with.  I did have disagreements with Ed now and then, but while they were challenging they were definitely not explosive.  In other words, there was some experimentation, but it was non-threatening experimentation.  I had a taste for that sort of thing.  I don't know whether my preference is universal, but I'm pretty sure a harsher environment wouldn't have been helpful for me.  It is for that reason that I think both Bruni and Haidt over-romanticize the past: kids are overprotected now; in our day it was much more rough and tumble.  The former claim might be right.  The latter claims too much.

Haidt made the point that there are now cliques in the humanities and that they are of an intellectual sort.  They champion the underdog (e.g., the Palestinians over Israel or Occupy over investment bankers).  This, by itself, might not be a liability.  But these cliques seem rigid and don't consider issues along other dimensions, which in turn leads to intolerance.  I have some friends who are humanists yet don't fit this mold.  But I don't know any undergraduate students in the humanities now, so I can't say whether Haidt is giving an accurate description or whether Middlebury and Illinois are sufficiently different campuses for the point not to apply (though Berkeley probably isn't that different from Illinois).

Haidt said these attitudes don't permeate through the entire campus.  Business students got a particular mention.  (Haidt is in the B-school at NYU.)  He said these students are much more practical in their orientation and thus much less ideological.  Likewise, students in STEM disciplines are not nearly as ideological.  One doesn't know, however, whether they are more capable of arguments where the people disagree or if they simply refrain from those topics in discussion because they know it would go badly.  It may be that very few students know how to argue, but some are willing to voice strident opinions, in a take it or leave it manner, while others are not.

I would also like to point out that self-righteousness is not the sole province of the humanities, far from it.  One recent example is that some Bernie supporters never warmed up at all to Hillary.  They could only see the bad in her - she had sold out completely to the monied interests.  Writing this paragraph I find myself challenged.  On the one hand, I needed some example to support the point in the first sentence.  On the other hand, I don't want to argue the example.  Indeed, in today's rhyme (I post a rhyme most days to Twitter, one of my own creation) the theme was why engage in a conversation that likely will end at loggerheads, with everyone angry.  Who needs that?  Most people shy away from such topics.  I'm no different. 

Let me make one more point and close.  Elsewhere I've written that my core value is collegiality.  Collegiality enables argument where people disagree.  Further, in a collegial atmosphere it is less likely that somebody will take offense at what might be considered a minor transgression.  Sometimes we talk about where to draw the line.  Better, I think, would be to ask whether the transgression can be walked back, with a reasonable expectation that it was an isolated incident.  If so, it can be tolerated.  In a non-collegial environment, hostility is perceived as a permanent condition.  Angry outbursts can happen then.  But argument cannot.

Tuesday, March 07, 2017

Socialism Reconsidered - Part 3

This is the third in an occasional series.  You can find the previous posts by clicking the label at the bottom of the post.

Let's begin with a bit of Americana - No Taxation Without Representation.  Every school kid learns this was the cause of the American Revolution, even if it turns out to be more myth than fact.  (In college I took a course where we read Alan Heimert's book Religion and the American Mind.  I confess that as a non-Christian much of it was over my head.  But the core hypothesis stuck with me - that for ordinary people in the colonies who were part of the militia, not the wealthy where taxation did matter, the Revolution was really about being against the Church of England.)  I want to invert this phrase and pose a question.  If you don't pay taxes are you still entitled to representation?

There are many possible reasons why somebody doesn't pay taxes.  When my mother was in her later years, her home health care was so costly that it more than offset the earnings she had from Social Security and elsewhere.  Indeed, eventually the estate was reduced to nil, having been expended on the costs of long-term care.  That's one sort of situation.  Another is if you run a business that has large capital losses.  Our current President seems to have exploited this.  But the main one, and the one I want to focus on here, is that you are poor - your income is too low to warrant paying taxes.  So for this third category the question above amounts to - are poor people entitled to representation?

To me it is a question with a simple answer.  America is a democracy.  Each citizen has inalienable rights.  Representation is part of that.  So the answer is yes.

But that is theory, not practice.  Let's consider voting.  The following table gives U.S. aggregate voting in Presidential elections, along with the potential population that could have voted, so a voter participation rate can be computed.  The data are found here.   I put them into Excel so that the relevant portion of the table can be represented along with the column headers. 



The highest participation rate was in 1960, not quite 63%.  While there are a lot of people who do vote, there are many who don't.  For the sake of comparison and to benchmark this rate of voting, I did a search on Canadian voter turnout.  The Canadian voter participation rates tend to be higher than the U.S. rates, with the top rate in 1958, around 79%.
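The participation rate itself is just votes cast divided by the eligible (voting-age) population.  Here is a minimal Python sketch of the computation; the 1960 figures below are rounded approximations I supply for illustration (the exact values are in the table's source):

```python
def participation_rate(votes_cast, eligible_population):
    """Voter participation rate, expressed as a percentage."""
    return 100.0 * votes_cast / eligible_population

# Approximate 1960 figures (illustrative assumptions, not exact values):
# roughly 68.8 million votes cast, roughly 109.7 million of voting age.
rate_1960 = participation_rate(68.8e6, 109.7e6)
print(f"{rate_1960:.1f}%")  # a bit under 63%, consistent with the table
```

Nothing deep here, but it makes plain that the rate depends as much on the size of the eligible population as on the votes cast.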

You might ask whether higher voter participation rates mean the democracy functions better.  I believe this to be the case, but that is just one person's opinion.  It is not trivial to find out how others view the issue.  In any given election, where a person has a preferred candidate and would like to see that candidate win, there is a tactical preference for low voter participation on the other side.  The question, then, is whether you can elicit a preference about how well the system functions that entirely abstracts from the tactical preference.  I'm not sure you can but I think it would be interesting to investigate.

Let me switch to a more normative view.  In the first essay I talked about taking a Rawlsian Veil of Ignorance view of social justice.  Such a view would deem 100% voter participation the ideal.  But for this to hold, one must simultaneously take a positive view of how elected officials act.  That is, they represent the people who voted for them and tend to disregard the interests of other people.  This is not meant to be a controversial point.  It is just meant to say that if those who are elected really did represent everyone equally, then high participation rates wouldn't matter.  When elected officials represent only their own constituencies, non-voters are underrepresented.

The Web page where I obtained the table above has additional information on factors that influenced voter participation in 2008, when Barack Obama was first elected President.  Age matters as does race.  But for this discussion the factor I want to concentrate on is income.  Poor people have lower voter participation rates.   How would our politics change if voter participation rates didn't vary with income and/or we had near universal voter participation?   Benefits doled out by the government would go increasingly to the needy.   The costs of providing those benefits would be borne by those who can afford to do so (even if they don't want to do so).  In other words, there would be more income redistribution than there currently is.

As I am writing this, the lead article in the NY Times is about the Republican plan to replace the Affordable Care Act, i.e., Obamacare.  Undoubtedly, there is more of a Robin Hood approach in Obamacare than the Republican plan, though this piece from a week ago characterizes Obamacare as to the right of the traditional Democratic approach to universal health care.  I mention this only because it gives an example of a government policy that has an income redistribution consequence.  You might call this a direct way the government impacts the income distribution.

There are indirect ways as well.  An interesting lecture by the Nobel Prize winning economist, Joseph Stiglitz, called Why Capitalism is Failing illustrates the point nicely.  (The discussion starts around the 21:00 mark.)  The market largely determines people's incomes.  But the way the market does this is influenced tremendously by the rules of the game and how the government shapes the economic environment.  Think about antitrust legislation and how vigorously it is enforced, or about how the law treats unions and workers who don't belong to unions, how intellectual property ownership gets conferred, what public good infrastructure is provided, and many other government policy decisions.  These indirect effects on the income distribution swamp the direct effects.  This, of course, explains why there is so much lobbying to gain political favor.  The rules of the game matter and the lobbyists know it.

Nonetheless, to the extent that voters have interests that are opposed to the interests of lobbyists, when voters are made aware of an issue their interests serve as a restraint on how much politicians can be swayed by lobbyists.  There has been much discussion recently that we have become a plutocracy.  Low voter participation rates facilitate that.  One can understand why the plutocrats themselves would like to keep it that way.  The rest of us, however, should have an interest in higher voter participation rates, especially since, as Stiglitz argues, there is inefficiency with extreme income inequality so the pie would be bigger if some of that could be eliminated.

The Democratic Platform for 2016 is interesting to consider this way.  It begins with a section on economic security for the middle class.  It says nothing about improving the livelihoods of poor people.  There is a later section on voting rights and campaign finance, but there is no mention of voter participation rates.  Thus, no connection is made between the economic policy ideas and the voting (or non-voting) behavior of the Democratic constituency.

My guess is that if poor people did vote at the same rate as the rest of the population, then they'd mainly vote for Democratic candidates.  I don't know this for a fact, but I think it a reasonable working assumption.  Given that, what we have now is a kind of low level equilibrium, where the politicians don't expect such people to vote so don't offer policies and programs that would favor them and where such potential voters choose not to vote because it seems that the system doesn't care about them.  I am calling this a low level equilibrium to suggest there is an alternative, high level equilibrium, one with much higher voter participation and much more income redistribution as policy.  If you take the Rawlsian ideal as the goal, you'd like to see a shift from the low level equilibrium to the high level equilibrium.  I will conclude by speculating how that might come about.
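The two-equilibria idea can be made concrete with a toy game.  The payoff numbers below are entirely invented for illustration; all that matters is that (no offer, abstain) and (offer redistribution, vote) are both self-reinforcing outcomes, with the second better for both sides.  A Python sketch that enumerates the pure-strategy Nash equilibria:

```python
# A 2x2 "voter participation" game between a party and a poor voter.
# Payoff numbers are made up purely to illustrate the two-equilibria point.
party_actions = ["offer_redistribution", "no_offer"]
voter_actions = ["vote", "abstain"]

# payoffs[(party_action, voter_action)] = (party_payoff, voter_payoff)
payoffs = {
    ("offer_redistribution", "vote"):    (3, 2),   # high level equilibrium
    ("offer_redistribution", "abstain"): (0, 0),   # wasted political capital
    ("no_offer", "vote"):                (1, -1),  # voting is costly, no payoff
    ("no_offer", "abstain"):             (1, 0),   # low level equilibrium
}

def is_nash(p, v):
    """True if neither player gains by unilaterally deviating from (p, v)."""
    pp, vp = payoffs[(p, v)]
    best_p = all(payoffs[(q, v)][0] <= pp for q in party_actions)
    best_v = all(payoffs[(p, w)][1] <= vp for w in voter_actions)
    return best_p and best_v

equilibria = [(p, v) for p in party_actions for v in voter_actions if is_nash(p, v)]
print(equilibria)
# [('offer_redistribution', 'vote'), ('no_offer', 'abstain')]
```

Both outcomes are stable once reached, which is exactly why I call the current situation an equilibrium rather than a mistake: nobody has a unilateral incentive to move, even though everyone would prefer the other equilibrium.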

Before getting to that, I want to stress that I'm not presenting these ideas from a partisan perspective.  If there were already universal voter participation, my own sensibilities might be those of an Eisenhower Republican.  When I talk about Socialism in my title, I am not talking about government production of goods and services.  I am simply talking about the Stiglitz point that the government should shape the economic environment to advance the Rawlsian ideal and then let the market take it from there.  But this is clearly not where the Republican party is now.  Further, Republicans are the winners in the low level equilibrium.  That seems obvious.  So they lack the incentive to change things this way.  In contrast, the Democrats do have the incentive to see things change.  It is the only way they might become the majority party.

So the first step is to get some changes in the thinking of the Democratic leadership.  Voter participation is key.  They must increase it as part of their long-term strategy.  Voting rights can still be part of the approach, but they are not the whole story.  There have to be policy pieces that appeal to those who currently are not voting.  If there are pocketbook issues that matter, people will be more inclined to vote.  But there are risks in this, particularly that upscale voters might be alienated.  Since they are the ones contributing the bulk of the cash to the campaigns, there might be reluctance to move away from the historical approach as represented by the 2016 platform.  That reluctance must be overcome.  However, there should also be some self-protection on this point, which might be had by bringing upscale voters into the strategy discussion so they can understand the issue.

The second step is to really hammer on the theme of social responsibility.  Even if the real game is to get the top 0.1% of the income distribution to pay more in taxes, a lot more really, that simply can't work in the low level equilibrium.  It can work if the top 20% are paying more in taxes, so that a large number of people are actually doing so.  Getting a good chunk of those people to stay in the Democratic fold requires moving away from narrowly voting one's pocketbook and toward a broader vision of what the social good looks like.  This will require lots of education and lots of marketing.  And it must survive many election cycles, not just one.

The third step is to work through a set of policies that aim to increase labor force participation while also providing an adequate social safety net for those who are out of the labor force.  But the logic of what is wanted here requires a different rationale than the historic one, especially if artificial intelligence will soon make much paid work obsolete, as some have argued and as I mentioned in the first post in this series.  Here the goal, as much as possible, is to create a set of shared work experiences.  The idea of income redistribution is challenged by the thought that some people are undeserving of receiving benefits.  In the class I teach I have my students read this piece called How to Get the Rich to Share the Marbles.  In other words, income redistribution policies shouldn't just address the needs of the recipients, though that is necessary.  They must also address the psychological needs of the donors.  When that happens, government is more of a conduit than a coercive force.  That's the way it needs to be perceived.

The last step is for the language of the previous three steps to embrace state politics as well as national politics.  This post was motivated by a strong negative reaction to a Scott Walker Op-Ed in the NY Times.  Walker makes reference to taxpayers and voters, but not to citizens.  He argues that States know how to do things but that the Federal Government does not.  Wisconsin has recently become known for voter suppression tactics.  Black citizens, especially in Milwaukee, are underserved and underrepresented.  Reading Walker's piece, I felt like I was returning to the early 1960s, before the Voting Rights Act was passed.  There needs to be a strong counter narrative.  Voter participation must rise in Wisconsin, as it must nationally.

Of course, wishing doesn't make it so.  But wishing is a way to advance the conversation.  That's what I'm trying to do here.

Thursday, March 02, 2017

Heroic Assumptions and Incompetence

It is not going over the cliff that's the problem.  It's looking down after that.
The Lesson of Wile E. Coyote

This piece was motivated not so much by current events as by rereading something I wrote almost 10 years ago in a piece called Facsimiles, which was mainly my after-the-fact reaction to being a faculty member at the Educause Learning Technology Leadership Institute.  But, as is my style, I meandered into my subject matter by touching on some other themes first.  This is the paragraph that triggered the piece I'm writing now.

We are in the midst of the post competence era. As I’m sure you’ll agree, the Bush White House can rightly claim naming rights for this new epoch that we find ourselves in. And thanks to the likes of Maureen Dowd and Frank Rich, for making it clear that the Bushies make the same mistakes over and over again, due to a combination of extreme hubris and an abiding cronyism. 

I should note that 10 years ago was still before the housing bubble burst.  So this was a reflection not based on the economy.  The primary factors were the Iraq War and Hurricane Katrina.  Following that paragraph I went on to describe a different example, outside the world of politics.  The Ford Motor Company had made a botch of things, acquiring a variety of more exotic car companies (Land Rover was one of them) based on overly optimistic sales forecasts that didn't materialize.  While the particulars are not part of the current conversation, the general approach seems like a perfect foreshadowing of the present.

In my fall course on The Economics of Organizations, somewhere in the middle of the semester we do a session on conflict (how to prevent it) based on Bolman and Deal's Chapter 8.  I have a slide that gives a one-word approach to leadership.  In honor of my mom, the word is in French.  The word is - écoutez - which means listen.  I have the students say the French word aloud in class as a group.  Then they repeat the exercise.  I'm not sure whether the message gets through to them.  But I am pretty sure that in one brief session like that nothing else will.

Listening is actually quite difficult to do.  It requires patience and humility.  The listener might not like what he is hearing.  The listener might not understand what he is hearing and so ends up garbling the message.  And the listener might not be willing to devote the time necessary to understand the full message.  I will add a further issue that I experienced as an attendee at the Frye Leadership Institute, where there were 50 of us, all very bright and able.  The pace of discussion was faster than I would have preferred.  I found myself wanting to linger on a point made 10 minutes earlier, but to do so I would lose the current thread.  That seemed to happen repeatedly.  Listening in that setting made it a challenge to strike a good balance between breadth and depth.

Near the tail end of my career as an administrator on campus, I found it harder to listen both on campus committees where I did not embrace what the committee was ultimately going to choose to do, and with my own staff, where it was far too easy to monopolize the conversation.  So I believe I have some experience with the pitfalls.  It is one reason why I prefer one-on-one conversation with peers, where there is more give and take.  I enjoy those discussions.  They seem all too rare now.

Many people don't listen.  Such people opt for the heroic assumption as an alternative.  It provides a quicker path to making a choice and doesn't challenge the person's own held beliefs, at least not until after the fact, when the evidence that the assumption was erroneous can no longer be ignored.  One wonders if there is learning by doing, so that people who make heroic assumptions get their just deserts, after which they are reformed.

My conjecture on this is that if such an experience happens early enough in life then there is indeed the necessary sort of learning.  The school of hard knocks has some excellent teachers.  But people may be sheltered from the bad consequences early on in their lives.  Making the heroic assumption then becomes a habit, one that increasingly hardens over time.  How else does one explain the paragraph I excerpted from my Facsimiles post?  It certainly seems to give an example in the old-dog-and-new-tricks category.

But I also expect there are cultural factors at work.  In other words, if everyone else in your cohort of friends seems to be making heroic assumptions, you will too.  Economists are not immune from the problem - witness the old joke with the punchline, assume a can opener.  Also, particularly in macroeconomics, where controlled experiments are simply not possible, schools of thought may provide the heroic assumptions, which may become so entrenched in the economist's worldview that he may not realize he is making any assumption at all.  That's the problem with self-evident truths that eventually prove not to be true.

An old movie we should watch every so often is The Ox-Bow Incident, which demonstrates the nature of the people who make heroic assumptions and how it comes about that people take the law into their own hands.  Ultimately, this leads to tragedy.  There may be learning after that, but the realization may also be too much to bear, which is what happens near the end of the movie.

Competence requires working backward inductively from the bad outcome to trace out the probable causes.  Where those are human decisions, the competent leader avoids making those choices, opting instead for something different.  I have the sense that competence is getting increasingly scarce.  It certainly seems that way with our politics.  I wonder if it is true in the business world as well.  Bernie Madoff may well have been a canary in a coal mine.

It is unsettling to think this way.  I would prefer a more optimistic view of the future, if that could be had without making my own set of heroic assumptions.  Right now, however, I just don't see it.