Thursday, June 21, 2018

Who needs research anyway?

It's easy enough to consider research to find a cancer cure, or research to get us driving/flying Jetsons-like craft powered entirely by the sun, and see the utilitarian benefit from that research, even if delivery of a functional product remains far into the future. It's much harder, however, to make the case for the economic theory type of research I used to do, as exactly what application it might help to create is mostly unclear, while for a few pieces I produced it is quite clear that no application will follow whatsoever.  And I suspect that if you were able to count produced research across all fields at the university, as well as count all faculty with tenure or on the tenure track, you'd find quite a lot of research like mine, done by faculty who specialize in work that is abstract and ethereal rather than concrete and low to the ground. Who needs that?

I'm motivated to pose this question by seeing yet another piece in the Chronicle about the threat to tenure. I get frustrated when reading such pieces, partially because of the threat itself, which is real enough, but mainly because the defense seems too weak to me.  Those inside the academy take the need for research as a self-evident truth, and mainly they talk with one another.  What about external audiences?  If they are much harder to convince, what then might do the trick?  This is still too optimistic a way to pose the question, for perhaps there is nothing that will do the trick.  What then?

And here is the thing.  At the administrative level, we in academe may not really believe in tenure to support research, at least the sort of research that isn't also supported by external grants. This other type of research, when done at a public university, is supported by other funding, mainly tuition and state tax revenue, nowadays with much more of the former than the latter.  Before looking at the data on this point for my university, let's note that there are a few reasons why research is expensive when viewed as a tuition-supported activity.  The first is the claim on faculty time, i.e., teaching loads.   Research faculty are required to report their time allocations as percentages of the total.  I would always report 50% research, 40% teaching, and 10% service.  I'm going to assume that is typical.

At the time I made such reports the teaching load in the Econ department was two courses per semester, so those two courses supposedly took up 40% of my working time. The way the university counts, the load should be measured in IUs (instructional units) produced.  IUs are measured as the product of the number of students and the credit hours for a course.  When I taught intermediate micro, typical class size was around 60 and it was a 3-credit course, so my section would generate around 180 IUs.  Graduate courses in Econ were typically 4 credit hours, but with many fewer students.  (Except for the core courses, which I did teach for a while.  Those were larger because students from other departments, notably Ag Econ, took them.)  The expectation of a faculty member with tenure or on the tenure track was to teach one undergrad course as service and one grad class per semester.  And the faculty preference was to teach grad classes, not because of the fewer IUs, but because the subject matter was closer to the faculty member's research.   Adjunct faculty typically teach only undergraduate classes.  And they generate much higher IUs, either by teaching more classes or by teaching much larger classes.
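To make the arithmetic concrete, here is a minimal sketch of the IU calculation in Python. The 60-student, 3-credit intermediate micro section comes from the paragraph above; the graduate enrollment of 15 and the adjunct load of three 80-student sections are hypothetical numbers, plugged in only to illustrate how a tenure-track load compares with an adjunct load.

```python
def instructional_units(students, credit_hours):
    """IUs for one course = enrollment times credit hours."""
    return students * credit_hours

# Tenure-track load: one undergrad section plus one grad course per semester.
tenure_track_ius = (
    instructional_units(60, 3)    # intermediate micro section: 180 IUs (from the text)
    + instructional_units(15, 4)  # grad field course: 60 IUs (hypothetical enrollment)
)

# Adjunct load: undergraduate sections only, more of them and/or larger.
adjunct_ius = 3 * instructional_units(80, 3)  # hypothetical: three large sections, 720 IUs

print(f"Tenure-track load: {tenure_track_ius} IUs per semester")
print(f"Adjunct load:      {adjunct_ius} IUs per semester")
```

Under these illustrative numbers the adjunct generates roughly three times the IUs of the tenure-track faculty member, which is the point of the comparison.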

The other factor that makes research faculty expensive is their salary plus non-wage compensation to support research, such as travel money and the ability to hire a research assistant.  I should note that, whether on the tenure track or not, the instructor needs an office, and office space is scarce.  So allocating that might enter into the cost calculation.  But the reality is that departments don't rent the space they occupy.  They are assigned space that the university owns.  One might impute a rental price from that allocation.  But it is not the custom to do that, so I won't do it here. Likewise, the benefits that employees receive, nowadays mainly health insurance, are borne by the university but not costed at the departmental level.  (Illinois has an issue that most other public universities do not, as contribution to the retirement system is mandatory and the system itself is in arrears.)  With these caveats we can focus on just the salary number.  Tenure track faculty are paid more than adjuncts.  That reality is undeniable and contributes to making teaching with tenure track faculty a more expensive way to go.

Now let's look at some data, which are taken from the Campus Profile.   What is shown below are percentages of teaching by research faculty, grad assistants, and specialized (non-tenure track) faculty.  This is done with three different looks: overall, 100-level courses only, and all undergraduate courses.  I added the line space between each look to enhance readability.  Also shown are total undergraduate IUs, which have been trending upward.

[Table: percentages of IUs taught by tenure-track faculty, graduate assistants, and specialized (non-tenure track) faculty, shown overall, for 100-level courses only, and for all undergraduate courses, along with total undergraduate IUs.]
This is an interesting table to examine and I encourage the reader to spend some time on it.  I don't know that Illinois typifies public research universities in these numbers, but I expect the picture is similar elsewhere.  The reason for the increase in undergrad IUs is the admission of more out-of-state and especially international students, who pay much higher tuition.  And the reason to do that is that their tuition revenue serves as a substitute for state tax dollars, which have been declining. At the same time that these high-tuition IUs have been increasing, the fraction of undergraduate teaching done by specialized faculty has also been increasing.  Perhaps surprisingly, the percentage of teaching done by grad assistants has been declining. The percentage of teaching done by tenure track faculty has also declined, though only modestly, yet sufficiently so that a plurality of undergraduate instruction is now done by specialized faculty.

Can these trends continue or will something fracture if they do?  Who knows?  At root here is whether undergraduate instruction needs faculty who are research oriented to make the instruction high quality.  Really, the issue is how this is perceived by the students and their families at the time when they are applying for admission.  But one might also consider the question of whether the same course would be taught differently depending on whether it is taught by a tenure track faculty member or an adjunct, regardless of the perception by potential students.  And then one might ask, in addition, whether, when such a difference exists, it matters for the student post graduation, via the type of human capital the student acquired while in college.

I don't believe we know the answer to this, but I want to note my own view about how my course might matter to the former students who took my Economics of Organizations class.  I'd expect it not to matter much at all for students in entry-level positions.  It might matter some for those who attain managerial positions, though this would be more for mindset, broadly considered, than for specific knowledge.  This, too, is how I believe critical thinking skills impact people in the world of work.  In other words, there is a long fuse before the education really pays off.  And couple with that the preoccupation students have with boosting their GPA, which makes them myopic and focused on that very first job out of college. So they themselves might not see the long-term benefit of their education this way, which might explain why they may not care whether the instructor is an adjunct or not.

* * * * *

Published research is a public good, at least to the extent that everyone has access to it.  So we should consider the public good benefit from the research, even if the public are free riders and don't fund the research directly or indirectly.  The previous section was meant to get at the benefit to those who are doing the funding.  Here I want to take up the benefit to the free riders.  Is there really such a benefit or not?  I will stick to economics in considering an answer to this question, though the question should be posed for any discipline that does liberal arts education and is not vocational in its orientation.  Among such disciplines, economics may be closer to the vocational end of the spectrum than other disciplines (comparative literature, for example), at least as perceived by the students I considered in the previous section.  I want to observe here, in contrast, that the research I mentioned at the outset of this piece has no vocational value whatsoever.  In this regard, the main difference between economic theory and research in other fields within the liberal arts is the reliance on math models as a necessary part of the discourse.  In this sense, economic theory research is about developing interesting (at least to other economists) math models and drawing the implications from those.

If the set of beneficiaries of the research includes only other faculty members at other universities and/or graduate students who are aiming to become faculty members in the future, one might view the research as self-serving and not necessary at all.  So the first step here is to identify potential beneficiaries outside of higher education.  Some economics research might benefit public policy, or how organizations (trade associations, labor unions) go about their business, or how individual companies do likewise.  It might also benefit how news organizations report economic phenomena and thus how the general population comes to understand those phenomena.  And it might filter down into the teaching of economics in high school, which, arguably, should be about preparing citizens to understand the economic phenomena they learn about from their news sources.

Thus, it might be tempting to separate the sort of research with evident potential for external benefit from other research that is simply too abstruse to produce such a benefit.  The former might then be deemed necessary, but the latter surely is not.  Is that right?

I'm going to try to counter this naive view in a couple of ways.  One is to note the process by which research is produced.  An individual author or a set of co-authors works through an idea and produces a draft of a working paper.  This is an intermediate product.  The authors then will want to present their paper in a departmental seminar and perhaps externally. The conversation the authors have with the audience, both during the live presentation and also in individual office visits, serves as an additional input for producing a revision of that first draft.  In a very real sense, all research is a community product, even if those lending comments don't get acknowledgment for their input.  (Comments offered during an office visit might be acknowledged in a footnote.  Those offered during the live workshop session are typically not acknowledged that way.)  The process can sometimes be adversarial or even combative, though my experience is that an underlying collegiality is essential to make it work.  The important point is that research as a community affair may not be well understood by those outside academia.

If it were understood, then we'd want to know what makes somebody an effective member of this research community.  My answer is that, at a minimum, the community member must be doing research.  And my experience is that those who do applied research benefit from having conversations with theorists, at least those theorists who can translate the theoretical issues in a way appreciated by the applied scholars. This gives the potential for theoretical research, which provides no direct external benefit, to nonetheless produce a substantial indirect external benefit, by giving credibility to the theorist, who then interacts with the applied researcher on an equal footing and helps to make the research better.

The other point is to note that the distribution of research output across research faculty is heavily skewed, with the superstars producing the bulk of the research, but even with that there is some difficulty in measuring research output.  One measure is lines on the CV, perhaps weighted by the quality of the journal where the paper appeared.  Another measure is of impact, perhaps by counting citations for the paper.  But impact can happen in other ways, for example, by being part of the readings assigned in the core curriculum or in the appropriate field course. The superstar who writes the seminal paper in the field needs other scholars to extend the work and draw out the full implications.  And sometimes important papers have errors in them, which go unnoticed for some time before they are corrected.   I will illustrate with some of my own work.   The following is from a post I wrote as a eulogy for Leon Moses,  prior to participating in a workshop held at Northwestern to honor his memory.

The other paper marked my switch from the dissertation research to the study of oligopoly models, where I had more success.  One of these was a paper on dynamic duopoly with inventory.  I'm rather proud of this paper, as it made a fundamental point that wasn't in the rest of the literature.  Heretofore, people who worked in this area assumed that asymmetric outcomes (one firm is the market leader, the other is a follower) were a consequence of some asymmetry in the initial conditions (the leader got there first and leveraged that for strategic advantage).  My examples showed that you could have symmetric starting conditions but still get asymmetric outcomes and, indeed, that is to be expected in these sorts of models. Google Scholar says the paper got 32 citations, which is second highest among the pieces I've written.  I would never have thought to write on this topic at all if it had not been for the earlier work with Leon.

Some years later Thomas Sargent, who would subsequently win the Nobel Prize, gave a talk in the department on a macroeconomics model he had written with a co-author. (I can't remember who the co-author was now.)  My research was not in macro, but the model Sargent showed us was linear-quadratic and it shared some of the same properties as the model in my duopoly paper.  So after a fashion I asked, do you know if your equilibrium is unique?  (He had assumed that it was.)  In posing that question I was pretty sure it was not.  He became noticeably flummoxed by that question.  It simply hadn't occurred to him to ask it himself.  I had hit a nerve by posing the question.  Even the Nobel Prize winner needs assistance from other economists now and then to produce research that is good and correct.  This is an argument for having some larger tail of non-superstar scholars who nonetheless are competent researchers active in the profession.  Yet I would readily agree that how large that tail should be is not easy to determine by criteria we would all deem sensible.

* * * * *

I tried to be careful in the previous section by talking about the potential public good benefit from research.  Here I want to ask, if the potential is there, is it actualized?  As my previous post led off with a point from Peter Drucker - the active party in any communication is the receiver/listener - I will confine myself to considering the issue of actualization from that perspective.  Is the message of the research well received or is it ignored?  If it is ignored, then the potential beneficiaries clearly don't need the research.  And if the ignoring is willful rather than inadvertent, one might then presume that such potential beneficiaries actually view the research as a waste of time.

Alas, this brings politics into the question.  Those who believe that shrinking government is the right answer will view research that demonstrates effective government programs as pernicious.  Is it possible, then, for academics to make a convincing argument to those people about the benefit of research so they'll change their minds?

There is a second issue here, which sometimes goes under the label of corporatizing the university.  The audience already knows what it wants and has the means to get its wishes, so it pays for research to deliver just that.  The Koch brothers are notorious for supporting research centers that produce libertarian anti-government policy proposals.

Evidently there is some of both of this going on.  Given that, what of arguments to protect tenure, and thereby enable research by scholars who are independent and not beholden to an external audience? Can those arguments possibly work?

My sense of things is that they can't work as long as Republicans control most of the state governments, with the attack on tenure as part and parcel of that control.  The arguments might work as part of setting the stage for future Republican defeat and a return to control by Democrats, but I'm guessing that this issue is pretty far removed from most voters who vote mainly for Democratic candidates.  The student indebtedness issue has gotten most of the bandwidth in the press, when considering higher education.   Teaching with adjuncts is a way to contain the cost of instruction.  In this world view, research seems like a luxury we can't afford.  Are we ready to come to grips with that perception?

* * * * *

If there is actually a substantial public good benefit to the university, in both its teaching mission and in the research it does, the latter as described above and the former as discussed in this post, then the university should largely be funded out of tax revenues, as that is the correct way to fund public goods.   It occurs to me that this might still be possible - if we federalize the system.  I believe we've outgrown the current approach and we'll be seeing public research universities continue to decline in the future as the current approach persists.  One reason for this is that the current approach is poor on cost containment, particularly on the matter of salaries and other perqs for high caliber research faculty.  A federalized system would be better that way and thus be more sustainable, plus there is a larger potential tax base from which to draw to support the system.  Then, too, one might argue that there is superfluous redundancy among public research institutions.  Does every state really need at least one?   I don't know.  It's not a question you see a lot of people asking.

In the absence of federalizing the system as an answer, academics should be prepared to hear from friends who are not in academia that what they do is esoteric and unnecessary.  We need to engage those people in thoughtful argument.  I'm afraid, however, if we do that, too often we'll end up getting the short straw.  This one is tough.

Wednesday, June 13, 2018

What Does Free Speech Mean? Some Issues to Consider

I want to begin with two observations/questions.  The first comes from reading Peter Drucker.  He models communication as a message being sent and then received.  Drucker asserts that the active party in any communication is the recipient (listener).  It is an interesting perspective as most people probably assume that the active party is the sender (speaker). If you take Drucker's view seriously, it would seem that both the sender and the receiver have rights that must be taken into account.  How is that done in practice?

The second observation is about this blog.  I make posts at my leisure, when I want to.  So on that score I seem to have complete freedom.  Yet I have very few readers now and, frankly, I'd like to have more of them.  In part, that is why I re-post to Facebook, rather than simply link to a post.  I'm guessing that more of my friends will read it that way.  But out on the Web, it is much more hit and miss.  Ten years ago there were a lot of hits.  Nowadays, it is mainly miss.  Am I exercising free speech now or not?  Does free speech demand having an actual audience, not just the potential for having such an audience?

One can go a long way in trying to address these observations in a coherent manner and I encourage the reader to try doing that in a way that might produce a consensus answer.  I am going to answer them here with my own views only, not claiming they are how everyone else thinks about them.  And the case in point on which I will base my response is receiving unsolicited emails from vendors.  I get quite a few of these.  I feel no obligation to even read them.  When I do read them on occasion, mainly I feel no obligation to respond.  Once in a while, when the sender is using my current campus address (@illinois.edu) rather than the old address (@uiuc.edu) and the sender seems earnest yet is ignorant that I have retired, I will respond by noting that and asking the person to take me off their mailing list.  While I strongly believe in collegiality as a rule, I do this mainly not to be a good citizen, but to avoid getting future emails from this person.  I don't feel a strong social obligation in this case and indeed it is an uphill battle, since new vendors whom I've never met send email with their own unsolicited spiels all the time. I can't say whether the vendor has better luck with other recipients, but if I am typical of those recipients then the vendor doesn't deserve an audience.  To sum up, the vendor does have free speech, but the audience can safely ignore the message, so when there isn't an audience it doesn't mean that free speech has been denied.

Now we should take the discussion from individuals to groups of like-minded people.  Whereas an individual receiver can simply ignore the message from a sender, a group of receivers may have sufficient power to block the sender's transmission.  In this case, is the group denying the free speech rights of the sender?

I deliberately used an online example above, my blog post, because such avenues are freely available to anyone.  They can't be blocked.  As an alternative to writing out a post like this, I could record a talking head video on my computer and post it to YouTube or some other online video host.  If that avenue is always there, does a group blocking a speaker in a face-to-face setting then constitute denial of free speech, or is it merely pushing the speech to another venue?   This is not such an easy question to answer.  And it is confounded by the following.  Speech may be a market activity, meaning the speaker expects to get paid for giving the talk.  The speaker then may not want to produce a freely available online alternative, as that might cut into earning speaker fees.

The practice of students on some campuses blocking a speaker from giving a talk has drawn a strong reaction in many circles.  For example, consider this piece from the Boston Globe about the episode at Middlebury College, where the speaker Charles Murray was blocked from talking.  I've been asking myself, what if the protest had been somewhat milder, a boycott that strongly discouraged attendance at the talk but without any threats of violence to those who did attend or to the speaker?  Contrary to what actually happened, suppose Murray did give his talk but the auditorium was largely empty.  Would that have produced much the same reaction in the press, or quite a different response?  In other words, is it the violent blocking that is at issue or any sort of blocking whatsoever?   I really don't know.  If a certain type of civil disobedience were deemed acceptable, even if it had the consequence of reducing the size of the audience, my preference would be to embrace that form of protest and shy away from the violent forms.  However, I'm far less certain whether that is a matter of preserving the free speech rights of the speaker or simply a distaste for violence.

Then I want to consider a different setting, where the receiver/listener is somewhat captive and thus is unable to ignore the message sent by the sender/speaker.  This happens, for example, when both are students in a discussion-based class.  As I've written about this example in the past, in a post called Theism - "Pan", "Mono", and "A", I want to try to unpeel the issues some before discussing possible ways to address them.

During the first two years of writing this blog, I was a campus administrator and indeed the blog was hosted on a campus Web site, though the blog was not linked directly from the Web site of the unit I supervised.  I thus felt that while I was representing my own thinking in the blog, rather than agreed upon campus thinking, I had an obligation to respect campus thinking and decision making.  This show of respect happened both in the tone of the posts - exploratory, not accusatory - and in the way arguments were made, highlighting where "reasonable people might disagree."  I mention this because I think some of the free speech issues in the classroom are about tone and style of argument.  Speakers feel they can be blunt and uncaring about how listeners will react.  Also, if the speaker feels disrespected for the speaker's previously held views, the argument is apt to be made to win the point, rather than in a way to get at the truth.

On the matter of bluntness, one should then ask whether the speaker is well aware of how the listener will react, and so is deliberately trying to do harm to the listener, or whether the speaker is ignorant of how the listener will react and doesn't expect what is said to cause a negative reaction.  On the matter of the speaker feeling disrespected, that will promote anger, and anger is a driver for trying to win the point.  It is also possible, however, that the speaker is not angry, but merely egotistical.  Arguing from the perspective of the speaker: if I know more than you, I expect to be right, and my job is to show you the error in your ways.  So lack of humility may be an alternative explanation to anger.  The two factors may mutually support one another, as well.

Some of the arguments I see being made about free speech seem to assume that the speaker can be willfully ignorant and make argument as the speaker sees fit, independent of the sensibilities of the listener.   And it matters not here whether this happens on the open Internet or in a captive situation, such as the classroom I described.   As you might guess from reading this piece, I believe the captive situation requires ways to contain the speaker some because the listener has rights too, while the open Internet does not, because there the receiver can ignore the message.  In my ideal, the best way to contain the speaker would be via a gentle education that aimed to encourage the speaker to seriously consider the listener's perspective.  Even with such an ideal, however, there should be a recognition that it would be a long time coming to deliver such education in an effective manner.  Thus, it is likely necessary to have some rules/regulations that govern speech in the captive setting.   Regulations are never perfect.  There can be too many regulations or a given regulation can be too onerous.  So this is a balancing act.  Purists don't like balancing acts.  But those same purists, when considering free speech, likely entirely ignore the rights of the listeners.

I want to close here by noting a couple of other pieces I've written on this general topic, which shows at a minimum that I'm fascinated by it and don't believe it is well treated in the arguments I read about it.  A month ago I wrote a piece about embracing rules that move us from debate to reasoned argument.  If speech is viewed as a piece of reasoned argument, then more people will embrace speech where diverse views are expressed. Such rules then need to be seen as playing a dual role, one as constraint, the other as enabler.  I think having this duality view is helpful in considering free speech issues.  The other piece, written less than two weeks ago, is about speech in the classroom.  The instructor regulates student speech in a classroom in a variety of different ways.  In a well functioning classroom students get used to the flow, and the instructor with a deft touch is appreciated by the students.  I believe that such an instructor actually embraces the rules of the previous post, at least implicitly.  The classroom is a place where students are meant to learn inquiry methods.  Thus, it is my belief that a well functioning classroom, which might take a chunk of the semester to operate well, can tolerate differences of opinion, and that contentious speech would better be handled after the class has reached this high level of function than early on, when everyone is a stranger. More generally, by which I mean moving outside of the classroom context, I think an ongoing conversation among people with different perspectives has the potential for producing interesting results if they engage in argument, but not debate.  The thing is, argument is slower and more time consuming.   It is impatience that makes productive speech hard to achieve and why there is so much fracture when it comes to contentious issues.

Saturday, June 09, 2018

Blame It On The Bossa Nova

With society evidently in decline, there is a growing cottage industry of folks taking a historical look and from that trying to discern the cause of the decline.

On the liberal side of things, the year 1980 has become focal, witness this recent piece by Paul Krugman.  The idea, in a nutshell, is that the election of Ronald Reagan ushered in the era of "Supply Side Economics," which largely proved to be a sham.  The tax cuts, deregulation, and hostility to organized labor didn't so much make the economy grow, as its adherents would have you believe, but rather created a boon for those already at the upper end of the income distribution, by shifting the return in income generation from labor to capital.  The rest, as they say, is an all too familiar history, featuring wage stagnation for families near the median of the household income distribution, and a variety of social ills that resulted once economic prospects for them appeared grim.  To a certain extent I have bought into this narrative, for example this commentary as rhyme entitled Trickle Down.  Yet I think there are issues with this story that focusing on 1980 tends to mask.  Some of this post is intended to bring those issues out into the open.

A different narrative has emerged from some conservative commentators and/or liberals who are critical of upscale voters and how insular they appear to be.   An example of the latter is this piece by Richard Reeves, Stop Pretending You're Not Rich.  He takes on the little tricks by which those who already are doing well preserve their advantage or better it and then pass on those advantages to their children.  A particularly pernicious way this happens, according to Reeves, is via zoning restrictions in housing that in not so many words are aimed at keeping the riffraff out, consequently denying these people opportunities that really should be open to them.  Many refer to this sort of behavior as gaming the system. The well to do are usually quite expert at such gaming.  They know how to make the system work for themselves.  Yet others complain (not just the riffraff but many people of more modest means) that the system is broken.   I wrote about this some years ago in a piece called Gaming The System Versus Design It, where I argued that even very accomplished people don't know what a well designed system looks like. They then tend to make an intellectual error and assert that the status quo is quite okay - because they do well by it.  So Reeves's piece resonated with me.

David Brooks takes a somewhat different tack on the matter in this piece, The Strange Failure of the Educated Elite, though he comes to a similar conclusion as Reeves.  Brooks contrasts the current system, which he describes as a meritocracy, with what came before, where people of privilege - White, male, Protestants from families of means - ruled the roost.  Meritocracy is fairer and therefore a better system, according to Brooks, except for one thing.  In the old system, the people of privilege did learn a kind of social responsibility, a variant of noblesse oblige, which perhaps survives to this day in organizations like Rotary. The meritocracy seems deficient on educating people on social responsibility.  Thus, the gaming of the system is undisciplined by consideration of the consequences on others.  Brooks wants a meritocracy where the highly educated behave in a socially responsible way.

This point also resonated with me as I recently wrote a post, Sensitivity and Social Responsibility - Can They Be Taught?  I should note here that in college (and in K-12 as well) we do teach students to game the system, though perhaps inadvertently.  The most obvious place this shows up is in the rampant credentialism, which invariably gets the hard-charging student to juggle more balls than is intellectually nourishing, for the sake of padding the resume.  The kids learn to care a great deal about their GPA, and in far too many of the students I see in the course I teach, this overwhelms curiosity and intrinsic interest in the subject.  Thus, the students get a sense that adults are fundamentally instrumental about achieving ends.  It is this mindset which supports the gaming behavior, with no holds barred.

There is something of a puzzle here.  On the one hand, Krugman and his ilk, who focus on 1980 as the beginning of the decline, want to blame conservative Republicans and their anti-government bias.  Brooks and Reeves, in contrast, want to blame educated elites, who tend to be liberal Democrats.  It is their hypocrisy which is to blame.  Given the politics behind these competing narratives, one would think they are opposed to each other.  Yet, as I've said above, each of these stories rings true to me to some extent.  How can that be?  Is it possible they are actually the same story, but told from different vantages?  I want to take on these questions below.

Sometimes in such circumstances I consider my own life events and try to view the circumstances personally.  Having done that I then see if I can generalize from my experience. Indeed, 1980 was a very important year in my life, as that fall I started working at Illinois as an Assistant Professor.  Actually, I was ABD at the time I was hired, but in my contract I got all the perqs of a new Assistant Professor - a course buy out in the fall, summer support the following summer, which was not conditional on finishing my dissertation, though I did get my PhD the following spring.  Conceivably, I could have stayed at Northwestern for another year and gone on the market with the degree already in hand. I chose not to do that.  There were many reasons.  The ones I want to focus on here were quite materialistic.  I was tired of living like a graduate student, in my dingy apartment with crappy furniture, particularly my bed with a way too soft mattress which was giving me lower back pain.  I wanted to make a decent living and have reasonably good things.  I never considered myself a yuppie.  But for that moment I shared with yuppies the view that quality of life is found in material things.

So I started to ask myself whether that yuppie perspective was itself a consequence of being tired about what had come before.  I'll get to that in a second.  First, I want to note some of the social changes I observed going from being an undergraduate at Cornell to a graduate student at Northwestern.  There are many possible explanations for these changes: college town versus (sub)urban university, students mainly from the Northeast versus students mainly from the Midwest, Ivy League versus an aspirant to be a peer institution, the various majors I saw as an undergraduate student (math, philosophy, poli sci) versus the econ majors I saw as a TA, and then the changing time in which we lived.  My definite impression, clearly not a scientific study but something I hold to quite strongly, is that at Cornell the ethos was somewhat anti-materialistic and heavily influenced by the counterculture.  The undergrads I had in my Econ classes at NU, in contrast, seemed far more materialistic - yuppies in the making, if you will.

Now let's get to why the yuppies turned inward.  The antecedents that tired people out were the Vietnam War and then Watergate, though I view them as one bigger thing, the two jumbled together.  Had there been no Gulf of Tonkin Resolution, the Democrats would have held onto the White House in 1968 and our history would have been entirely different. It seems to me that Krugman, Reeves, and Brooks should all consider the consequences of the Vietnam War and ask whether it really is a better "first cause" to explain America's subsequent decline.

Again I'm going to personalize this and use 1968 as another focal time.  I became a teenager in January of that year and then started high school that fall, which was also the time of the big NYC Teachers Strike that closed the schools for a couple of months. The thing is, initially I went to the wrong high school, Bayside High, because my mother thought that would be better for me.  But it was a disaster and I soon transferred to the local high school, Cardozo.  The experience marked a change in my relationship with my mother.  I became aggrieved and untrusting of her, insofar as her making decisions on my behalf.  I concluded that if an authority is to make choices for us, the authority must justify the choice and demonstrate it is good and proper.

Indeed, this was my main lesson about the Vietnam War.  The government made extremely poor choices, often doing so under false pretenses.  From that, quagmire followed.  The war divided us - the hard hats versus the hippies, in a kind of division that has morphed some but still survives to this day. The show that best captured this divide was All in the Family.  Archie Bunker became America's favorite bigot.  He worked the loading dock and was a Nixon supporter. The Meathead, his son-in-law, had the long hair that hippies wore and was much more liberal in his views.

I can't say how much of what we associated with hippies - dope, love, and rock music in addition to the long hair - would have happened anyway as a matter of course.  But peace became a crucial part of the mix and because it was so long in coming it hardened the view that authority couldn't be trusted.

This sense that authority couldn't be trusted and that the country was dividing wore people out.  Who could possibly want to be concerned with social responsibility?  It didn't get us anywhere, or so it seemed.  After Watergate concluded, many people must have felt, couldn't we just get back to normal?

I don't have an adult sense of what normal was like.  The TV shows I watched as a kid, Leave It to Beaver and Gilligan's Island, which I will offer up as emblems of a lot of other programming, presented a Caspar Milquetoast alternative reality, but I wonder if many thought the world largely safe in the way those shows presented it.  Were there strong divisions between liberals and conservatives in the late 1950s and early 1960s, or was a unified world view far more common then?  I don't know, but if so we lost that as a consequence of Vietnam and never really got it back.

Sometimes this division existed within families.  Would hippies as teens and young adults stay as hippies for life?  Or might some of them abandon it all and then become yuppies?  The show Family Ties captured this tension with a light touch.  My sense of things is that there are many who were hippie types in the early to mid 1970s who became conservative when Reagan was President and more conservative as they grew older.

Now let me make two more conjectures.  One is that the Vietnam War crowded out the good elements of LBJ's Great Society, particularly with regard to civil rights.  If achieving racial equality, legally and as a practical matter as well, had been the only issue to occupy the national attention because there wasn't a Vietnam War, I believe we could have made a lot more progress.  To be sure, George Wallace was a candidate for President in 1968 and there clearly were forces of reaction that wanted to block progress on civil rights.  But a much more concerted effort for progress would have been made, and if Democrats had retained the White House in 1968, there wouldn't have been a Southern strategy so evident in changing the political realignment.  Our politics might then have become more earnest and far less cynical.

The other conjecture is based on juxtaposing the film Easy Rider with the current furor about the NFL banning players from taking a knee during the playing of the Star Spangled Banner, which one might use to remind ourselves that some people react very strongly to the symbolic behavior of others when they deem it disrespectful and inappropriate.  Many people held hippies in contempt, as a threat to their own way of life.  That contempt created a strong negative reaction that served as an enabler of the hard right development in our politics.  Without the Vietnam War, even had there still been hippies, I believe they'd have been seen as a curiosity, perhaps, but not a real threat.  Drugs did scare people, certainly. But I believe the drug scene would have been considered far more benign, had there been no Vietnam War.

So I'd argue for the Vietnam War to be blamed as the cause of the country's path to decline.  You can see the selfishness of the educated elite as a consequence.  You can see the anti-tax mood of Republicans from Reagan on also as a consequence.

* * * * *

Let me wrap up.  There is always a difficulty in trying to identify a first cause and use that to absorb the blame, when we know that history didn't start with that.  A case may be made for the JFK assassination being primary.  If JFK had remained as President, with his prior faux pas at the Bay of Pigs, he might have learned from his mistake and we might then have avoided a Gulf of Tonkin Resolution.  LBJ didn't have that prior experience.  So he stepped in it, the biggest pile of poo ever.

And, because I like to end on a light note, perhaps we should consider an antecedent to the JFK assassination.  This gets me to the title of my post.  The song performed by Eydie Gormé came out earlier in 1963. It has the word blame in the title, so it was useful for my purposes. And it serves as a reminder that temporal precedence is not sufficient for causality.   In any event, surely we've made many mistakes along the way.  Focusing on a first cause only as the place to concentrate our blame doesn't get us to ask why we didn't rebound better but instead descended into the abyss.  That sounds like a topic for another post.

Friday, June 01, 2018

There really isn't freedom of speech in a well functioning classroom

I am reacting to a piece from the Chronicle this morning.  I want to make a general point that free speech doesn't happen in the classroom.  Instead, the instructor regulates student speech.  In a course where otherwise the students are satisfied with how the course is being conducted, this observation is mundane, and the examples I will use to illustrate it will show as much.  So, when an outside group comes in and cries foul about how a particular course is being conducted because a student's free speech rights have supposedly been denied, it may make for interesting press, but one needs to ask, do they really have a leg to stand on?

Before getting to my examples, let me make clear that the main pedagogic issue is quite the opposite.  Many students don't open up in the classroom.  There is a "shy student" problem that vexes many instructors.  This is true even when the students are quite able.  Some years ago I wrote about the issue in a post called Teaching Quiet Students, after having taught a class for the Campus Honors Program.  I was surprised then by how many of these honors kids were reticent to participate in class discussion.  Ultimately, I learned that the best way to address the issue is to give multiple modes of communication.  One reason I now have students do weekly blogging is that they can express themselves in writing, and some ultimately are more comfortable doing that than speaking up in the live classroom.

Now to the speech in the classroom issues.  When an instructor lectures, the instructor may take clarifying questions during the lecture, but other questions are deferred till a Q&A session at the end. This delay in allowing students to pose a question is a (temporary) suppression of student speech in the name of letting the instructor provide the foundational content in the lecture.  Nobody considers this a violation of the First Amendment.  It's normal classroom procedure.

Let's move to consider classroom discussion.  The instructor regulates the flow of discussion.  The student who speaks next must raise his or her hand first and then be acknowledged by the instructor.  This is the standard procedure.  Students shouldn't blurt out and shouldn't interrupt another student who is speaking.  These prohibitions aren't specified in the syllabus.  They are so much part of the norm in the classroom that it is reasonable to expect them to be obeyed without delineating them.

Are there times where the instructor won't pick a student with a raised hand?  Yes.  When that student has already commented repeatedly in class, the instructor might very well ask, does somebody other than _____ want to chime in?  This is done in the name of promoting broader class participation.  The student with the raised hand might feel a bit frustrated by this, and if no other student does chime in then it's proper etiquette to ask the original student about what the student had wanted to say.  However, if other students do eventually enter the discussion, the instructor may not feel obligated to return to that first student.  This is okay.  The real issue is whether  there is a good flow in the discussion.  If that has been obtained, the instructor has achieved the main goal.

Are there other times where the instructor will interrupt a student who was speaking?  I can think of two possible reasons for interrupting a student.  The instructor has a responsibility for keeping the discussion on topic, giving students some leeway for sure but not complete freedom on this score. If the instructor is unable to tie what the student said to the previous class discussion, the instructor should cut the student off, perhaps asking the student to reconsider the point in a way that does tie to the discussion, but to think about it for a while first. The other possible reason might surprise the reader, and I may be distinctive in my teaching approach for doing it this way.  When a student makes a very good point, I sometimes immediately cut the student off by saying, "stop."  I want to emphasize the point just made and have the class reflect on it, for fear that if I let the student keep going, the point might get lost in what else is said. Admittedly, it is a bit rude to do this, though the class gets used to it after a while.  And the student who made the good point does get some acknowledgment of that then and there.  So it is not punitive, certainly.  The takeaway here should be that an individual student does not have the right to hold the floor in the classroom, for even a moderately long time.

I want to say two other bits about the Chronicle piece and then close.  The piece mentioned that the student had missed the prior class session, where some of the issues were introduced.  Students who miss class without having an excused absence have less status, in my view, than those who attend right along.  I would expect such a student to be somewhat circumspect for having missed class.  That didn't seem to be the case here.  The other point is about how escalation should occur when there is a conflict between student and teacher.  The right next step, in my view, is for there to be a one-on-one conversation between the two of them.  This is usually not pleasant, at least at first, so there is some temptation to skip this step.  But it is the better alternative.  If either party is not satisfied with how that step has played out, then taking the matter to the department head should happen next.  Indeed, what is so strange in the case reported in the Chronicle piece is that there appeared to be no other university governance on the matter, at least until it had fully escalated already.  The prospect of that repeating is truly frightening to me, and why I'm writing here.

Sunday, May 13, 2018

Adapting the Marquess of Queensberry Rules for Debate to Simulate Argument

The New York Times has recently had a series of pieces arguing that Liberals should be debating Conservatives vigorously in the "marketplace" of ideas.  This is supposed to normalize each group and get away from the extreme views that appear to emerge when every conversation is preaching to the choir.  I want to challenge some of the underlying assumptions in that, offer my preferred alternative, and then work through an example to illustrate.

I want to begin not with politics at all but with each of us, regardless of political affiliation, as potential consumers of product, though perhaps with no interest to buy a particular item.  Some salesperson tries to initiate contact - by ringing the doorbell, calling on the phone, or sending an email.  Do you feel an obligation to respond in this case?

I do not.  I view each of these as an intrusion.  The two live situations, doorbell and phone, seem to operate under the assumption that I might say yes out of politeness, even when I don't want the product.  Indeed, this also seems to be the case for some charitable solicitations.  (And when it's a neighborhood kid, perhaps chaperoned by a parent, e.g. selling Girl Scout cookies, I think that is okay as a way to support the community, but otherwise it is crossing a line.)  The phone one is particularly interesting because, with caller ID, if the number is not recognized I don't pick up.  Very often in this case there is no message left on the answering machine.  The situation itself reveals the caller wants a live person.  The email is slightly different since there is no pressure that way.  But I get many emails to an @uiuc.edu address which, while it still delivers the mail, went out of existence a decade ago. And I get a lot of messages thinking I'm still Assistant CIO for Educational Technology, which I quit back in 2006. The sender could verify that I'm retired, for example by checking LinkedIn or my Blog, but apparently doesn't do that.  If the sender hasn't done the requisite homework, why should I listen to the message?

Let me summarize this conclusion.  In matters of the market, the potential buyer should initiate the conversation, when the buyer is inclined to do so out of interest in making a purchase.  When the seller initiates, the buyer is within his rights to walk away.  There is no obligation to participate in a conversation.   Just about everyone I know (definitely not a random sample of the population and actually not a very large sample either) subscribes to this view.  Whether it is universally held, I can't say.  But I'm going to take it as a universal principle in what follows, because it helps to frame the argument better.

Indeed I have an aversion to getting a sales job even when there is no dollar purchase of a good or service.  Instead, what is being sold is some idea.  The person delivering the spiel wants me to embrace their idea. I want no part of it.  I first encountered this situation in college where some students from another university came to the Union to proselytize about religion.  I participated once or twice, uncomfortable in each situation.  I can't say that I learned anything from this, other than to avoid such conversations in the future.  I subsequently experienced this sort of thing as a young faculty member when thrown into departmental politics, where the infighting was quite fierce (because the stakes were small).  And later when I became involved with learning technology and where the overt goal was diffusion, some educational technologists embraced a whiz-bang approach.  This was the train, it was about to leave the station, and you better get on it before it does.  I confess that I fell for that one for a while, but I soon came to my senses and insisted that you had to connect how the technology was going to help teaching and learning in a plausible way to deliver a credible message.   When I was a more experienced administrator I tried to keep the bs to a minimum and give an accurate picture of what was going on.  That's been my style since and it is the style I prefer in others.

Now let me turn to politicians briefly.  They are speaking to a broad audience when we the public hear what they have to say.  Doing a sales spiel (stump speech) is an occupational hazard.  Nuance and troubling facts that offer a decent counterargument are ignored.  This is true on both the left and the right.  But otherwise, I don't think the situation is symmetric at all, as I wrote when I deconstructed a piece by Bret Stephens in this post, The Demagoguery of the Reasonable Conservative Commentator.  Nonetheless, if you were/are a fan of The West Wing then you know that in Season Six the consultant Amy Gardner was brought in to advise the candidate Matthew Santos on "The Presidential Voice" where the trusted candidate speaks with gravitas and talks in broad strokes.  (This is in an episode called Freedonia.)  I take from that show that this is the nature of political speech in public settings.  The voter then has to play a game of inference, inverting what the speech really implies.  Of course, we all play inference of this sort in every conversation where the details aren't fully spelled out.  But it is more so with political speech, ergo Lincoln's line about fooling some of the people some of the time.

Now I want to switch gears and distinguish between argument, which I like, and which I participate in regularly, even as a reader or as a listener, and debate, which I'm far less fond of and which I frequently find less satisfying, though I did enjoy watching The Great Debaters.  More recently I watched a video of an old debate between James Baldwin and William Buckley, and while Baldwin was interesting to see here, I was offended by Buckley's demeanor, which seemed condescending to me.  I had seen Buckley debate before.  He has a tendency to make a flip remark when he doesn't actually have a refutation to what the other side says.  It's not a tactic that I appreciate.  With this as background here is the distinction I want to emphasize.

With argument the participants start from positions of perhaps opposing views, though there may be some overlap.  The goal is to produce a deeper understanding of the situation.  Neither side will likely prevail in full, especially if there is some merit in each side's original position.  Instead, what will emerge is something new that is produced from the prior views during the argument.  Mary Parker Follett called the process Interweaving.   For the process to work, each participant must acknowledge the good points made by the other, and when a convincing refutation of one's own maintained position is offered up, that must be acknowledged - touché.  For an argument to take place, the participants are clearly necessary.  An audience is not.  Indeed, an audience might encourage a participant to grandstand and become less flexible as a consequence.  If there are only modest differences of opinion among the participants at the outset, perhaps the grandstanding can be contained and the audience might then learn how good argument works.  I fear that many students today don't have good models of what effective argument is about, so they never learn to want to participate in argument.

With debate the participants start from opposed positions and adhere to those positions throughout.  They don't modify their views at all during the debate.  The goal is to win, by convincing the judges or the audience that their side is making the case better than the opposing side.  Debate requires non-participants to watch and evaluate, then to declare a winner.

The Marquess of Queensberry Rules in my title emerge from the observation that a sales spiel can be an effective debate tactic, even as it masks the truth.  It could produce a win by propagating a myth if the judges were prone to embrace the spiel.  The deliberate promotion of myth should be deemed "hitting below the belt" and against the rules.  A debate needs to adhere to basic norms of fairness.  Promoting a myth that is known to be false ahead of time is one example of unfair behavior.  A full adaptation of the Marquess of Queensberry Rules would delineate various possible departures from fairness and rule them all out of bounds.  But rather than articulate those norms here I want to do something else.

I'd like to get some insight into the debaters' view of epistemology.  How did they come to hold the views they will be promoting in the debate?  If the debaters are true believers, the debate will offer a clash of views and the sales spiel approach is quite likely to be deployed by each side.  In contrast, if the debaters claim to be swayed by evidence, so that when credible new information is presented they might change their views to accommodate it, then the form of debate itself might not be so different from argument, and the audience in attendance might be able to synthesize the arguments in a way that produces something new.

With that in mind imagine a pre-debate survey administered to the participants, the results of which get released to the audience prior to the debate.  Here are some hypothetical questions that one might pose on such a survey. 

1.  Are you ever skeptical of your own held views?
1a.  If you answered yes, how is that skepticism expressed?
1b.  If you answered yes, have you ever publicly changed your view?
1c.  If you answered yes to 1b, can you give an example of that? 

2.  When you present information to support your position are you sometimes aware of other information that might repudiate your position?  If so, how do you treat this other information?

3.  In preparation for debate do you argue with people who hold dissimilar views to your own?  Have you had prior experience of argument where you've actually persuaded somebody to come over to your side?

4.  Likewise, have you had prior experience when arguing with people with dissimilar views of switching your own view to be more in accord with theirs?  If so, can you give an example of this? 

The list of items might be longer, but this should give the idea.

Now I want to switch gears again and challenge those who believe that debate among opposing points of view is critical for a democratic society.  I believe that is not true in general, or at least it rests on an implied assumption that the debaters are willing to argue toward the truth.  If that assumption doesn't hold, it is my view that debate can produce very little other than the enmity of the participants and a lot of blather.  The old TV show Crossfire offers evidence of this.  Jon Stewart's takedown of the show makes the argument far better than I ever could.  And, of course, the show went off the air soon thereafter.  Have we learned anything from that experience?

* * * * *

Here I want to give my example.  I am going to take on Arthur C. Brooks in his piece Why Do We Reward Bullies?  Brooks is President of the American Enterprise Institute, a well-known Conservative think tank.  I'm going to go through this twice, first in debate mode, then in argument mode.  I hope this illustrates the difference between the two.

Debate Mode

Brooks makes two main points.  First, we should fight bullies rather than cower before them.  If we fought them consistently they'd lose their currency and disappear (or at least appear far less frequently).  Second, bullies need an audience to thrive.  They play to the audience.  So don't blame the bullies for the fact that there is a ready-made audience for their bullying.  That includes President Trump.  He is simply satisfying what the market demands.

My counter will show the two points are mutually inconsistent.  Having demonstrated that inconsistency, I will have won the debate and it will be over.  However, that will not elucidate the underlying situation at all.  So for the reader, the results will be unsatisfying.

I begin by considering the decision to capitulate to the bully or to fight the bully as akin to the fight or flight instinct in most animals.  Since I actually have a class session in my course on the Economics of Organizations on Conflict, where I like to briefly consider a Darwinian approach, it is worthwhile to reflect on the emotion associated with flight versus the emotion associated with fight.  Students readily agree that flight is associated with fear while fight is associated with anger.  If a normally dovish person is going to take on a bully, the person is going to be very angry.  The person will have gotten worked up into a lather.  Being angry at the bully, the person will blame the bully for the bullying behavior.  That's what will happen.

Brooks wants people to take on bullies but then not blame them.  Those are mutually inconsistent.

Debate over.

Argument Mode

Here I want to begin by noting that I actually have some sympathy for what Brooks seems to be driving at, but I think his framing of the issues is not good, so he ends up painting himself into a corner.  The first part of the argument is to search for a better framing.  Then we can ask whether the problem can be defined in a way that still makes sense to Brooks - in other words, whether he is somebody who might adjust his view after hearing the argument.  Then we can get at solutions.  We might still disagree vigorously on how to solve the problem, but we might get much closer on what the problem is.

The first part of the argument is about defining what bullying is, which Brooks never does. Can there be aggressive behavior without bullying?  For example, NBA players are known to talk trash during the game. (Larry Bird in an earlier era and Draymond Green nowadays come to mind.)  Is talking trash bullying or is it simply part of players competing against one another?  What about a prosecuting attorney taking on a hostile witness?  Is that bullying or a necessary way to get at what the witness really knows?  Frankly, I don't know where to draw the line between bullying and legitimate forms of aggressive behavior.  Might it be that all forms of aggressive behavior in political discourse should be questioned, regardless of whether the behavior is legitimate or bullying, simply because aggressive behavior is inappropriate in political discussion?   If so, then Brooks' piece, while not a complete red herring, misdiagnoses the problem from the outset.

The next part of the argument is to focus specifically on anchors for news/commentary shows.  With that I think it worth mentioning the Aaron Sorkin vehicle The Newsroom, which was far less satisfying to view than The West Wing.  The Jeff Daniels character in the show, Will McAvoy, was actually a prosecuting attorney before he switched to doing the news, and the premise of the show from the get-go was that he would bring his prosecuting style to the news, no holds barred.  We might then ask, is this a good way to do the news or not?

My belief - and I used to be a regular viewer of the PBS News Hour, which favors a more inquiring style of doing the news, though I have since gone pretty much cold turkey on watching any news - is that the inquiring style started to fail at around the time the Tea Party experienced electoral success.  And maybe there was evidence of that failure much earlier, when Newt Gingrich was Speaker of the House.  In fairness as perceived by the viewers (and as perceived by the political parties), the inquiring style demanded representatives from the Democrats and the Republicans, either simultaneously or sequentially, so the audience could see different points of view represented.  But those representatives could stonewall by spewing a party line rather than give thoughtful and reflective responses.  This spewing of the party line was particularly dissatisfying to watch.  (It was very much like how I characterized debate among true believers above.)  The prosecuting style was meant to remedy those issues.  Alas, every cure has side effects, some unanticipated when the cure is first implemented.

The next part of the argument is to look historically and ask whether these issues have always been with us.  On this score I look back fondly to when Walter Cronkite was the most famous newscaster in the country and the political parties were far less polarized.  Might it be that a more restricted supply of news providers, coupled with tighter regulation of how the news is delivered, would return us to that idyllic time, when the news providers were trusted?  If wishing would make it so.  I will offer two points to counter this view.  First, the film Network dates back to 1976, when Cronkite was still the main man for CBS News.  Network is a remarkably prescient film.  Paddy Chayefsky, who wrote the screenplay, was able to discern all the ills of the current news shows 40-plus years ago.  Given that's true, we should question whether the time of Cronkite was really all that idyllic.  Second, back then TV channels were broadcast over the air, so supply could be restricted by the requirement that networks operate at different frequencies in the radio spectrum; TV provided by cable or satellite faces no such constraint.  Further, regulate TV and all that might happen is for the programming to move to the Internet.  Netflix, for example, might begin to offer the news.  When only some of the providers can be regulated, regulation does not seem a very effective solution.

The next part of the argument is to look at current-day approaches to stop bullying outside the sphere of political discourse (though they overlap with that sphere).  Two of these worth mentioning are #BlackLivesMatter and #MeToo.  Brooks doesn't mention either of these.  I can only guess at the reason for the omission.  (My guess is a general aversion to collective behavior of any kind.)  What these approaches show is that collective responses to bullying can raise attention to the problem and perhaps empower individuals to come forward to point out specific acts of bullying.  Individual action alone doesn't seem to work well, because the individual acting alone is intimidated by the bully.  Let's say that is true.  Might collective responses also work in the arena of political discourse?

My sense is that it won't work.  Both #BlackLivesMatter and #MeToo are fundamentally about fear.  The victims are fearful of the bully.  These fearful people have banded together because there is strength in numbers.  In contrast, viewers of either Fox News or MSNBC are addicted to watching because the shows on each network are sure to provoke anger in the viewers.  The viewers have become addicted and want to be so provoked.  Karl Marx argued that religion is the opiate of the masses.  In this sense Fox News and MSNBC are the prophets of the new religions.  Thomas Edsall's most recent column, Which Side Are You On?, provides good support for this view.

In turn, all private news organizations are in a competition to attract eyeballs.  The more eyeballs, the more revenue.  Capitalism is fundamentally what drives this behavior.  If the viewing audience could be guaranteed in some other way, perhaps the programming would try less to stoke the audience into a rage.  But there is no way to guarantee an audience.  So rather than do the right thing, they do the profitable thing.

I want to note that this is not new.  Back in the Walter Cronkite days The New York Daily News had a greater circulation than the New York Times, and the New York Post, which had been a respected afternoon paper, switched over to mostly tabloid news.  Social networks aren't causing this, though they may be exacerbating the problem. The problem has always been with us.

* * * * *

Argument tends to be slower and far more nuanced than debate.  People expect there to be simple answers to social issues and debate encourages that expectation.  Argument recognizes that there are myriad issues at work and no single simple solution to get rid of the aggressive behavior in our political discourse.

How would Mr. Brooks respond to my piece were he to read it?  Dismiss it with an ad hominem attack on me?  Or embrace some of what I said while trying to refute some of the specifics?  I don't know.  I do know that Libertarian types in general are loath to acknowledge market failure, and the issue Mr. Brooks has identified surely is a kind of market failure.  If he did acknowledge that much, what then?

In my view, the right thing to do would be to argue further.

Friday, May 11, 2018

Getting Connected to Oneself

I spend a lot of my time in introspection and have done so for much of my life.  Having that inner conversation may be one way to consider being connected to oneself.  Yet while I do it a lot, some of these conversations don't really draw me in.  They are more chatter than anything else, time fillers, nothing more.  There are other conversations that are more gripping, some of which result in writing a blog post like this one.  But still, they seemingly are products of the conscious mind only.  Some years back I read On Not Being Able To Paint and it was an eye-opener for me.  Milner explains that the conscious self often blocks the subconscious, and in so doing we become dull and our creativity is hampered.  In fact, this is the core problem she identifies to explain why she has trouble painting.

We tend to think of our own subconscious mainly at work in dreams, and of course it is there, though interpreting our dreams and what the subconscious is driving at is an art, one that most of us probably find elusive.  We may be less aware of the need for daydreaming, as a way to release the subconscious where it can better interact with the conscious self.  Trying to follow Milner, I'm hoping that writing for me is a way to do that, where during the process of composition I fall into a reverie, lose track of everything else, but zone back in after a time.  So my indirect goal with the writing, quite apart from producing the essay for others to read, is to experience a sequence of finding reverie, then returning to more conscious awareness, and then repeating the process.

This piece, in particular, is motivated by wanting to touch the subconscious regarding some serious health issues that I am now facing.  Of course, I've consciously thought about that quite a bit.  But, so far, it doesn't seem as if the subconscious has weighed in on the matter.  (Last night I had a dream about a rather horrible gigantic monster that was just waking up, getting ready to wreak havoc.  Perhaps the subconscious is beginning to express itself on the matter, though maybe not.  Many other of my fears could explain that dream.)  Are the health issues really of no consequence in the grand scheme, because life goes on?  Or have I simply not given the subconscious enough of an opportunity to express itself?  Indeed, I've written very few blog posts of late.  I have this feeling that I should get back to that and maybe I will, though I seem to have such a small audience.  Really, that shouldn't matter.  I procrastinate now in composing these slow blog posts because I don't want to fully repeat things I've already written, and I want to say something of consequence as well, mainly to prove to myself that I still can.  Then I find myself getting stuck on the themes I come up with.  The previous post alludes to that in its final section.  Lacking a way to get unstuck, I then look for other reasons not to write.  That's where the small audience comes in.

I do something else much more regularly.  Every day I will read the Quotes of the Day, where four quotes are presented.  I select one based on criteria that I would find hard to articulate.  Let's just say I look for the one I find most pleasing, for whatever reason.  Then I add my personal quip to the quote and post it to Facebook.  This routine has become an ingrained habit, one I partake in over my first cup of coffee in the morning.  (Looking back through my Facebook wall, it appears I started to do this in 2014.)  I mention it here because the quip is evidently a reaction to the selected quote.  It may be interesting to ask where the quip comes from and, if others were to do likewise in reading the Quotes of the Day, whether they'd come up with similar stuff.  I don't know, but let's say they would.

I want to contrast this with my other short writing that I do on a daily basis, which started at roughly the same time.   I write a rhyme that I post to Twitter (which then is posted to my Facebook wall).  Some of the rhymes have links associated with them, something I read, in which case the rhyme is a reaction to the piece and in that sense the rhyme is like the quip I write for the Quote of the Day.   The more interesting case here is when there is no link.  Where does the rhyme come from then?  Does the subconscious emerge here in "choosing" the rhyme? 

I think that is a possibility, especially when it's a subject or theme that comes to mind first, although some of these rhymes are riffs on ordinary experience, typically something recent that has come to my attention.  There is no subconscious needed for that, just a nose for small things that might be the subject to write about.  But even then I might connect the subject to something entirely unrelated.  This one, for example, connects the first cup of coffee in the morning with Edgar Allan Poe's The Raven.  Why make such a connection?  The best answer I have to that question is that my subconscious needs to assert itself and such odd connections offer a way to do that.

Sometimes it might just be word play.  I come up with a couple of words that rhyme.  That's the starting point.  Then I might try a line where the first word concludes the line and another line that ends with the second word.  I ask myself, is there a potential for a (very short) story here by juxtaposing the lines?  Suppose there is.  Then a decision needs to be made about format.  Many of my rhymes are limericks, but I've experimented with other formats as well. A different one I kind of like has two verses, each verse with three lines.  The word at the end of the first line rhymes with the word at the end of the second line in each verse.  Then, the word at the end of the third line in the first verse rhymes with the parallel word in the second verse.  This one is my favorite with that structure.  Is this particular rhyme veiled social commentary or is it just nonsense?  I'm not sure.  I'm also not sure where the conscious self takes over and where the subconscious holds sway. But I feel their interaction more with the rhymes than with the quips.

Some of this may be the time allotted to the task.  I almost always post the quote with the quip first.  And I come up with the quip almost immediately, perhaps a minute or two, not much longer than that.  I do have an internal censor (more about quality than about whether stuff is too risqué).  I think of the activity as trying to find something when I have a pretty good idea already where to look.

Though I post the rhyme second, I've actually written it earlier, quite often the day before, sometimes, unfortunately, when I wake up in the middle of the night and can't go back to sleep.  There is more exploration with the rhyme on the topic, on how particular lines might go, on what is needed to fit it all together.  The sense of exploration is pretty close to the feeling of reverie I mentioned earlier.  When it's happening I am most engaged.

And now I want to talk about my fear about writing.  It's that these shorter forms are kind of like eating dessert first.  They spoil the dinner.  If this is true, I'm having trouble composing the longer pieces because I spend too much time on the shorter ones.

Or it could be that I'm just out of practice.

Monday, April 30, 2018

Sensitivity and Social Responsibility - Can They Be Taught?

With malice toward none, with charity for all, with firmness in the right as God gives us to see the right, let us strive on to finish the work we are in, to bind up the nation's wounds, to care for him who shall have borne the battle and for his widow and his orphan, to do all which may achieve and cherish a just and lasting peace among ourselves and with all nations.
Abraham Lincoln
Last Paragraph of the Second Inaugural Address
I asked myself what quote might be used to support the title of this post, if not to exemplify social responsibility itself then to discuss the necessary feelings that are the precursor to social responsibility.  I thought of the above, though I know that quoting Lincoln is tricky in many ways, especially because it's very hard to live up to his example.

Here's a little backdrop to make better sense of the quoted paragraph.  America was still engaged in the Civil War at the time of this address, though by then the war was winding down and it was evident that the North would prevail, yet the timing and the terms of the peace were still uncertain.  The war itself was horribly destructive and in a very real sense there were no winners, since everyone felt devastated.  Certainly, Lincoln felt that way.  The peace would be just as difficult to forge and it was unclear how reconstruction would work.  Lincoln was setting the stage for that, though definitely not preordaining what would happen.

This fantastic essay by Garry Wills does a complete deconstruction of the full speech (which is available at the link above, is not long, and is definitely worth the read itself) and I would encourage people to read Wills' essay slowly and completely.  The myriad complexity Lincoln was dealing with, coupled with his religiosity (which I was previously not well aware of) that made him see the Civil War as the necessary work of God to punish both North and South for the sin of slavery, as well as the various crimes committed during the war, presented a seemingly impossible situation to remedy.  Yet finding a remedy was the task at hand that Lincoln took on.

One wonders whether in our contemporary lives, as we go about our ordinary business, we ever confront situations that parallel what Lincoln had to deal with, even if the situations we confront are not nearly as intense.  A couple of weeks ago I had jury duty and ended up being empaneled on a jury for a criminal case.  By that time I was already cooking on this post about social responsibility, so I paid more attention than I otherwise might have to how the jurors went about their work.  While we were not of the same mind regarding the guilt or innocence of the defendant, everyone on the jury was earnest, both in arguing for their own held view and in acknowledging the opinions expressed by other jury members.  All jurors took the responsibility seriously and gave it their all.  It is just this sort of behavior that exemplifies social responsibility for me.

It is tempting to apply Lincoln's address to our current national politics, but I won't do so here because I don't think I'm skillful enough to pull it off and because my own perch is not lofty enough; too much of the politics I tend to see from one side only.  But I will make one observation on that front before moving on to the topic I do want to discuss.  There is a tendency to talk about particular policy prescriptions first, and then the policy prescriptions themselves become the issue - for or against.  In other words, a solution is proposed and then that serves to draw a line in the sand.  The presumption behind this approach is that the problem description has already happened.  I think, however, that the problems are typically underspecified, because they are seen from the perspective of some of the populace but not all, and because they are seen in isolation from other issues that surely are related.  So this approach is problematic for those reasons.  In any event, I might write about our national politics again in the not too distant future, but here my focus is on undergraduate education.

A couple of years ago I wrote a post called A Vision of College Education with a Strong Historical Basis, which was based on a talk given on campus by Harry Boyte.  He emphasized an education steeped in good citizenship (his term) and his talk resonated with me in several ways.  In my recent teaching, I have experienced what I'd term a failure of good citizenship in the classroom, from poor attendance to dysfunctional group project teams.  So I am returning to the same themes here, though calling it social responsibility (my term) instead, as that makes clear it should go well beyond voting, paying taxes, serving on a jury, etc., and include all aspects of social interaction.

In Boyte's talk he used the metaphor of educating the head, the heart, and the hand.  I'm going to exclude educating the hand here, as I don't have anything to say about it.  Most courses that I am aware of at the university have a 100% focus on educating the head.  Certainly until recently, I would have said that is my focus in teaching economics.  I want to consider how instruction would change if the scope broadened to include both the head and the heart.  With that I want to restrict attention to college students on residential campuses who are in the traditional age category (usually described as 18- to 22-year-olds), because that falls within the realm of experience with which I have some familiarity.

We should first consider the education that students get about social responsibility before they enter college.  Obviously, that will vary from student to student, as much of this education will take place outside of school.  It is conceivable that students learn social responsibility from their religious training.  In my post about Harry Boyte's talk I mentioned this essay by Albion Small, The Bonds of Nationality, a piece from 1915.  Small argues that social responsibility must indeed be taught and that churches are the right place for doing that.  A more contemporary writer, Nicholas Kristof, has argued that some evangelical Christians show extreme selflessness and social responsibility and that we should be aware of their good example.  Nevertheless, I would take this as the exception rather than the rule.  Too many people live in one world on the Sabbath, where they do practice what the religion teaches, but then act entirely differently during the work week, in a much less socially responsible manner.  Further, I'm afraid, responsibility within the faith is not sufficient in our society today, yet responsibility outside of the faith is not learned nearly as well.  Indeed, antipathy to others who are unlike ourselves is a major issue, a lesson seemingly taught by some religions.  Education in social responsibility would produce acceptance of others and respect for diversity of background.

There are non-religious activities that teach kids about social responsibility during the grade school years.  For example, many kids become boy scouts or girl scouts.  Scouts are taught to do good deeds.  What counts as a good deed?  Will the lesson to do good deeds as a kid stay learned as an adult, well after they stop giving out merit badges?  These are some of the questions I want to take up in this piece.  Even if these lessons stay learned, however, I don't think that the good deed view of social responsibility is sufficient, as I will explain below.

Elsewhere I've remarked that for the last few years, as I've been walking on campus, students have been prone to hold the door for me as I enter a building.  I don't remember this happening ten or fifteen years ago, so I attribute it largely to me.  My pace is slow and a bit labored now, and there is evident gray in my beard.  Helping the elderly is surely the polite thing to do, so perhaps holding the door counts as a good deed.  But I want to get well beyond this sort of example.  There is no need to look carefully to see that my walking is slow and labored, and holding the door for somebody else is simply common courtesy.

To see what is in front of one’s nose needs a constant struggle.
George Orwell

Over roughly the same time interval in which I've observed students holding the door for me, I've become aware of expansions of the vocabulary to include terms such as microaggression and mansplaining.  These terms didn't arise out of nowhere.  They are the product of real social issues, situations where people are hurt by the behavior of others.  One wonders how many former scouts nonetheless engage in this sort of condescending behavior, which I assume comes not out of maliciousness but because they are unaware of the consequences of their own actions.  The use of the word sensitivity in the title of the post is there deliberately, to indicate that social responsibility demands an awareness of others and some understanding of how what we do is received by them.  If scouting doesn't produce such awareness, then what might do that?  And if Orwell is right that such perception requires substantial effort, what then might encourage a person to persist in making this sort of effort?

Let's next consider students in their first year of college, including the late summer when they may already be on campus but classes have not yet started.  This is the most formative time for students in their college experience.  In particular, even if students don't do this consciously, they may be shopping around for a personal philosophy, one that they embrace for themselves, one that is distinct from the cultural environment they lived in while growing up, which largely reflected the views of their parents. Will social responsibility be a part of this new personal philosophy?  What would encourage that to happen?

There are some obvious issues that need to be confronted before addressing these questions.  For many students who are starting college, this is the first time living away from mom and dad for an extended period and the first time the students are responsible for their own supervision.  It's good for readers to recall their own time in college.  It's quite likely that there was an extended period of overindulgence, which, to give it a label, we can call: when the cat's away the mice will play.  (In case this is not obvious, here the parents are the cat, while the students are the mice.)

It's not that you want to thwart such playful behavior.  Experimentation of this sort is essential for the student's growth.  It's what lesson the student gets from the experience that matters.  Might that lesson then develop into a personal philosophy centered in hedonism and nihilism, each of which would impede developing a sense of social responsibility?  If, in contrast, the student moves on from this phase, learns to accept pleasure as a small piece of the puzzle rather than as the puzzle itself, it would seem that the student has a far better chance to develop a sense of social responsibility.  We need to ask here, what will tip the balance, one way or the other?

A different issue arises from the comparatively high tuition that students or their families pay.  As I've noted several times before, the full boat in-state tuition and fees at the U of I are substantially higher in real (inflation adjusted) terms than the tuition that my parents paid for me back in the 1970s at elite private institutions (first MIT, then Cornell). The high tuition tends to encourage mercenary tendencies in the students and that itself can block developing a sense of social responsibility.

In a recent piece from the Guardian the author argues that Business Schools bring out these mercenary tendencies in the students and acculturate them into a worldview that capitalism is good and essential, irrespective of the social harm it might cause.  While the Business School may do this to a greater extreme than elsewhere on campus, regarding the teaching of social responsibility the Business School might serve as the canary in the coal mine, showing how the education itself biases students in directions that work against social responsibility.

If we educate our graduates in the inevitability of tooth-and-claw capitalism, it is hardly surprising that we end up with justifications for massive salary payments to people who take huge risks with other people’s money. If we teach that there is nothing else below the bottom line, then ideas about sustainability, diversity, responsibility and so on become mere decoration. The message that management research and teaching often provides is that capitalism is inevitable, and that the financial and legal techniques for running capitalism are a form of science. This combination of ideology and technocracy is what has made the business school into such an effective, and dangerous, institution.

There is then the question of whether this might be remediated by a course or two that focuses on social responsibility.  Most Business Schools indeed have such classes.   The author of the Guardian piece is not sanguine about this solution at all. The courses are a token attempt at addressing the issue, nothing more.

The problem is that business ethics and corporate social responsibility are subjects used as window dressing in the marketing of the business school, and as a fig leaf to cover the conscience of B-school deans – as if talking about ethics and responsibility were the same as doing something about it. They almost never systematically address the simple idea that since current social and economic relations produce the problems that ethics and corporate social responsibility courses treat as subjects to be studied, it is those social and economic relations that need to be changed.

If we take this critique seriously, it suggests that teaching social responsibility must be done holistically across the curriculum, not in one course or two.  Perhaps the high enrollment classes should be exempt from this burden, because they would become impossible to teach otherwise.  But the remainder of the courses should not be exempt.  Further, we need to rethink the curriculum insofar as high enrollment courses dominate the first year.  Teaching social responsibility should happen from the get-go.  That is not compatible with a program of study where the first semester is all high enrollment courses.

The last thing I'd like to consider before turning to my very broad strokes recommendations for social responsibility education is what students learn on this front from the "school of hard knocks" while they are in college.  Is that a good teacher of social responsibility or not?  It seems to me that question is worthy of a broad and extensive evaluation in a serious research project that would like to get a handle on the answers.  Lacking the results from such a study, I will content myself with what I've garnered from my recent teaching.

In my class, students do weekly blogging where they write about their experiences, frequently those are experiences while in college.  Some of the more poignant posts are about small acts of betrayal committed by peers and/or by people they previously thought were their friends.  They are actually prompted to come up with such examples, as a fundamental point in the economics of organizations course is to consider "transaction costs" and the "holdup problem" or, in other words, that in economic exchange people behave according to their own advantage, which sometimes implies they behave to the detriment of others.  This is not an uplifting view of human nature.  Moreover, if one hasn't experienced such opportunism previously and first encounters it in a state of naiveté, before having worked through how to lessen the likelihood that others will behave in a socially detrimental manner, it can be quite deflating to witness this sort of behavior first hand.

The most common of these that students write about, by far, is group work done for a class, where one or more of the team members "free rides" on the efforts of the other teammates.  It's usually the diligent students in my class who come up with such examples.  If I juxtapose the students who took the wrong lesson from the earlier experience (when the cat's away the mice will play) with those students who learned the right lesson there but then experienced these small acts of betrayal, neither group is apt to embrace social responsibility.

Tarzan comes home after a long day at work.
Tarzan:  Jane, bring me a double martini.
Jane dutifully brings Tarzan his drink.  He downs it immediately.
Tarzan:  Jane, bring me another double martini.
Jane:  Tarzan, what is it?
Tarzan:  Jane, it's a jungle out there.

I fear that many of the diligent students I see in my class come to the same conclusion that Tarzan does.  Of course, everyone experiences some acts of opportunism by others.  In my class, students who can't come up with better examples talk about their experiences driving to work during the summer (in the Chicago area), where they have about an hour-long commute each way.  They report the unsurprising result that drivers are rude to each other, exhibiting a variety of selfish behaviors in the process.  Having grown up just a couple of blocks from the Long Island Expressway, that jibes with my experience when I learned to drive.  But that's not the issue.  The issue is whether the experience generalizes or if instead it only constitutes a specific niche and elsewhere people learn to act in a socially responsible manner.  There is some serious research that shows the rich are less compassionate.  For those who were born with a silver spoon in their mouths, perhaps the mechanism is different.  But for those among the rich who worked their way up (probably starting from the upper middle class), I suspect their worldview is quite like that of my diligent students who have experienced these small acts of betrayal.  They take it as the norm, something they can't influence by their own behavior.  So they come to disregard others, including many of their peers.

I will conclude this piece with some basic ideas about the education I have in mind.

The Golden Rule as the foundation for social responsibility - Students must come to embrace treating others as they want themselves to be treated.  It should serve as a guiding principle in their own personal philosophy. This embrace must go beyond the purely intellectual (in Boyte's terms educating the head).  It must be emotional as well (touching the heart).  Further, students must develop efficacy in acting on the Golden Rule so they can indeed benefit others via their own actions when they set out to do so.

Experiential learning where social responsibility ends up as a consequence of reciprocity - I've been surprised over the past few years in my course evaluations where some students report that I evidently care.  Why make such an observation?  My conclusion is that this contrasts with their more frequent experience, where the institution and the people in authority with whom they interact don't seem to care much at all.  As I'm making a claim here that this is the norm, it would actually be good to investigate whether that claim is valid.  Here, assuming it is, a big question still to be considered is how instruction must change so that students come to believe that their instructors care about them.  Likewise, how must the rest of the system change so students perceive it is sensitive to their own needs? A big part of providing an answer to the question in my title is whether the system can change in this way or not.

An emphasis on duality with the benefits from both diversity and oneness -  Students need to learn to be inclusive, to respect the differences that exist among us, and to understand that others with different backgrounds can enlighten us by bringing a new perspective to aid our imagination.  Yet students also need to accept that we are essentially the same.  The Golden Rule itself follows from our oneness.  There is no Golden Rule when there is tribalism.

Develop sensitivity awareness by bringing our failures and foibles out into the open rather than by keeping these personal defeats concealed out of shame -  In considering others we are always self-referential.  Empathy, in particular, is learned from our own failures and then reflecting that outward.  Many students now have no avenue to discuss their own failures, so they do not come to see them as providing valuable life lessons.  Doing this will require a gentle hand from the teacher and a very safe environment so that students become comfortable opening up on these matters.  Another big part of answering the question in the title is whether we are up to meeting these safety needs or not.

  • Two specific topics that have been written about recently might provide the focus for this sort of education.  One of those is loneliness in college and what students do to counter it.  The other is student anxiety and what can be done to manage it.  For example, it may not occur to a diligent student that a teammate is very anxious and hence appears to be a shirker for that reason, rather than because the teammate really wants to goof off.  Getting the diligent student to appreciate that there is more than one possible explanation for the observed shirking behavior will go a long way toward developing the student's sensitivity.

* * * * *

This piece took me a long time to draft.  For reasons I still don't fully understand, I was afraid to write it.  So I procrastinated in doing so.  I've rewritten it already several times.  It probably still needs another rewrite or two, but for the time being I think I've gotten these ideas out of my system.  I hope that others begin to discuss these issues.   I'm convinced it's a conversation we need to have.