I have lived in Champaign, Illinois for a long time, more than 38 years, having started as a faculty member in Economics at the U of I in fall 1980. For the first 10 years I lived in town, first in an apartment, then in a condo. For the last 28 years I've lived in houses closer to the edge of town. For 14 of those years I lived in an old Victorian house that had been moved to Champaign from Villa Grove and renovated after the move. There we had a cornfield right outside our back yard and another across the street in front of the house. Then we moved to a brand new house in a subdivision where new construction was still going on. In this case the cornfield is a block away, and there are other cornfields within walking distance of the house. (Some years, of course, those fields are planted in soybeans rather than corn.) I consider myself a displaced New Yorker, having grown up in Bayside, Queens; I subscribe to the New York Times and am a lifelong Yankees fan. But my perspective is different from that of my high school classmates who stayed on the East Coast after college. Life in a Midwestern college town is different, and it does have an effect on one's perspective.
The above is meant as backdrop for some speculation on my part, mainly in reaction to the piece The Hard Truths of Trying to 'Save' the Rural Economy by Eduardo Porter, and also a bit in reaction to Thomas Edsall's latest, The Robots Have Descended on Trump Country. The starting point is that many folks in rural America, and likewise in Rust Belt towns where the big manufacturing employer closed the factory some time ago, stayed put because they had roots in those locations, even though their job prospects were nil. So they have suffered economically and then gone through all the psychic pain associated with that (think of the opioid crisis and suicide). In Edsall's piece, several economists are quoted on the dislocation effects of automation and on the point that, absent any policy to counter those effects, people will suffer. There was agreement on that. There wasn't as much agreement on what that countering policy should be. In Porter's piece, the argument is that the agglomeration effects of big metropolitan areas are so strong that the vast majority of jobs will be in those areas. Therefore, if a policy is established to relocate people so they can secure better employment, they should be relocated to big cities.
While there is clearly some logic to that recommendation, I want to challenge it on a few grounds and then offer an alternative possibility, ergo the title of this piece. Regarding big cities, economic variables other than jobs are sometimes brought up to suggest that the cost-benefit calculation must be done carefully. The two that come to mind are the cost of housing and the time spent commuting to and from work.
But I would like to focus more on ecological/environmental factors that are rarely considered in this context. We need to start questioning the wisdom of having a massive number of people living in chronic drought areas. The California fires are the most recent and highly graphic example. Does it really make sense to send more people to California unless the effects of the sustained drought can be countered, whether through massive desalination efforts or an incredible run of good luck that brings a lot of moisture back to the area? This one, it seems to me, is right in front of us. Moving large populations to dry areas seems like a formula for disaster. We've been doing it for some time already. When are we going to wake up?
The other factor, perhaps less visually evident but I gather quite a threat as well, is moving lots of people toward coastal areas where rising sea levels pose a long-term threat. This too might be countered by building massive sea walls, but absent that, is it wise for people to relocate to coastal areas? If we are going to have some policy for migrating people, shouldn't such factors be taken into account as well as the availability of jobs?
Let me say a bit more about the Champaign-Urbana (CU) area before getting to my suggestion. The University itself is quite large and has been growing the number of students, particularly international students, who pay substantially more than in-state students and yet are willing to come here because the school has a strong reputation. Consider data about the Champaign-Urbana MSA coupled with a map of the three counties in the MSA: Champaign, Ford to the north, and Piatt to the west. What the reader should get from this is a scattering of small towns around CU that, on the one hand, project a rural atmosphere and, on the other hand, are not too far from CU. I can report that there are plenty of pickup trucks in the parking lots when I go grocery shopping or visit my healthcare provider.
So, if you are thinking about psychological adjustment for somebody who must move to take advantage of economic opportunity, one might guess that it would be far easier for somebody with rural roots to move to the CU area than to move to Chicagoland. And in saying this I'm thinking that CU is emblematic of other college towns with big universities all around the country. Now, one might want to narrow it a bit. The U of I is the land grant college in the state of Illinois. The university has an Ag school (it goes by another name, but it is still an Ag school). And there is an Illinois Extension Service, which provides offerings for the entire state and beyond. In other words, there are already structures in place that could be utilized for the re-education and workforce placement of rural people who migrate to the area. I suspect the same is true for every college town that hosts the land grant college for its state.
Now just a bit on what work these people would do after they migrated here, since the jobs don't yet exist and would need to be created. Perhaps some of this would be in "next generation agriculture" that aims at addressing global warming. I am way out of date on this. The last piece I can recall reading on the subject is from more than a decade ago. So I should not be the one who says which projects, if any, in this area are worth pursuing. I did want to bring it up here in the following way. The jobs we're thinking about are going to be government subsidized, not jobs that emerge simply from market forces. It makes sense to do some experimenting along these lines, so I would expect something in this area.
But there are also other already available technologies that could be much more intensively utilized. (And now my chance to quote Bob Dylan - The answer is blowin' in the wind.) If you drive along the interstate, once in a while you see a windmill farm. What determines how many windmills there are, how big an investment has been made? I know that near residential developments they are viewed as an eyesore, but the maps I showed have plenty of area left over. Would converting a cornfield to a windmill farm make sense? At the right prices, yes. A lot more effort would need to be made to make the case for this seriously. All I can say convincingly now is that it is really flat here and the wind blows hard quite frequently. If a massive investment in windmill farms makes sense, then installing and maintaining the windmills is the type of work I have in mind.
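To make "at the right prices" slightly more concrete, here is a minimal back-of-envelope sketch. Every number in it - turbine size, capacity factor, power price, costs, the farmland rent given up - is an assumed placeholder for illustration, not data about this area; the point is only to show which quantities the comparison turns on.

```python
# Back-of-envelope comparison: a wind turbine on a cornfield vs. keeping the corn.
# All numbers below are illustrative assumptions, not measured data.

HOURS_PER_YEAR = 8760

def annual_turbine_net_revenue(capacity_mw, capacity_factor, price_per_mwh,
                               annual_cost_per_mw):
    """Net revenue per year for one turbine, under the assumed inputs."""
    energy_mwh = capacity_mw * capacity_factor * HOURS_PER_YEAR
    revenue = energy_mwh * price_per_mwh
    cost = annual_cost_per_mw * capacity_mw  # amortized capital plus O&M, assumed
    return revenue - cost

def annual_cropland_rent_forgone(acres_taken, rent_per_acre):
    """Opportunity cost: farmland rent forgone on the acres the turbine occupies."""
    return acres_taken * rent_per_acre

# Hypothetical inputs -- swap in whatever local numbers actually apply.
turbine = annual_turbine_net_revenue(capacity_mw=2.5, capacity_factor=0.35,
                                     price_per_mwh=40.0, annual_cost_per_mw=90_000.0)
rent = annual_cropland_rent_forgone(acres_taken=1.5, rent_per_acre=250.0)
print(f"Assumed net turbine revenue per year: ${turbine:,.0f}")
print(f"Assumed cropland rent forgone per year: ${rent:,.0f}")
print("Worth converting under these assumptions?", turbine > rent)
```

The sketch leaves out siting, transmission, and subsidies entirely; it just shows that the answer hinges on the price of power relative to the cost of the machines and the value of the cropland given up.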
Similarly, some time ago I wrote a post called Hard Hats That Are Green. There the focus was on solar panels, placed on the roofs of existing structures, with the government picking up the tab. This would generate quite a few jobs, again in installation and maintenance.
And while we're talking about it, there are huge needs regarding old style infrastructure - roads in bad repair, government buildings with a lot of deferred maintenance, ditto for water systems and sewers. If there's funding for this sort of thing, it would generate employment.
I want to make one more point and then close. For something like this to happen, it has to be done on the Federal level - a 21st century Morrill Act. At the state level, the big investments will go to the urban areas, where the bulk of the population is and where the big bucks are to support political campaigns. There simply isn't enough of a constituency within the state to get this done at that level. And at the Federal level, it clearly won't get done as long as the Republicans control either the White House, the Senate, or the House of Representatives. But if there were the political will to do it, who then would advocate for this sort of latter day Morrill Act?
Here I'm not trying to sell people on the idea, just to get them to consider it. I think life in CU is pretty good. Maybe others would think so as well.
Thursday, December 13, 2018
The AI Guys Need to Work on Automating Politicians
This will be brief. I'm reacting to Thomas Edsall's column today, which is more on how robots have big positive effects on GDP but big negative effects on the workers they displace. We have yet to figure out a mechanism by which everyone can share in the benefits of automation. One thought is that we aren't really trying hard to solve this problem, in part because we have a bizarre and distorted notion of meritocracy, under which those who are being harmed by automation don't deserve any better. What a myopic and ultimately foolhardy view of what is going on.
Some years ago I wrote a post called The Economy as One Big Brain, where I deliberately tried to put the shoe on the other foot and talked about some fiction which I called The Virtual CEO. If we could automate the position at the top, might we start to focus on making more work for people down the line, since the virtual CEO would not require astronomical compensation to perform at a high level? It is worth pondering along these lines.
But a different thought should be entertained as well. There is some presumption that AI works great with repetitive tasks, but is in over its head when the decisions require executive function and are therefore not nearly so routinized. The question is this - do many people in executive positions nonetheless spend the vast majority of their time making routinized choices?
I don't know how to answer that question for CEOs. But it seems to me the answer is yes for many politicians. If so, we should start seeing The Virtual Congressman, if not The Virtual President. I, for one, would be very interested in an AI analysis of members of Congress, to see whether the work could be automated or not. My guess is that the answer correlates with which party the member of Congress belongs to, but I'd be interested in learning otherwise.
Saturday, November 10, 2018
What If We Banned Marketing?
This is going to be a quick post as the idea is much more pipe dream than analysis. I first want to get out a list of examples of pernicious marketing, the ones that seem to be facilitated by technology. In no order of importance my list is:
1. Robocalls
2. Social media, where the user is not the customer. The user is the product. Advertisers are the customers.
3. Fake News
4. Email where vendors automatically subscribe you and then offer an unsubscribe link.
5. Free Web sites that are ad-supported, with ads that fit your profile from Google use.
Perhaps the list can be made longer. It's enough for now. The question to ask is whether you indirectly benefit more, by using services that you don't pay for, than it costs you to be exposed to all this virtual snake oil selling, or whether you would be better off without any of this sort of marketing at all and paid for the services you want. I don't know the answer to this question, especially as it applies to the population as a whole, but I'm beginning to suspect that I, at least, would be better off if the marketing disappeared entirely and I had to fork over more bucks for my online use. (Of course, this depends on how much I would have to pay.)
Now a bit about the economics to sharpen what I have in mind and also to argue more about why the marketing is so pernicious.
Content on the Web is essentially a public good. By this I mean that the marginal cost of letting somebody else access the content is essentially zero. Charging a small amount per access is not the right way to price such a public good, as it will drive use well below the efficient level. At present there are subscription models, such as the digital New York Times, that still come with some ads, allow a limited amount of free use for non-subscribers, and co-exist with the fully ad-supported model. The Times, of course, has a long history that pre-dates the Internet. New entrants among the providers are apt to rely simply on the ads.
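To illustrate that pricing point, here is a small sketch with a purely hypothetical linear demand curve for accesses and zero marginal cost. It compares total surplus when access is free with total surplus when a per-access price is charged; the gap is the deadweight loss from pricing a zero-marginal-cost good.

```python
# Illustrative only: a linear demand for accesses, q = a - b*p, with zero marginal cost.
# With MC = 0, the efficient outcome is p = 0 and q = a; any positive per-access
# price cuts use below that and destroys surplus.

def quantity(price, a=1000.0, b=200.0):
    """Accesses demanded at a given per-access price (assumed linear demand)."""
    return max(a - b * price, 0.0)

def total_surplus(price, a=1000.0, b=200.0):
    """Consumer surplus + revenue when marginal cost is zero.
    Consumer surplus is the triangle under the demand curve above the price;
    revenue is price * quantity. With MC = 0 there is no production cost."""
    q = quantity(price, a, b)
    choke_price = a / b                      # price at which demand falls to zero
    consumer_surplus = 0.5 * (choke_price - price) * q
    revenue = price * q
    return consumer_surplus + revenue

free = total_surplus(0.0)        # efficient benchmark: free access
priced = total_surplus(2.0)      # a $2-per-access charge, say
print(f"Surplus with free access:     {free:,.0f}")
print(f"Surplus at $2 per access:     {priced:,.0f}")
print(f"Deadweight loss from pricing: {free - priced:,.0f}")
```

The numbers are arbitrary. What the sketch shows is that with zero marginal cost, any positive per-access price leaves some willing readers unserved, which is what the subscription and consortium arrangements discussed here are trying to work around.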
The thing is, the ad-supported model affects how the news is presented, because of the incentives it gives the provider. The more eyeballs that see the ad, the more the provider can charge the advertiser for running it. So the provider looks for ways to generate a large audience. Producing shrill content then wins out over producing more level content. Users are attracted to the sensational, and repeat use is habit forming. This is likely unhealthy for the user but is surely profitable for the provider.
One might hope to engineer human beings so they have a preference for more level content. In the absence of that, however, one needs a different solution.
So the alternative I have in mind is one where there are several buying consortia, and each consortium, in turn, contracts with multiple providers for unlimited, ad-free access, made available to all consortium members. One type of consortium might be a state or local government, which would use residents' taxes in lieu of consortium fees. Consortium members would need to have some affinity for one another, so the providers that the consortium contracted with would have broad appeal with the membership. Geography might be one source of affinity. If that proved correct, then a government as a consortium makes sense. If, in contrast, affinity were driven by having similar interests, then private consortia would be better. I made a post a while back about this, arguing that the real function the consortia would deliver is protecting members with regard to their use data and policing providers so they don't misappropriate such data.
Now let me switch to phone calls and email. Individual calls or messages are not public goods. With phone calls, the real issue is that the caller doesn't pay if the call isn't picked up on the other end. And with email, the sender doesn't pay whether the receiver reads the email or not. This gives senders an incentive to market through these forms of communication, since the marginal cost to them is zero. Conceptually, at least, each receiver could have a safe sender list. People on the safe sender list wouldn't pay to initiate a call or send a message to the receiver. Everyone else would pay a small but significant amount each time they send. That fee would limit the marketing.
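Here is a minimal sketch of that mechanism, with everything in it - the fee amount, the addresses, the idea that billing could be hooked into phone and email delivery this way - assumed purely for illustration. Contacts from senders on the receiver's safe list go through free; everyone else is charged the per-message fee before delivery, or blocked if they can't pay.

```python
# Sketch of the per-message fee idea. All names and the fee amount are
# hypothetical illustrations; this is not a real telecom or email API.

from dataclasses import dataclass, field

FEE_PER_MESSAGE = 0.10  # assumed fee, in dollars, for senders not on the safe list

@dataclass
class Receiver:
    address: str
    safe_senders: set = field(default_factory=set)
    inbox: list = field(default_factory=list)

@dataclass
class Sender:
    address: str
    balance: float = 0.0  # prepaid balance that fees are drawn from

def deliver(sender: Sender, receiver: Receiver, message: str) -> bool:
    """Deliver a message, charging the fee unless the sender is on the safe list."""
    if sender.address in receiver.safe_senders:
        receiver.inbox.append((sender.address, message))
        return True
    if sender.balance >= FEE_PER_MESSAGE:
        sender.balance -= FEE_PER_MESSAGE      # marketing now has a marginal cost
        receiver.inbox.append((sender.address, message))
        return True
    return False                               # can't pay, message not delivered

# Usage: a friend gets through free; a bulk marketer pays per message until broke.
me = Receiver("me@example.com", safe_senders={"friend@example.com"})
friend = Sender("friend@example.com")
marketer = Sender("spam@vendor.example", balance=0.25)

deliver(friend, me, "dinner on Friday?")       # free, delivered
deliver(marketer, me, "50% off!")              # pays $0.10, delivered
deliver(marketer, me, "act now!")              # pays $0.10, delivered
print(deliver(marketer, me, "last chance!"))   # balance too low -> False
print(len(me.inbox))                           # 3 messages got through
```

The filtering logic is the easy part; as the next paragraph notes, the hard part is the billing plumbing. But the sketch shows why a fee on non-safe senders removes the zero-marginal-cost incentive to spam.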
I don't know what we'd have to do to set up billing like this for phone calls and email, but what seems clear is that the market won't do this on its own.
In effect, Facebook friends are safe senders, though Facebook has the issue that a friend of a friend might not be your friend, so a comment thread may not be completely safe. What I'm getting at is that we might not really be so enamored with social networking per se; rather, we're enamored with interacting only with safe senders. I don't know. I'd love a way to test this out.
Let me close here by noting that I've focused on marketing for profit and have ignored marketing to achieve political ends. That was to make the economics easier to consider. I don't know if my economic analysis has much to say about the political use, other than to note that if phone and email got rid of the spam through some pricing arrangement for senders, then people who have been turned off by these means of communication might rely on them more. In turn, that would offer alternatives to Facebook, which would weaken its monopoly power.
Friday, November 02, 2018
America in Its Addled Essence
Teenagers go through a troubled phase. Part of that, of course, is the hormones raging through their systems, creating new feelings that can't easily be addressed. Most people think of adolescence that way. But there is something else going on as well. The teenage years give a preview of what being an adult means. It is discovered that the safety of childhood was apparently based on certain myths. Continuing to maintain those myths starts to look like worship of false idols. When this realization occurs there is apt to be disillusionment, perhaps also a lot of anger. The teenager then has the task of figuring out how to deal with the situation.
Some might repress the realization, apparently willing to play by the rules, perhaps for the rest of their lives, or at least until they live away from their parents and can exercise more direct control. For those who go this route, the anger builds underneath. The teenager is seething but is also frightened about any overt display of those feelings. Inwardly, such kids are very unhappy. William Deresiewicz writes about the very good students who fall into this category in his book Excellent Sheep. Others might not repress it. They get overtly moody and rebellious. Defying authority becomes an act of self-expression and a way to reclaim oneself. This route requires identifying areas where the authority clearly made poor judgments and then lied about them, and so is deserving of blame. For kids of my generation, the Vietnam War served this role. But there were far more personal areas where authority fell short as well. Kids challenged their parents, the long hair most of us wore being one overt reminder of this.
Still others might get depressed and then do a variety of seemingly odd things as a consequence. In my case, I learned to mumble, particularly at dinner time. I had a need to express my point of view (as I still have). But I was pretty sure that making myself heard would create one more episode of conflict. I must have intuited this rather than reasoned it through, as I seem to be doing now. Mumbling, particularly at home, became part of how I coped. I'm guessing that other kids found their own idiosyncratic ways to deal with it.
School was one place that propagated myth, both in the teaching of social studies/history and in the rituals we went through. It may have happened elsewhere as well, but I will confine my examples to those two. Before getting to them, I want to observe something about my independent reading, especially in elementary school (probably starting in 3rd grade, though my memory is not good enough to be sure of that). I would go on a jag and read many books on the same theme. Then I would move on to something else and go on another jag.
Pretty early on, that was mythology itself. First, I got into the Norse myths and became enamored with those stories. Then I did a repeat with the Greek myths and maybe the Roman myths too (which seemed mainly the same as the Greek myths, though the gods had different names). After that, I moved on to biographies of figures well known in American history. What I'm asking myself now is whether that sequence is normal in a child's development and, if so, whether for that reason or something quite different it made sense to depict real historical figures in a somewhat mythical light, just as a matter of holding the child reader's interest. In any event, I think it fair to say that each of these biographies gave a romanticized telling of the person's life, as books for children are apt to do. Several of the biographies I read at the time were authored by Clara Ingram Judson.
Sometime later, probably in junior high school, I read The American, by Howard Fast. It is considered historical fiction. (Incidentally, we have an Altgeld Hall on campus, named after John Peter Altgeld, the main character of the story.) I don't know how they draw the line between works of non-fiction and novels about real people. That is not the issue for me. Some myths are delightful and benign - George Washington chopping down the cherry tree, for example. Indeed, I think we come to learn about many important historical figures through such fiction. For example, I came to know something about Vincent Van Gogh from reading Lust for Life, by Irving Stone. However, other myths develop to hide painful truths. It is the latter that is my interest here.
Debunking is what happens when the more painful truth comes to light. If school is the source of the myth, then the debunking must happen elsewhere. It's also possible, perhaps even more likely, for a competing narrative to emerge later where neither has evident claim to be called the truth. The stories then coexist until much later, when subsequent events shed light on which of the competing stories is more likely true.
The first of these myths is about New York City directly, and about America as a whole indirectly, mostly by the implication that New York City was representative of the entire country. I attended elementary school in Queens, P.S. 203. We spent one grade, I believe, on the history of New York City. In that we were taught that New York City was a "melting pot," and the melting pot story was indeed omnipresent. Immigrants came into New York City, spoke the language of the mother country, and lived in communities with other immigrants from the same country. The children were Americanized, primarily by school, but also by listening to the radio, going to the movies, and other acculturating activities. Once the kids were Americanized, their parents' background faded in importance.
Like most myths, the story is partially true. I should note here that when I was a kid the story was applied mainly to people of European extraction. The heterogeneity of these people was very real when I was a kid (and maybe it still is). It makes me question the practice used today of applying the label White, which implies within-group homogeneity.
My belief is that public school is actually pretty good at being a melting pot, at least that was true for P.S. 203 when I attended. The problem is that not every kid went to public school. Indeed, I lived two blocks from St. Robert Bellarmine Roman Catholic Church, which we all called St. Roberts. It had a school the Catholic kids went to as an alternative to going to public school, I believe for grades K-8. I had a variety of experiences with some of those kids that I would describe now as mild antisemitism. I should point out that diagonally across the street from us on 212th Street was an Italian family that we got friendly with and stayed friends with even after they moved to Manhasset. The daughters all went to St. Roberts but never showed anything but friendliness to us.
My interpretation of this is that for some people the larger society was itself sufficient to be a melting pot, and whatever prior prejudices there were did indeed fade away. But for others, those prior prejudices were much stronger, and in the absence of greater efforts at melting them away, by attending the same school for example, the prejudices would persist, even if in certain circumstances they remained unarticulated. I should also note that JFK became President just as I started elementary school. I was too young to read the newspaper then, but I became aware that he was the first Catholic President and that his religion created some tension between him and Protestant voters. Whether similar animus between Protestants and Catholics exists today, I can't say based on direct experience. What I can say is that to the extent such feelings are still present, it gives the lie to America as the melting pot.
The melting pot is a story I would like to be true. In my own interactions, it is a story I try to live by. And for some part of the population, I believe the story works for them as well. But it clearly doesn't work for other parts of the population. Let me briefly cover what I learned about this while in college.
I took a course on American Politics and we read pieces by many authors, including Nathan Glazer, though not the full book Beyond The Melting Pot. It was evident then that even the imperfect melting pot didn't work in New York City when it came to Puerto Ricans and Blacks. As difficult as religious differences are to overcome, racial differences are harder, perhaps too hard to expect the melting pot approach to work. The same year that I started elementary school, the movie version of West Side Story came out. The music is fantastic and the Romeo-and-Juliet-like romantic story is entertaining. But underlying the story is that rival gangs, one white ethnic, the other Puerto Rican, fought for turf, and in that way the old divisions were sustained rather than overcome. The same message appears in a much later movie, Gangs of New York.
And when I was a kid Harlem was considered a ghetto for Black people. But Harlem was in Manhattan, which seemed like a different world when I was at P.S. 203. I can't recall whether P.S. 203 had any Black students or not. Junior high school was otherwise. The Civil Rights Act of 1964 either mandated busing for integration or New York City interpreted it that way. I'm not sure which. I started junior high in 1966, so the school was integrated then. But the integration was partial, at best. There was tracking and I was in SP (special progress) classes. Those classes must have inadvertently created a sense of meritocracy among the students. (Perceived meritocracy is a different counterforce to the melting pot story. It encourages elitism if not outright snobbery.) I don't believe there were any Black kids in SP when I went to junior high. We took some non-academic classes then, shop and band. Those may have been integrated, though I can't remember. The academic SP classes were not. In such a setting the melting pot really didn't have a chance to work. The regular academic classes (non-SP) were integrated. Did they serve as a melting pot? I don't know, but I doubt it.
The integration by busing experiment was undone by a variety of factors, the biggest being White flight to the suburbs. But the school within a school phenomenon, via tracking, which for me persisted into high school, is another reason it was undone. Really, it never started. Gym was integrated in high school and, as I've written elsewhere, I found gym terrifying. There were tough kids in gym, White and Black. There weren't tough kids in the honors classes. A melting pot of tough kids and middle class kids from homes where academic study was encouraged would indeed be very interesting, but for me that was a pipe dream, not a reality.
I've belabored this discussion about America as a melting pot because it is still quite relevant now. It is a story where aspiration and reality don't coincide. Yet our rhetoric seems to choose only one of those and then deny the other, rather than acknowledge both.
The other examples are for me non-experiential, so I will go through them only briefly. Each of them presents our ancestors as always seemingly on the right side of history, so the bad guys were always alien. We never had to confront in our history that we ourselves were the bad guys, except in very isolated cases; Benedict Arnold comes to mind here.
The first of these that I remember is about the Crusades. In the history we learned in public school these were glorious quests. But I also attended Yiddish school on Saturdays. The instruction there was broken into three parts - learning Yiddish as a language, learning Jewish folk songs, which we sang ensemble, and learning Jewish history. I must have been 11 then. I remember a chapter from the Jewish history book called The Horrible Crusades. It clearly contradicted what we were taught in public school. It meant to do just that, as a way to get our attention. It was no big deal for me at the time, but it did serve as a kind of canary in the coal mine for other such stories that competed with what we were taught in school.
The next one is about American "Indians." Just about everyone I knew as a kid played Cowboys and Indians. The Cowboys were the good guys while the Indians were the bad guys. Of course, that was make believe and TV shows. In school we were taught that Peter Minuit "purchased" Manhattan Island for the equivalent of $24. And we were taught that the Indians were present at the first Thanksgiving, which was a peaceful affair. But later there was trouble, lots of it. We were taught about Custer's Last Stand and how his troops fought bravely though terribly outnumbered. Then, either in 10th or 11th grade, I saw Little Big Man with some friends. It cast the Indians in an entirely different light and Custer in a different light as well. The movies were a big debunking device around that time. MASH came out the same year. Irreverence had its day during this time, part of the mood against the Vietnam War. It was hip to be irreverent. School did not prepare you for that.
The last one I will mention I believe came from American history in junior high, though we may have talked about it in high school too. We were taught manifest destiny as the truth about 19th century America. The way west was inevitable. America would expand from the Atlantic Ocean to the Pacific Ocean. All the land in between was rightfully American, even if that was far from true at the start of the century. The doctrine allowed Americans to view the way west without contradicting Washington's advice in his farewell address - avoid foreign entanglements. The reality at the time, which we were not taught, is that the doctrine was far from universally held. Teaching it as if it were the truth led students not to consider America as an imperial nation at all in the 19th century, the Mexican-American War and the Spanish-American War notwithstanding. Indeed, by not taking a more critical approach to the U.S. in school (here by critical I mean multiple perspectives, I don't mean criticizing), students were entirely unprepared (at least by school) for the protests against the Vietnam War. It was as if the Vietnam War put us in a separate parallel universe that we had never entered before.
Let me turn to the rituals we had in elementary school, for reasons that we should speculate on as we consider them. Two of these should suffice. The first one is kind of odd: shelter drills. They also go under the name duck and cover. Shelter drills, like fire drills (which did make sense to do), were done repeatedly so everyone would know how to proceed when necessary. Fire drills were for when the building caught fire and the fire alarm went off. This is a low probability event, but still a realistic possibility. Getting everyone out of the building in an orderly manner, without panic, is the right thing to do in that circumstance. Shelter drills were an entirely different matter. You were taught to crawl under your desk and hide. This was to happen in the event of a nuclear bomb going off in the vicinity. That is a preposterous solution to a totally devastating situation. So we might ask: why go through this rigamarole, since it made no sense at all for its intended purpose?
To give some context, consider the following. The Cuban Missile Crisis was in October 1962. I believe it terrified every American adult, for it made the possibility of nuclear war seem real. And in the world of fiction, there was a cottage industry about the possibility. Nevil Shute's On the Beach has a 1957 copyright. Fail-Safe has a 1962 copyright. And the satirical film Dr. Strangelove came out in 1964. Such a plentiful offering of entertainment in this area could only happen if many people were worried about nuclear war. This was an adult worry. My conjecture about shelter drills is that they offered a way to share that worry with kids, not to protect them if they were too close to the blast site, but so there was a story that might be told to them that they'd understand, in the event they survived a nuclear detonation when many others did not. This seems the most plausible reason to me for the shelter drills. So this was part myth and part misdirection. Maybe it actually was a good use of myth; of that I'm not sure. Would it have been better for the kids not to know the worry at all?
The other ritual is the flag ceremony. Every day in class we said the Pledge of Allegiance while standing up, with our right hands held over our hearts. After that, still standing, we sang My Country, 'Tis of Thee. Let me offer a bit of an aside before I continue. I'm not very big on ceremonial stuff. For example, I didn't attend the graduation ceremony for high school, college, or my PhD. Nevertheless, I can see some point in a repeated ceremony about the flag to instill in kids some patriotism. And while some people have objected to the pledge because of the line "under God," I actually like the line "and to the Republic for which it stands." The flag is a symbol. Our true allegiance is to the Republic. What that means, however, was never explained in elementary school. I will get back to that point in a bit.
I went to sleepaway camp for 6 years, and each stay was quite long, a full 8 weeks. At camp we also honored the flag, but in a different way. The camp relied on bugle calls played from a vinyl recording over the loudspeaker in the HQ building. For flag raising, we heard To The Colors. (I can't recall whether we stood at attention or at ease.) For flag lowering, we heard Retreat. As I noted in the post I wrote about the bugle calls, I found them somewhat comforting. Even now, I like to hear them. But if there is some larger message they should be connected to, it eluded me then and it eludes me now.
In neither case did we hear or perform the national anthem. Indeed, when I was 11 and in Bunk 13, one of my counselors told us that they should really change the national anthem to America The Beautiful, simply because it was a better song to honor America.
Now let me turn to the performing of the Star Spangled Banner, which happened at big time sports events. I have no sense of why this was the case, but the practice existed before I went to elementary school. (Google provides a ready answer.) I find it odd now to use sporting events as a way to connect those in attendance with their patriotism. Indeed, in 1970 the Knicks won the NBA Championship, and I recall going to games that season and/or watching the games on TV. The fans would never finish the singing of the song. Instead of "and the home of the brave," everyone in attendance would burst into a very loud cheer. In other settings, you might take that as being disrespectful of the anthem. What it really showed, however, was that the fans were pumped up and getting ready for the game to start. The fans weren't trying to show any disrespect. It's just that their full attention was on the basketball game.
It is now worth asking whether the grade school instruction about honoring the flag really taught us something fundamental about patriotism, or whether it was mere window dressing, done because some muckety-mucks thought it should be done. To the extent that it conveyed that honoring the flag was sufficient, and that one did not need to show allegiance to the Republic in other ways, I think that a myth. It is especially troublesome to me now, seeing the controversy about kneeling during the performing of the anthem, and with sporting events so often linking the performance of the anthem to paying respect to veterans, that these things get a lot of attention while the fact that there are homeless veterans, many of them, gets far less attention. Something is wrong with that picture.
* * * * *
When I started to write this piece I had in mind the punchline - I had a tough adolescence, but I got through it; America can do the same. Now, having written this, I want to end in a different way, so I'd like to ask two things.
As adults, I don't believe we entirely abandon myth. Instead, I think we may replace our childhood myths with others that we're not willing to let go of. Can we have a discussion about the myths we hold, whether Democrat or Republican? Would getting those myths out there be helpful, if both sides could agree that one side holds particular myths?
The other thing is about the times in which we live. Part of my reason for going through the exercise of revisiting the politics of my teen years is to note that we never lived in a world where we were always told the unvarnished truth by our teachers and our political leaders. Does that make what is happening now more of the same, if at an accelerated pace compared to the 1960s? Or is it fundamentally different now? I can't answer that other than by observing how it feels to me. Even during Vietnam and Watergate, I didn't feel as if we had gone over a cliff, unable to return. I thought things were very bad, but we might still right the ship. Now I'm much less sure of that.
This is how I prepare for hearing about the election results on Tuesday. I wonder what others do in preparation.
Some might repress the realization, apparently willing to play by the rules, perhaps for the rest of their lives, or at least until they live away from their parents and can exercise more direct control. For those who go this route, the anger builds underneath. The teenager is seething but is also frightened about overt display of those feelings. Inwardly, such kids are very unhappy. William Deresiewicz writes about the very good students who fall into this category in his book Excellent Sheep. Others might not repress it. They get overtly moody and rebellious. Defying authority becomes an act of self-expression and a way to reclaim oneself. This route requires identifying areas where the authority clearly made poor judgment and then lied about it, so is deserving of blame. For kids of my generation, the Vietnam War served this role. But there were far more personal areas where authority fell short as well. Kids challenged their parents, the long hair most of us wore one overt reminder of this.
Still others might get depressed and then do a variety of seemingly odd things as a consequence. In my case, I learned to mumble, particularly at dinner time. I had a need to express my point of view (as I still have). But I was pretty sure that making myself heard would create one more episode of conflict. I must have intuited this rather than reason it through as I seem to be doing now. Mumbling, particularly at home, became a part of how I was able to cope. I'm guessing that other kids found their own idiosyncratic ways to deal with it.
School was one place that propagated myth, both in the teaching of social studies/history and in the rituals we went through. It may have been elsewhere as well, but I will contain my examples to those two. Before getting to those, I want to observe something about my independent reading, especially in elementary school (probably starting in 3rd grade, but my memory is not good enough to be sure of that). I would go on a jag and read many books in the same theme. Then I would move onto something else and go on another jag.
Pretty early on, that was mythology itself. First, I got into the Norse myths and became enamored with those stories. Then I did a repeat with the Greek myths and maybe the Roman myths too (which seemed mainly the same as the Greek myths, though the gods had different names.) After that, I moved onto biography of figures well known in American history. What I'm asking myself now is whether that sequence is normal in a child's development and, if so, whether for that reason or something quite different it made sense to depict real historical figures in a somewhat mythical light, just as a matter of holding the child reader's interest. In any event, I think it fair to say that each of these biographies gave a romanticized telling of the person's life, as books for children are apt to do. Several of the biographies I read at the time were authored by Clara Ingram Judson.
Sometime later, probably in junior high school, I read The American, by Howard Fast. It is considered historical fiction. (Incidentally, we have an Altgeld Hall on campus, named after John Peter Altgeld, the main character of the story.) I don't know how they draw the line between works of non-fiction and novels about real people. That is not the issue for me. Some myths are delightful and benign - George Washington chopping down the cherry tree, for example. Indeed, I think we come to learn about many important historical figures through such fiction. For example, I came to know something about Vincent Van Gogh from reading Lust for Life, by Irving Stone. However, other myths develop to hide painful truths. It is the latter that is my interest here.
Debunking is what happens when the more painful truth comes to light. If school is the source of the myth, then the debunking must happen elsewhere. It's also possible, perhaps even more likely, for a competing narrative to emerge later where neither has evident claim to be called the truth. The stories then coexist until much later, when subsequent events shed light on which of the competing stories is more likely true.
The first one of these myths is about New York City directly, then about about America as a whole, indirectly, mostly by the implication that New York City was representative of the entire country. I attended elementary school in Queens, P.S. 203. We spent one grade, I believe, on the history of New York City. In that we were taught that New York City was a "melting pot" and the melting pot story was indeed omnipresent. Immigrants came into New York City, spoke the language of the mother country, and lived in communities with the other immigrants from the same country. The children were Americanized, by school primarily, also by listening to the radio, going to the movies, and other acculturating activities. Once Americanized, the background of the kid's parents faded in importance.
Like most myths, the story is partially true. I should note here that when I was a kid the story was applied mainly to people of European extraction. The heterogeneity of these people was very real when I was a kid (and maybe it still is). It makes me question the practice used today in the label White, which implies within group homogeneity.
My belief is that public school is actually pretty good at being a melting pot, at least that was true for P.S. 203 when I attended. The problem is that not every kid went to public school. Indeed, I lived two blocks from St. Robert Bellarmine Roman Catholic Church, which we all called St. Roberts. It had a school the Catholic kids went to as an alternative to going to public school, I believe for grades K-8. I had a variety of experiences with some of those kids that I would describe now as mild antisemitism. I should point out that diagonally across the street from us on 212th Street was an Italian family that we got friendly with and stayed friends with even after they moved to Manhasset. The daughters all went to St. Roberts but never showed anything but friendliness to us.
My interpretation of this is that for some the large society was itself sufficient to be a melting pot and whatever prior prejudices there were did indeed fade away. But for others, those prior prejudices were much stronger and in the absence of greater efforts at melting away those prejudices, by attending the same school for example, the prejudices would persist, even if in certain circumstances they remained unarticulated. I should also note that JFK became President just as I started elementary school. I was too young to read the newspaper then, but I became aware that he was the first Catholic President and that his religion created some tension with him and Protestant voters. Whether similar animus between Protestants and Catholics exists today, I can't say based on direct experience. What I can say is that to the extent that such feelings are still present it gives the lie to America as the melting pot.
The melting pot is a story I would like to be true. In my own interactions, it is a story I try to live by. And for some part of the population, I believe the story works for them as well. But it clearly doesn't work for other parts of the population. Let me briefly cover what I learned about this while in college.
I took a course on American Politics and we read pieces by many authors, including Nathan Glazer, though not the full book Beyond The Melting Pot. It was evident then that even the imperfect melting pot didn't work in New York City when it came to Puerto Ricans and Blacks. As difficult as religious differences are to overcome, racial differences are harder and perhaps too hard to expect the melting pot approach to work. That same year when I started elementary school, the movie version of West Side Story came out. The music is fantastic and the romantic story that is Romeo and Juliet like is entertaining. But underlying the story is that rival gangs, one white ethnic, the other Puerto Rican, fought for turf and in that way the old divisions were sustained rather than overcome. This same message was given in a much later movie, Gangs of New York.
And when I was a kid Harlem was considered a ghetto for Black people. But Harlem was in Manhattan, which seemed like a different world when I was in P.S. 203. I can't recall whether P.S. 203 had any Black students or not. Junior high school was otherwise. The Civil Rights act of 1965 either mandated busing for integration or New York City interpreted it that way. I'm not sure which. I started junior high in 1966, so the school was integrated then. But the integration was partial, at best. There was tracking and I was in SP (special progress) classes. Those classes must have inadvertently created a sense of meritocracy among the students. (Perceived meritocracy is a different counterforce to the melting pot story. It encourages elitism if not outright snobbery.) I don't believe there were any Black kids in SP when I went to junior high. We took some non-academic classes then, shop and band. Those may have been integrated, though I can't remember. The academic classes were not. In such a setting the melting pot really didn't have a chance to work. The regular academic classes (non-SP) were integrated. Did they serve as a melting pot? I don't know, but I doubt it.
The integration by busing experiment was undone by a variety of factors, the biggest being White flight to the suburbs. But the school within a school phenomenon, via tracking, which for me persisted into high school, is another reason it was undone. Really, it never started. Gym was integrated in high school and, as I've written elsewhere, I found gym terrifying. There were tough kids in gym, White and Black. There weren't tough kids in the honors classes. A melting pot of tough kids and middle class kids from homes where academic study was encouraged would indeed be very interesting, but for me that was a pipe dream, not a reality.
I've belabored this discussion about America as a melting pot because it is still quite relevant now. It is a story where aspiration and reality don't coincide. Yet our rhetoric seems to choose only one of those and then deny the other, rather than acknowledge both.
The other examples are for me non-experiential, so I will go through them only briefly. Each of them features that our ancestors were always seemingly on the right side of history, so the bad guys were always alien. We never had to confront in our history that we ourselves were the bad guys, except in very isolated cases; Benedict Arnold comes to mind here.
The first one of these that I remember is about the Crusades. In the history we learned in public school these were glorious quests. But I also attended Yiddish school on Saturdays. The instruction there was broken into three parts - learning Yiddish as a language, learning Jewish folk songs where we sang them ensemble, and learning Jewish history. I must have been 11 then. I remember a chapter from the Jewish history book called, The Horrible Crusades. This clearly contradicted what we were taught in public school. It meant to do just that, as a way to get our attention. It was no big deal for me at the time, but it did serve as kind of canary in the coal mine for other such stories that competed with what we were taught in school.
This next one is about American "Indians." Just about everyone I knew as a kid played Cowboys and Indians. The Cowboys were the good guys while the Indians were the bad guys. Of course, that was make believe and TV shows. In school we were taught that Peter Minuet "purchased" Manhattan Island for the equivalent of $24. And we were taught that the Indians were present at the first Thanksgiving, which was a peaceful affair. But later there was trouble, lots of it. We were taught about Custer's Last Stand and how his troops fought bravely though terribly outnumbered. Then, either in 10th or 11th grade I saw Little Big Man with some friends. It cast the Indians in an entirely different light and Custer in a different light as well. The movies were a big debunking device around that time. MASH came out the same year. Irreverence had its day during this time, part of the mood against the Vietnam War. It was hip to be irreverent. School did not prepare you for that.
The last one I will mention I believe was from American history in junior high, but we may have talked about it in high school too. We were taught manifest destiny, as the truth about 19th century America. The way west was inevitable. America would expand from the Atlantic Ocean to the Pacific Ocean. All the land in between was rightfully American, even if that was far from true at the start of the century. The doctrine allowed Americans to view the way west without contradicting Washington's advice in his farewell address - avoid foreign entanglements. The reality at the time, which we were not taught, is that the doctrine was far from universally held. Teaching it as if it was the truth made the students not consider America as an imperial nation at all in 19th century, the Mexican-American War and the Spanish-American War notwithstanding. Indeed, by not taking a more critical approach to the U.S. in school (here by critical I mean multiple perspectives, I don't mean criticizing) students were entirely unprepared (at least by school) for the protests against the Vietnam War. It was as if the Vietnam War put us in a separate parallel universe that we had never entered before.
Let me turn to the rituals we had in elementary school, for reasons that we should speculate on as we consider them. Two of these should suffice. The first one is kind of odd, shelter drills. They go under a different name now, duck and cover. Shelter drills, like fire drills which did make sense to do, were done repeatedly so everyone would know how to proceed when necessary. Fire drills were for when the building caught fire and the fire alarm went off. This is a low probability event, but still a realistic possibility. Getting everyone out of the building in an orderly manner, without panic, is the right thing to do in that circumstance. Shelter drills were an entirely different matter. You were taught to crawl under your desk and hide. This was to happen in the event of a nuclear bomb going off in the vicinity. This is a preposterous solution to a totally devastating situation. So we might consider: why go through this rigamarole, since it made no sense at all for its intended purpose?
To give some context consider the following. The Cuban Missile Crisis was in October 1962. I believe it terrified every American adult, for it made the possibility of nuclear war seem real. And in the world of fiction, there was a cottage industry about the possibility. Nevil Shute's On the Beach has a 1957 copyright. Fail-Safe has a 1962 copyright. And the satirical film, Dr. Strangelove, came out in 1964. Such a plentiful offering of entertainment in this area could only happen if many people were worried about nuclear war. This was an adult worry. My conjecture about shelter drills is that they offered a way to share that worry with kids, not to protect them if they were too close to the blast site, but so there was a story that might be told to them that they'd understand, in the event they survived a nuclear detonation when many others did not. This seems the most plausible reason to me for the shelter drills. So this was part myth and part misdirection. Maybe it actually was a good use of myth, of that I'm not sure. Would it have been better for the kids not to know the worry at all?
The other ritual is the flag ceremony. Every day in class we said the Pledge of Allegiance while standing up, with our right hands held over our hearts. After that, still standing, we sang My Country, 'Tis of Thee. Let me offer a bit of an aside before I continue. I'm not very big on ceremonial stuff. For example, I didn't attend the graduation ceremony for high school, college, or PhD. Nevertheless, I can see some point in a repeated ceremony about the flag to instill in kids some patriotism. And while some people have objected to the pledge because of the phrase "under God," I actually like the line "and to the Republic for which it stands." The flag is a symbol. Our true allegiance is to the Republic. What that means, however, was never explained in elementary school. I will get back to that point in a bit.
I went to sleep-away camp for 6 years, and each stay was quite long, a full 8 weeks. At sleep-away camp we also honored the flag, but in a different way. The camp relied on bugle calls played over the loudspeaker in the HQ building from a vinyl recording. For flag raising, we heard To The Colors. (I can't recall whether we stood at attention or at ease.) For flag lowering, we heard Retreat. As I noted in the post that I wrote about the bugle calls, I found them somewhat comforting. Even now, I like to hear them. But if there is some larger message they should be connected to, that eluded me then and it eludes me now.
In neither case did we hear or perform the national anthem. Indeed, when I was 11 and in Bunk 13, one of my counselors told us that they should really change the national anthem to America The Beautiful, simply because it was a better song to honor America.
Now let me turn to the performing of the Star Spangled Banner, which happened at big time sports events. I have no sense of why this was the case, but the practice existed before I went to elementary school. (Google provides a ready answer.) I find it odd now to use sporting events as a way to connect those in attendance with their patriotism. Indeed, in 1970, the Knicks won the NBA Championship, and I recall going to games that season and/or watching the games on TV. The fans would never finish the singing of the song. Instead of "and the home of the brave," everyone in attendance would burst into a very loud cheer. In other settings, you might take that as being disrespectful about the anthem. What it really showed, however, was that the fans were pumped up and getting ready for the game to start. The fans weren't trying to show any disrespect. It's just that their full attention was on the basketball game.
It is now worth asking whether the grade school instruction about honoring the flag really taught us something fundamental about patriotism, or if it really was mere window dressing, done because some muckety-mucks thought it should be done. To the extent that it conveyed that honoring the flag was sufficient and that one did not need to show allegiance to the Republic in other ways, I think that was a myth. It is especially troublesome to me now, seeing the controversy about kneeling during the performing of the anthem, and with sporting events so often linking the performance of the anthem to paying respect to veterans, that these things get a lot of attention, while the fact that there are homeless veterans, many of them, gets far less attention. Something is wrong with that picture.
* * * * *
When I started to write this piece I had in mind the punchline - I had a tough adolescence, but I got through it. America can do the same. Now, having written this, I want to end in a different way, so I'd like to ask two things.
As adults, I don't believe we entirely abandon myth. Instead, I think we may replace our childhood myths with others that we're not willing to let go of. Can we have a discussion about the myths we hold, whether Democrat or Republican? Would getting those myths out there be helpful, if both sides could agree that one side holds particular myths?
The other thing is about the times in which we live. Part of my reason for going through the politics of my teen years is to note that we never lived in a world where we were always told the unvarnished truth, by our teachers and by our political leaders. Does that make what is happening now more of the same, if at an accelerated pace as compared to the 1960s? Or is it fundamentally different now? I can't answer that other than by observing how it feels to me. Even during Vietnam and Watergate, I didn't feel as if we had gone over a cliff, unable to return. I thought things were very bad, but we might still right the ship. Now I'm much less sure of that.
This is how I prepare for hearing about the election results on Tuesday. I wonder what others do in preparation.
Friday, October 26, 2018
Some reasons why the return of a reasonable GOP will be so difficult
In some sense, this post is a response to a recent column by Nicholas Kristof, Desperately Seeking Principled Republicans. Kristof cites several prominent conservatives who have said, in so many words, that the Republicans have gone off the rails. It is time to vote for Democrats, just to restore some sanity. Kristof's piece makes it seem that the failure is primarily a matter of character in those Republicans who currently hold office. I certainly don't want to rule out the importance of character, but I think it necessary to consider the political environment as well. Thinking about the political environment gets you to consider changes in it, some of which are not that recent, that surely have had an impact on the behavior of our elected officials, and on the electorate as well. It also gets you to consider the long term impact of those changes (by looking back historically at them) and separate that from the more immediate intended effects. Doing this, at least for me, makes the current situation seem less likely a consequence of some grand conspiracy and more likely the result of insufficient prescience in making those past decisions, hence a gradual withering away of institutions that, while not perfect, were at one time reasonably functional.
I should also note here, for the reader who otherwise is unfamiliar with me, that I'm not a political scientist. I am a retired, but once well trained, economist. I don't believe the social science is all that different, regardless of the perspective. So, with that, I will offer up an annotated list of factors that seem important to me and that are not focused on the very recent past. There's been enough written about those more recent factors that I don't need to include them here.
The End of the Fairness Doctrine/The Old Oligopoly of Network News
The Wikipedia entry is interesting in that it explains the fairness doctrine as a rule imposed by the Federal Communications Commission (FCC) on broadcasters (radio and TV, but apparently not on newspapers, which were outside the FCC's jurisdiction) to present controversial issues in a fair and balanced manner. We did once have that in our country. Whether watching Walter Cronkite (CBS), Huntley and Brinkley (NBC), or Howard K. Smith (ABC) for the nightly news, the essence of the content was pretty much the same. It seems clear that we no longer have anything close to that.
The doctrine was ended in 1987 and the argument at the time was that with the advent of cable television, the oligopoly of news provision would be broken. New providers would emerge. With greater competition, the fairness doctrine was no longer necessary.
Now I want to consider newspapers a bit, even though they were outside the scope of this regulation. Newspapers have a separate section for opinions, editorials, and other columns that are not subject to the requirement of being balanced in the way the fairness doctrine required. But the news was supposed to be the news, the reporting of factual information of importance and current interest. This means that while the editorial pages of The New York Times and the Wall Street Journal could be wildly different, their front page news should have been pretty much the same. I'm unaware of anyone who has done a serious study to test whether that was ever actually true, or if instead the front pages themselves were slanted, even back in the early years of Reagan. I do recall ongoing complaints from conservatives about liberal bias in the news. I always thought that was sour grapes, as they weren't seeing the coverage they wanted to get, perhaps with a strategic element thrown in to try to influence future such coverage.
With television and radio news, the separation of news from editorial isn't as clean, at least conceptually (way back when, 60 Minutes had Andy Rooney giving opinion near the end of the show, but that was not true of nightly news), so there is reason to believe that some editorial content gets inserted into news pieces. Purely as a social scientist, I believe the idea of complete objectivity, that the news is just facts, to be an illusion. There is an important selection issue, which facts to emphasize and which to push to the background. This requires a point of view to decide. The point of view is then fused with the reporting of news. Nevertheless, one might expect the variation in how the news is reported to not be too great, so a regular reader of the New York Times could have a discussion with a regular reader of the Wall Street Journal about the news and find areas of what they agree to be true, as well as points of disagreement. We are not close to this now with TV news.
Yellow journalism existed long before the current morass. (We learned about it in grade school history class regarding the Spanish-American War. It was the precursor to fake news.) What may not be well understood is that there is a kind of market failure in news that is ad-funded or funded by subscription. In the competition for eyeballs, sensationalism that produces "addicted" viewers is a winning business strategy, even as it tarnishes the news itself. With a limited number of possible entrants, the fairness doctrine can then be seen as a counter to this market failure.
There is the question of whether we can return to something like the fairness doctrine now, with the advent of the Internet strongly suggesting otherwise. I will point out that if TV is regulated but the Internet is not, that will simply expedite the movement of programming to the unregulated environment. So one might ask, is it possible to impose the fairness doctrine on Internet news as well as on TV news? From my vantage, that would be desirable, but I don't see how it might happen.
As a social experiment, possibly one that might be done if we have a Democrat in the White House, I would like to see Fox News, and for fairness MSNBC as well, off the air for an extended period of time, say four to six months. I'm interested, in particular, in whether regular Fox viewers might willingly turn to other news programming that is "more balanced." If the answer to that is no, then one might reasonably conclude that the audience is captured by those politicians who appeal to them. This captured audience is one reason for the tribalism that has been so heavily reported.
The Hastert Rule/The End of Bipartisan Compromise in the House
The Wikipedia entry makes clear that the rule was actually first imposed by Newt Gingrich, Dennis Hastert's predecessor as Speaker. But the entry fails to point out that the rule served quite different purposes for the two. Gingrich was much more of an ideologue and a firebrand. For Gingrich, the rule was a tool for wielding power. Hastert, who is now probably remembered more for his sexual indiscretions than for his politics, was much more of a conciliator, as was his successor John Boehner. Indeed, Paul Ryan might also fit in the conciliator category.
The purpose of the rule, for the conciliator as Speaker, is to preserve job security and not face the threat of a challenge from the right flank. More generally, one might argue that the scarcity of principled Republicans, broadly considered, is because they have had to defend themselves against the right flank rather than take as their opponents those from across the aisle. Let's consider this specifically as it applies in the House.
In an ideal world scripted by Anthony Downs, legislation from the House should reflect the median voter in the House, aggregating across all representatives: Democrats, Republicans, and others. Next, we modify that ideal by noting that the Speaker is a politician with the power of setting the agenda, as described by Duncan Black. So the Speaker's preference matters in determining the legislation, and one should then predict the legislative outcome to be somewhere between the true median of the entire House and the Speaker's preferred point, with the location depending in part on the size of the majority that will vote in favor of the legislation. If the Speaker is wary of threats to his leadership from his own caucus, that can influence the Speaker's preference, but it does not preclude having legislation emerge that has bipartisan support.
Seen in this framework, the effect of the Hastert rule, when Republicans have a majority in the House, is to move proposed legislation to the median of the Republican Caucus rather than to the median of the House as a whole, and to block legislation that would require bipartisan support to pass.
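To make that shift concrete, here is a minimal sketch in Python. The ideal points are made-up numbers on a left-right scale, purely for illustration; the sketch simply contrasts the chamber-wide median that the Downs/Black logic points to with the majority-caucus median that a Hastert-style rule effectively targets.

```python
import statistics

# Hypothetical ideal points on a left-right scale from -1 (left) to +1 (right).
# These numbers are invented for illustration only.
democrats = [-0.8, -0.6, -0.5, -0.4, -0.3, -0.2]
republicans = [0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8]

# Downs/Black benchmark: legislation gravitates toward the chamber-wide median.
house_median = statistics.median(democrats + republicans)

# Hastert-rule benchmark: only bills backed by a "majority of the majority"
# reach the floor, so the relevant pivot becomes the majority-caucus median.
caucus_median = statistics.median(republicans)

print(f"Chamber-wide median ideal point:    {house_median:+.2f}")
print(f"Majority-caucus median ideal point: {caucus_median:+.2f}")
# The gap between the two numbers is the shift the rule produces; with a
# Democratic majority the same logic would shift outcomes the other way.
```

With these invented numbers the chamber-wide median sits just right of center while the caucus median sits well to the right, which is the whole point: the rule changes which median matters, not anyone's preferences.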
One might envision a more important dynamic consequence. Compromise with the Democratic caucus gets cast as disloyalty among Republicans, rather than the necessary "sausage making" part of politics. This is another factor contributing to tribalism, as practiced by our elected representatives.
I think it worthwhile to consider what being principled means, from the perspective of this analysis. Is the Speaker who sticks to the party line principled or is it the Speaker who compromises with the other party the principled one? There are many different ways to answer the question. I will answer that with the following question. Which mode would be better for us all, as a long term proposition? So I'd like to entertain the following counterfactual. Suppose that Hastert or Boehner abandoned the rule entirely, which was then followed by a challenge to their own leadership from the right wing. Suppose that challenge was effective enough to remove the then current speaker. What would happen after that? Would the Republicans in the House then find themselves in disarray and as a consequence lose their majority in the next election?
If so, maybe the experience would actually be liberating for future Speakers. The threat from the right wing, like the threats of many bullies, would be seen as not decisive. Those future Speakers would have more freedom to negotiate, because the entire Republican caucus would fear another bout of disarray. Alas, we haven't yet had this experience. I'm afraid that with a conciliator as Speaker, we never will, and with a hardliner as Speaker, then of course it won't happen.
The Undemocratic Effects of the Primary System Coupled with Low Voter Turnout
If the Median Voter Model held full sway and voters voted their preference rather than voted strategically (e.g., opt for their second choice if they felt their first choice had no chance of winning in the general election) then the winning Republican candidate in a Congressional district primary would look like the median Republican voter in that district, and likewise on the Democratic side of the equation. Then this observation needs to be coupled with the Paradox of Voting. If voting is costly (from the point of view of opportunity cost of time) and an individual voter's vote will likely not sway the outcome, then voting becomes irrational. Given this, the candidate who wins the primary should reflect the median only of those voters who do vote. Such voters overcome the seeming irrationality suggested in the Paradox of Voting.
It is known that voter participation rates in the primary are lower than voter participation rates in the general election. Moreover, Republican voters further to the right are more inclined to participate (as are Democratic voters further to the left). The primary system itself is polarizing. Adding the voter participation issue to the primary system increases the polarization.
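As a toy illustration of how low, skewed turnout pulls the primary winner away from the party's own median, consider the sketch below. The ideology distribution and the turnout rule are assumptions invented for this illustration, not estimates of any real electorate.

```python
import random
import statistics

random.seed(0)

# Hypothetical Republican primary electorate: ideology on a 0-1 scale,
# where larger numbers mean further right. Invented for illustration only.
registered = [random.betavariate(4, 2) for _ in range(100_000)]
party_median = statistics.median(registered)

def turns_out(ideology: float) -> bool:
    """Assumed turnout rule: a low base rate plus a bonus for intensity,
    here meaning distance to the right of the party's own median."""
    base_rate = 0.15
    intensity_bonus = 0.5 * max(0.0, ideology - party_median)
    return random.random() < base_rate + intensity_bonus

primary_voters = [x for x in registered if turns_out(x)]

print(f"Median of all registered partisans: {party_median:.3f}")
print(f"Median of actual primary voters:    {statistics.median(primary_voters):.3f}")
# Under the median voter logic the nominee tracks the second number, which
# sits to the right of the first: the primary selects for relative extremity.
```

The same sketch run with a mirror-image turnout rule for Democrats would push their nominee leftward, which is the polarization point.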
This issue does have some obvious remedies. Enabling crossover voting in the primaries, especially for those of the other party who have an eye on the general election, would help keep extreme candidates from winning. And making voting mandatory would counter the Paradox of Voting. How to get those remedies as outcomes, however, is far from obvious.
The Decline of Private Sector Unions
Let me begin this section by engaging in a stereotype from 1970s TV, Archie Bunker. He worked on the loading dock, was a union member, and voted for Nixon. (This last one is best explained by the fact that Archie Bunker was a proud American, believed that America is always right, and thus was for the War in Vietnam.) More generally, Archie Bunker is emblematic of the hard-hat type. I don't know whether that type is representative of current Trump supporters in many respects, but among urban males who voted for Trump, I think it a useful stereotype to keep in mind.
It's the union membership aspect that I want to focus on here. While union members are now quite often depicted as lefties, that never was fully true, even when unions wielded power and the majority of union members voted Democratic while the unions themselves contributed more to the campaigns of Democratic candidates. Unions should be considered in a quite different light. They were a normalizing force with respect to social attitudes, particularly about minorities, even if not a perfect one. Because union power was related to the size of union membership, there was incentive to include minorities in the union. Unions were also guilds and helped members - skill-wise by encouraging senior members and junior members to have a master-apprentice relationship, and socializing-wise by providing members with a ready peer group for after-hours fun. This put unions in a paternalistic role with respect to their members. The politicians understood this. Republican politicians might garner some union votes, as long as their politics wasn't explicitly anti-union.
I don't want to sugarcoat what it is that unions did. There were, of course, serious issues about connections with the Mafia and corruption among union leadership. But, and this is the important point for what is being argued here, unions served as a force from outside the political arena that the politicians needed to confront (and perhaps be fearful of). In that way strong private sector unions steered Republican politicians towards the center. This force is absent from our current politics.
The Rise of Hostage Taking as a Strategy by Organized Special Interest Groups
Lobbying has been around at least since U.S. Grant was President. Special interests would shower gratuities and attention on politicians situated on the right committees, so that when legislation the special interests cared about was being considered, they could influence the writing of that legislation in a way favorable to themselves. While I find the practice unsavory, as I'm sure many others do as well, it has been with us for such a long time that you might think it part of the process. In particular, I would be hard pressed to attribute the shift in the Republicans that Nicholas Kristof wrote about to lobbying. We need to focus on something else.
I've called the something else strategic hostage taking. An exemplar is Grover Norquist and his organization Americans for Tax Reform. He asks candidates to take the organization's "Taxpayer Protection Pledge," which in game-theoretic terms can be called a credible commitment device. The pledge says the candidate will oppose any and all measures to raise taxes. Taking the pledge gets publicized, so people who monitor the list of candidates who have taken the pledge and who want to support them in their campaigns can do so. Americans for Tax Reform might also contribute directly to the campaign, but the real leverage is in making other high rollers who don't want their taxes raised aware of which candidates are on the list. The hostage taking part comes later. Suppose there is a dire situation - a war has been declared, an enormous natural disaster has taken place, or something else in this category that necessitates substantial additional government spending. The rational response then would be to have a temporary tax surcharge to pay for that spending. But legislators who took the pledge and who want to run for reelection can't vote for such a temporary surcharge, because that would mean they've broken the pledge, and they'll be punished accordingly. That is the hostage taking.
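For readers who like to see the commitment logic written down, here is a stylized sketch; the payoff numbers are invented stand-ins for electoral and policy benefits, not estimates of anything.

```python
# Stylized payoffs for an incumbent deciding on a temporary tax surcharge
# after an emergency. All numbers are invented for illustration.
POLICY_BENEFIT = 5      # value of funding the emergency response
SEAT_VALUE = 10         # baseline value of keeping the seat
PLEDGE_PENALTY = 12     # primary challenge / donor backlash for breaking the pledge

def payoff(vote_for_surcharge: bool, took_pledge: bool) -> int:
    total = SEAT_VALUE
    if vote_for_surcharge:
        total += POLICY_BENEFIT
        if took_pledge:
            total -= PLEDGE_PENALTY   # the hostage-taking part
    return total

for took_pledge in (False, True):
    best_vote = max((True, False), key=lambda v: payoff(v, took_pledge))
    print(f"took pledge: {took_pledge!s:5}  best vote on the surcharge: {best_vote}")
# Without the pledge the surcharge is worth supporting; with it, the same
# legislator facing the same emergency rationally votes no.
```

The pledge works precisely because the penalty for breaking it is made large and public in advance, which is what makes the commitment credible.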
The National Rifle Association operates in much the same way and has completely blocked any sensible reform regarding gun control in recent times. Let's note that the Brady Bill did pass 25 years ago, in spite of the NRA, though at the time the Democrats were in control of Congress and the White House. But since then, nada on the gun control front, even though there have been so many publicly known violent gun death tragedies that could galvanize voters on the issue.
Let's observe that when politicians find they are hostages to a variety of different special interests, the normal path for them to break the arrangement is to not seek reelection. That may be a principled decision or it may simply represent fatigue from playing the role of a puppet. Surely, it is not principled to promise to surrender one's discretion once in office by abiding by the wishes of the special interests, for the sake of getting their support to assist them in being elected. We might call that by many other names, but principled wouldn't be one of them.
Conclusion
Except for the last section of this essay, I tried to make the argument in as abstract form as I possibly could. The point is that the environment that GOP politicians operate in has gotten more hostile over time, especially to politicians who try to conduct themselves in a principled manner. The reason to present this in an abstract way is to get the reader to focus on that main point, and not get hung up on the issues, which they might otherwise be inclined to do. The reason why I deviated from that script in the last section is that I didn't know a way to tell the story purely in the abstract, yet keep it readily understandable. In this case the examples convey the ideas better than a purely theoretical discussion does. Otherwise, I don't want to elevate the examples, at least not for this post.
I also want to repeat a caveat I gave at the beginning of this essay. I'm looking at changes from a while ago and totally ignoring more recent changes. This is deliberate to make the point that it has been going on for quite a while. It's not that everything was hunky-dory until the election of 2016 and then we went over a cliff. Asserting that would be a bad misreading of this history.
Assuming that I am right that the environment for governing has become more hostile for Republicans, one should ask what Democrats taking control would do. Would it reverse any of the hostility in the environment or merely delay the process till Republicans again take over? One might also ask whether Republicans, as the out party, might do anything themselves to reverse the hostility in the environment. That didn't happen in the past, but in the past prominent conservatives weren't ashamed of the Republican party. Now they are.
Let me conclude by saying we sometimes focus on the wrong period in our history. Recently, the 1920s have been considered because it was the last time where there were such great disparities in the income distribution. And the 1930s have also been considered, both because of the Great Depression and the rise of fascism. Yet I think we should look at an earlier time, to the Presidency of Theodore Roosevelt. TR was a Republican, but he also was a reformer. The problem then was the power of the trusts. TR came to be known as the trust buster. Power is distributed differently now and while Antitrust Law, still on the books, may be one tool to combat concentrated power, I suggest we need 21st century tools that take on the political analog. Here I don't want to speculate about what those tools might look like. My hope with this essay is to get some readers to do that and push the discussion along that way.
Wednesday, October 17, 2018
The Nerd Man of Razzmatazz
Yesterday I had a look at a brief survey the ELI is doing about current issues with teaching and learning. While Malcolm Brown's solicitation for completing the survey welcomed a very broad audience, a good thing to do, I found in going through the topics that there wasn't really anything for me. I should say here that nowadays I think of myself purely as the college instructor who uses technology as he sees fit, and no longer as the learning technology administrator who cares about where the profession as a whole is headed in driving the technology that is employed in instruction. So in writing my response to the survey I chose the last entry, Others, and then wrote in something like - Getting students to believe that their instructors care about them.
In my class, that is a big deal. My impression from the students is that in the other classes they take nobody actually does care about them. My course, then, comes as a surprise, though I wish it weren't. Then I started to noodle more on surprise. I seemed to recall Ken Bain making the argument in What the Best College Teachers Do, that students learn the most when they are genuinely surprised. Let's say that's true. As a teacher interested in promoting student learning, it becomes natural to ask, how can I promote surprise in the students by how I teach?
Even though my current memory is for the birds, my long term memory still seems to be functional. In looking for an answer to that question I recalled the Last Lecture of Randy Pausch, which if you haven't already seen it is worth viewing. He is the person I'm referring to in my title. Near the end of the lecture he explains that his approach to teaching involves misdirection. Students think the lecture is about something. But it really is about something else, although that something else is not revealed ahead of time. The students eventually discover the true purpose, after the misdirection has been played. This is what produces the surprise.
People who are not in the education biz might find nothing startling about this revelation, for it sounds just like good showmanship. The professor is like a magician who pulls a metaphorical rabbit out of his hat, near the climax of the lecture. But if you are in the ed biz, then the Randy Pausch approach might challenge your core beliefs. I wrote about those beliefs some years ago in a post called, Is "No Brainer" A Double Entendre? At issue is the following assertion from instructional design.
A well designed course should have clear goals.
In my post, I deconstructed this assertion some. I'll leave the reader to have at it, other than to note that if the real lesson is a surprise, then it couldn't have been a clear goal to the student at the outset. So something is fishy here, or needs further untangling, or a different way to view things so that they come back in focus and then make sense.
I want to do something else here, create my own surprise. I actually lied above (something I rarely if ever do with intention in these posts). While it is true that I did the ELI survey before starting to draft this post, I did not noodle on how to create surprise in learning to come up with my title. It was actually quite the opposite. I came up with the title (I'll explain how that happened in a bit) and then tried to find subject matter to fit it.
The title itself is actually a rhyme for a pretty well known movie starring Burt Lancaster that came out when I was a kid. I'm guessing that just about anyone my age would know the name of that movie, as it was quite popular at the time. Coming up with rhymes is something I do now, much of the time, as anyone who has seen my Twitter feed will be able to attest.
What may be less obvious is how those rhymes appear to me in gestation. It is never the whole thing in one gestalt. But with some frequency the first line seemingly appears in my head from nowhere, especially if I'm not writing a rhyme as commentary on something I've just read. I've come to appreciate this form of "discovery" as the product of my subconscious at work, solving a problem I didn't know I had.
With the first line almost there, I then had to do a Google search because I thought the last word was Razamatazz (sometimes I remember things incorrectly or never heard them the right way when I first learned them). The Google search not only revealed the right spelling, but also the meaning, razzle-dazzle. So I had my line about a nerd who did razzle-dazzle.
The next step is the heart of the matter for me. It's not the initial spark, but what follows it. I'm guessing that most people who "discovered" the line for themselves would simply drop it. There's not much to make from it, so better to move on to something more important. I operate differently. I've learned to respect these bits of serendipity as gateways into something interesting. So I started to look for how I could explain the line with something we all know already. It was probably less than a minute later that I came up with Randy Pausch's last lecture. He clearly was a nerd. Misdirection and razzle-dazzle aren't necessarily the same thing, but they are pretty darn close. To me, I had found the connection I was looking for, enough to make a post out of it.
Now a different surprise, one that tries to tie things back to ELI. Can technology help in teaching with misdirection? I'll reframe the question, which I think is really more the issue. Can technology help the learner find serendipity in the process of learning?
I think that's a big question, one worth a lot more investigation, and I want to wrap up this piece, so I'm only going to comment a little on it. My piece has hyperlinks in it. What do we know about student reading of online material? Do they read the hyperlinked content? (I'm guessing many students do not.) What might get students to change their approach and follow the hyperlinks? If they did, would they start seeing connections between things that heretofore appeared disconnected? These questions aren't on the list of questions in the ELI survey. Maybe they should be there.
Monday, October 08, 2018
Why are we so screwed up about sex and authority?
My social science nose tells me that all we've been reading and viewing about recent events, which has been overwhelming no matter your point of view, is mainly if not exclusively about symptoms. We have to get behind that, or under it, or segue to something earlier, to get at causes. I'm going to try to do that here. The main causal explanation advanced in the media is that this is a consequence of patriarchy, men abusing women is part and parcel of the system. I don't want to deny that is a possible cause. But I want to entertain other explanations, because in many cases the patriarchy explanation serves more to mask things than to enlighten on these matters.
I want to claim no expertise on this subject. What I have to go on is my own experience when I was younger, with the Bob Seger line - awkward teenage blues - a huge understatement in my case. And then I have my recent experiences teaching, where I try hard both to do Socratic dialogue in class and to get widespread class participation. Yet in the last few years I have failed in this endeavor, with the majority of the class, and sometimes all students present, opting out of responding, instead waiting for one of their classmates to make the heroic leap and raise a hand. With this, I hope to cobble together a plausible explanation for what is going on.
Let me begin with a few awkward personal experiences - failures, at least they seemed that way from my point of view - that beyond the moment conditioned my attitudes for many years thereafter. I am writing this now from the other end of the tunnel, married 28 years and with two adult children. It's possible to speak of those earlier situations today, even if memory has developed its own spin about what happened. I'm pretty sure that I would have been unable to talk with anyone about it at or around when these events occurred. That's not because I didn't think about it. It's because I didn't trust anyone to have such a conversation.
The first was in 7th grade. I was 11 or 12. There were parties that kids would host on Saturdays, in the afternoon or evening. At some of these the purpose of the party seemed to be for kids to pair up, boy-girl, and then make out. I was horrified by that prospect. I didn't have a girl to pair up with and when the few of us who were left over were hanging around, there really wasn't anything for us to do. Should I have asked one of the unpaired girls to be with me? I never did that. I remained uncomfortable the whole time.
Sometime later, at a different party, I had a good time with a girl there, more by accident than by anything else. The whole thing was spontaneous and unplanned. For a short time thereafter we were boyfriend and girlfriend. One afternoon a bunch of us rode the bus to another girl's house so that each boy-girl pair could make out. This time I had a girlfriend, so that wasn't the problem. But as we were lying on the bed I had misgivings about kissing her. It wasn't that I didn't want to do that. I was concerned about the implied message I'd be sending. Suppose we made out but then I broke it off soon thereafter. Would that be okay or not? I had no answer to that question. So we lay on the bed and perhaps exchanged some tender words, but didn't do any kissing. Inadvertently, my shyness in that situation broke it off with her. The funny thing is that at a subsequent party, where we played spin the bottle, I did kiss her. By then I just wanted a kiss and didn't care about the consequences. But it was too late for that to patch things up.
Now I will fast forward to my senior year in high school, in the fall when I was 16. I went on a double date where the other guy was my friend David and the two girls were friends as well. We went to see The French Connection. We sat in two different rows. My girl and I were right behind the other couple, with each of the guys sitting on the aisle. After the movie started I desperately wanted to hold my girl's hand. But I was having a panic attack about doing it and was simply too afraid to initiate this simple thing. I may have talked a bit to the girl during the movie. That wasn't overwhelming. Holding her hand was. I never got that far. I liked this girl quite a bit. That didn't matter when it came to overcoming my own fear of how to handle the situation.
I could recount many further incidents. Instead, I will note something else. I struggled with my weight in high school and college. There are probably many causes for that. One, obvious in this context, is that being overweight offers a ready-made excuse for failing at the boy-girl thing. And, related to that, eating (think ice cream or some other treat) is kind of a consolation prize after having failed. Now I want to juxtapose this with a couple of other factoids. Somewhere in the junior high - high school time frame I learned that a typical boy has a sexual thought about once every eight minutes. In other words, sex is on our minds much of the time. The other is the time period in which I went to high school. The sexual revolution was by then in full swing. Seemingly, everyone was making love with everyone else.
So I found myself incompetent at prelude to romance, everything that would lead up to an act, whether the act was a kiss, holding a girl's hand, or in my then unrealistic aspirations it also included nookie. This incompetence had many dimensions - not knowing what I really wanted, not knowing how to deal with the paralyzing fear that would crop up in the moment, and then having no sense whatsoever of the girl's perspective. The thing was, I knew what it meant to be competent in other areas. I definitely was not a failure across the board. But in this most important of life skills, prelude to romance, I was bottom of the barrel.
Then I made an intellectual error, projecting that the situation was quite different for most everyone else, particularly those guys who were not overweight and not too nerdy. They figured it out. They had plenty of experience and with that they got good at it. In contrast, people like me dawdled and remained incompetent at prelude to romance. Further, as we got older and they made progress while we were standing still, it actually felt like we were moving backwards. This was my (I now believe incorrect) understanding of things until quite recently.
What was my mistake? There is definitely learning by doing, but only some doing produces real learning. The type of doing that works is called deliberate practice. With deliberate practice, you try for things just outside your current skill set. This is needed to take the next real step. But sometimes these tries come to naught. Real learning entails risk of failure as an intermediate step. So real learning can be bruising to the ego, especially when you expect to be good at the new thing from the get-go.
I don't know if guys still do this, but when I was in high school there was a metaphorical baseball scorecard about how the guy did in the latest romantic encounter. It was measured by what base the guy got to. On this metric, many guys had much better early scorecards than I had, but it's quite possible that they plateaued after that and, if so, were actually not that different from me.
Now another hypothesis (guess) that explains the plateauing. People often try to make safety plays in situations where their egos might take a bruising. So they end up repeating what they did before, which produces no learning at all, rather than try something new, where they might learn from the experience. Regarding why the weekend tennis player never makes it to the professional level, this is probably enough of an explanation. However, on not being competent at prelude to romance, I think more is needed to explain the plateauing, since the rewards from getting better are much larger and are perceived as such.
The paralytic fear that I experienced on that high school date, and on other occasions too, is quite a motivator. People who have experienced such fear more than once, in situations that others would consider ordinary and not threatening, have a very powerful motivation to avoid a repeat of such circumstances. Now I have another hypothesis to advance, one that makes sense to me. The shy person and the bully face very similar situations. But they manage the situations quite differently. The shy person opts for avoidance. The bully opts for control. Juxtapose this with the type of intellectual error I made. Assume others make the same error as well. It would be much easier to simply chill out about incompetence at prelude to romance if the perception was that many others were likewise struggling with it. When the perception, however, is that others are full steam ahead, then this incompetence has the makings of a personal crisis. It's with this mindset that the person looks for a safety play.
Now let's bring authority into the mix. I'm no expert here, but I do have more relevant personal experience to tap into as a professor and as a campus administrator. Undergraduate students perceive the relationship between them and their instructor as vertical. The students tend to be deferential to authority. This is true even as other organizations break down hierarchical relationships in favor of flatter structures with more equality among members. There have been things written that argue the perception of the professor as authority depends on the gender of the instructor. Perhaps that is true. If so, it fits into the story being told here.
Vertical relationships are inherently trust relationships. The subordinate trusts that the superior will act in accord with what is best for the organization as a whole. Trust relationships of this sort create a reputation for the superior. Preserving that good reputation then serves as a motive for the superior to indeed act in a way that is best for the organization. Yet it is possible that the superior instead 'cashes in' on the reputation. I actually teach about this in my class on the economics of organizations. We look for ways to keep the cashing in from happening, whether through ethos or through incentives. Neither of these is perfect. Cashing in sometimes happens. And if the perception is that you can cash in and go undetected, then the behavior will persist. The incentive argument against cashing in definitely includes the likelihood of being caught as part of the incentive.
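To make the incentive piece of that argument concrete, here is a minimal sketch of the calculation. The function name and all the numbers are made up for illustration; this is not a model from my course materials, just the back-of-the-envelope logic.

```python
# A minimal sketch of the "cashing in" decision described above.
# The function name and all numbers are illustrative assumptions, not from any course.

def cashing_in_pays(one_time_gain, detection_prob, reputation_value):
    """Cashing in 'pays' only if the one-time gain exceeds the expected
    loss of future reputation value from being caught."""
    expected_loss = detection_prob * reputation_value
    return one_time_gain > expected_loss

# With a small chance of being caught, abusing the trust relationship can look rational.
print(cashing_in_pays(one_time_gain=10, detection_prob=0.05, reputation_value=100))  # True

# Raise the likelihood of being caught and the same abuse no longer pays.
print(cashing_in_pays(one_time_gain=10, detection_prob=0.50, reputation_value=100))  # False
```

The only point of the sketch is that the probability of detection enters multiplicatively, which is why the incentive argument hinges on whether the superior expects to be caught.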
Now I want to switch to my experience as an administrator. What might be exhilarating in the work early on eventually starts to seem like a burden. This is especially true under two different circumstances. One is that you take a lot of criticism/flak for making decisions that you feel are right but that remain controversial. You didn't sign up for the job to take such criticism and you start to look for compensations that continue to make it worthwhile to do overall. The other is that you plateau in your learning from doing the work and look for other reasons than the work itself for continuing to do it. Compensations of various sorts might then offer these other reasons. In my own case, I definitely felt I was plateauing a year or so before I retired. I recall that at staff meetings I would monopolize the conversation more than was really good for the group, just because I could do that based on my position. It's a simple example to illustrate the point.
There is still one more point that is needed to give this story some bite. With this one I have no experience, so I'm having a harder time trying to explain it. It is that non-consensual sex is nonetheless perceived as a reward by the person committing rape. What is the origin of that perception? Does it stem from incompetence at the prelude to romance or from something else? Admittedly, this is the part of the story where patriarchy might creep back in, even as I've been trying to construct an alternative to that explanation. Alternatively, it might be a confounding of intrinsic and extrinsic motivation, where guys get so caught up in how many times they've reached home plate that it becomes their entire focus. Then the pleasure of the moment becomes subsidiary, perhaps even entirely lost.
* * * * *
With #MeToo we have reached the possibility of punishing rape after the fact, outside of the legal system, via embarrassment of the perpetrator by exposing multiple such acts, with that possibly leading to other painful consequences, such as loss of a job. This does not preclude subsequent legal penalties being imposed, but the legal penalties may be less important than the public embarrassment in the overall scheme. Fundamentally, from an economics perspective, this is a deterrence approach. Deterrence can be effective. Yet most of us subscribe to the view that an ounce of prevention is worth a pound of cure. Deterrence may not always be a very good preventative, either because the person the deterrence is aimed at is immature (e.g., young male drivers are known to be at high risk for automobile accidents quite apart from the consequences for future car insurance premiums) or because the person rationally believes he can get away with it, which the powerful might still believe in spite of #MeToo.
The purpose of doing a causal analysis is to look for other ways to prevent misogyny and rape, methods that would take effect before the fact. While I meant my analysis as broad strokes only, and even then it may be that the approach is wrongheaded and something entirely different is needed, I did want to conclude by posing a question that assumes the approach is not too far off the mark. Is there anything that might be done during the teenage years, and also during the early twenties, in other words for high school and college students, that might combat their feelings of incompetence at prelude to romance and help them understand that they are not alone in having these feelings?
I definitely do not have a full program to offer here. I only have a few errant thoughts. Back around 1990, I was teaching an undergraduate class where several of the kids were taking ballroom dancing (perhaps to fulfill a physical education requirement, but of that I'm less sure). It seems to me that a class in ballroom dancing, one that would get the shy kids to take it, is the sort of thing that might work.
More recently, I attended a workshop on campus about effective use of clickers in high-enrollment courses. One of the presenters was the instructor for a Gen Ed course on Human Sexuality, taught in the Department of Kinesiology and Community Health. They did anonymous surveys in that class, where the student's identity remained hidden, asking some pretty personal questions about the students' own sexual practices. As you might imagine, there was intense interest in the results those surveys produced, and the anonymity was a key feature in getting large, if not universal, participation. That made it seem possible for students to get accurate information about their peers in this domain, though whether that could be done earlier, in high school, and done online rather than with clickers, I leave for others to determine. Further, whether students would be as interested in information about those prelude acts as they seem to be in the sex itself would have to be investigated.
I want to note one thing that seemingly cuts in the wrong direction. Some part of being competent at prelude to romance has to entail being competent at face-to-face conversation, including the type where romance is not part of it at all. Yet we know that younger people get less practice at this now because they are on their electronic devices so frequently. (The nervousness that I talked about above is likely absent when the communication is mediated by an electronic device.) I, for one, believe that young people should get much more practice with their schmooze skills, with part of the goal being that they learn to like the experience. Yet how to do that effectively is something we are all struggling with.
Once in a while I ask myself whether, in some social domain, race relations for example, we are better off today than we were 50 years ago. Now the question I'm asking is: what can we do now so that we are better off 50 years from now? I hope others start to ask the same question.
Thursday, October 04, 2018
The Neville Chamberlain Moment
The name Neville Chamberlain is associated with the word appeasement. Chamberlain was the Prime Minister of Great Britain from 1937 to 1940. The appeasement refers to Britain's and France's reaction to the German annexation of the Sudetenland, an area of Czechoslovakia. This was allowed to happen without resistance, in an effort to maintain peace. World War I was still a distinct memory, and avoiding another war was the motivation for appeasement.
Yet Chamberlain was still Prime Minister in 1939, when Germany annexed the rest of Czechoslovakia. That move made Chamberlain change his tack. Britain guaranteed Poland's independence, and when Germany invaded Poland that September, Britain declared war on Germany, a necessary reaction to this uncontained aggression. This escalated what had been a regional conflict into World War II. Clearly Chamberlain didn't want war. But there was no viable alternative.
With the very odd casting of Jeff Flake in a leadership role, Republicans as a whole in Congress are now having their Neville Chamberlain moment. This is not just their embrace of Donald Trump. It is a longer trajectory thing where the Republicans have practiced a scorched earth approach to legislation - there wasn't a single Republican vote for Obamacare, even though it was modeled on a Republican approach to health care. The ultimate scorched earth tactic was not taking up the nomination of Merrick Garland to the Supreme Court.
As I write this, I don't know what will happen to the Brett Kavanaugh nomination, but I suspect that Republicans in Congress have been surprised by how difficult the process has been. It should be a wake-up call to them that they need to change their ways. How many other Republicans in Congress share Flake's view that the tribalism needs to end, I don't know. Those who do share the view need to be much more forthcoming about it.
At present, there is a sense that the outcome of the November election still hangs in the balance. Remaining silent on both Trump and tribalism thus is a kind of hedge. As a very large number of Republicans are not running for reelection, notably Speaker Paul Ryan, it seems clear that the hedge approach takes its emotional toll, just as appeasement must have done for Neville Chamberlain.
While previously I had thought that some of these Republicans might speak out during the lame duck session of Congress, I now believe that the various events surrounding the Kavanaugh nomination have triggered an urgent need for Republicans to push back against their own scorched earth approach. That needs to happen now.
In all likelihood, if that did happen, the Democrats would make large electoral gains this November and the Republicans would return to minority party status, but we would avoid the Civil War that seems increasingly inevitable. The Democrats offer their own sort of resistance, of course. But the Democrats can't prevent that Civil War on their own.
Wednesday, September 19, 2018
The Median Voter Model and Was Hillary Clinton The Wrong Candidate?
This will be a very brief post (for a change).
As our politics has seemingly become more polarized, the insight that Anthony Downs gave us in An Economic Theory of Democracy, which applied Harold Hotelling's earlier model of spatial competition to the strategic positioning of candidates when voting is by majority rule, seemed to make sense in a bygone era but to be obsolete now. Suppose that is not true and the Median Voter Model is still applicable.
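For readers who haven't seen the model, here is a minimal sketch of the Downs/Hotelling logic in Python. The voter ideal points and the best-response rule are assumptions of mine, chosen only to illustrate the mechanism: two office-seeking candidates who can reposition freely end up converging on the median voter.

```python
import statistics

# Minimal illustration of median-voter convergence under majority rule.
# Voter ideal points and the best-response rule are illustrative assumptions.

voters = [0.1, 0.2, 0.35, 0.5, 0.55, 0.7, 0.9]  # ideal points on a left-right line
GRID = [i / 100 for i in range(101)]             # platforms a candidate may choose

def vote_share(own, rival):
    """Each voter supports the closer candidate; exact ties split evenly."""
    share = 0.0
    for v in voters:
        if abs(v - own) < abs(v - rival):
            share += 1.0
        elif abs(v - own) == abs(v - rival):
            share += 0.5
    return share / len(voters)

def best_response(rival):
    """The platform on the grid that maximizes vote share against the rival."""
    return max(GRID, key=lambda x: vote_share(x, rival))

left, right = 0.2, 0.8   # starting platforms far apart
for _ in range(10):      # candidates alternate best responses
    left = best_response(right)
    right = best_response(left)

print(statistics.median(voters), left, right)  # all three end up at 0.5
```

The conjecture in the next paragraph is just this model with the median voter identified as a particular group of real voters.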
The conjecture here is that median voters are suburban women in Republican households. A surprise to me, and I think to many others who have looked at the results from the 2016 election, is that as grotesque as Donald Trump's behavior was to women, in a way that should have been evident to all, Republican suburban women largely voted for Trump anyway. Hillary Clinton had been so demonized by the Republican attack machine that they wouldn't vote for her, even as bad as Trump was. It is in that sense that I mean Hillary Clinton was the wrong candidate. It's not about her politics and positions. It's that she couldn't win these critical voters. (Incidentally, if you notice now that Nancy Pelosi is supposedly the most hated politician in the country, this is pretty much for the same reason.)
The situation with Kavanaugh, I believe, and I'm sure many others believe as well, will be determined by how suburban Republican women see his nomination at this point. My guess is that such voters are repulsed by Kavanaugh, even if many won't articulate that because they don't want to be overtly critical of the Republican party. If that's right, at a minimum the confirmation vote for Kavanaugh will be delayed till after the election in November, and quite possibly the nomination will be withdrawn.
Of course, this all could be wishful thinking. But it is clear that the Republican attack machine can't go after Christine Blasey Ford now, and Chuck Grassley's decision not to grant her request to delay the testimony before his committee till after an investigation has taken place will almost surely blow up.
As for the Senate Republicans - they are hoist with their own petard.
Sunday, September 09, 2018
Dissonance and Democracy
It feels as if we're living within a William Faulkner novel with the entire country part of the story.
Some of my friends have been posting on Facebook about the speech President Obama made on my campus this past Friday. I watched it on replay, in bits and chunks, so I could work through what I was hearing. It was a speech addressed to college-age students who are old enough to vote. The core message was exactly that. Vote. Work to get out the vote of others. If enough of that happens, the system will autocorrect, not immediately but over time, not perfectly but sufficiently that we can feel good about the society we live in. President Obama was careful enough to say there is no guarantee this will happen. The current situation, with concentrated powerful interests holding sway, has means and motive to sustain itself. But the masses have voting as a way to restore real democracy.
The thing is, this is not a fair fight and it hasn't been for some time. The Constitution itself builds in some of this unfairness. Wyoming gets the same number of Senators as California. And Puerto Rico, which has more than 5 times the population of Wyoming, gets none. This much we probably have to live with. But that unfairness might be brought to the light of day. To my knowledge, it largely goes ignored.
Then there is the gerrymandering, which has received considerable attention. To a certain extent, gerrymandering does at the House of Representatives district level what making a state into a state does at the Senate level. Gerrymandering is definitely not in the Constitution. The number of Representatives per state is determined roughly by the Census. The population of the state relative to the population of the country as a whole gives the pro rata number of Representatives for that state. But the borders of the individual Congressional districts within a state are set at the state level. Currently the Republicans hold the vast majority of governorships and state houses. You can pretty well guess that the gerrymandering will continue.
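As a back-of-the-envelope illustration of that pro rata arithmetic (the state names and populations below are made up, and the actual apportionment method, equal proportions, differs in its details):

```python
# Toy pro-rata apportionment, as described "roughly" above.
# State names and populations are made-up numbers for illustration;
# the real apportionment method (equal proportions) differs in its details.

HOUSE_SEATS = 435
populations = {"BigState": 39_500_000, "MidState": 12_700_000, "SmallState": 580_000}
total = sum(populations.values())

for state, pop in populations.items():
    print(f"{state}: {pop / total:.1%} of the population, about {HOUSE_SEATS * pop / total:.0f} seats")
```

The gerrymandering problem arises one level down from this arithmetic, when the district lines within a state are drawn.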
Then there is voter participation. In today's New York Times there is an Op-Ed about voter suppression. It is disturbing to read about how the voter suppression discriminates against poor minority voters.
Then there is the Citizens United decision and its ridiculous consequences for campaign spending. The Koch brothers are reported to be spending $400 million on the upcoming election. That's free speech!
This past week David Leonhardt wrote a column that argued, among other things, that the Republicans stole a Supreme Court seat. In a column a day later, Paul Krugman said it was really two Supreme Court seats that were stolen. The seat Merrick Garland would have filled is one. That the seat went unfilled may have turned the election, in which a plurality of voters nonetheless voted for Hillary Clinton. So the Presidency itself may have been stolen (Krugman didn't say that, but I am saying it) and that, in turn, is how the second Supreme Court seat got stolen. I'm more inclined toward Krugman's accounting on this matter than Leonhardt's.
So, one wants to know first whether President Obama's prescription - get out the vote - can overcome this unfairness or not. If getting out the vote does work, do the Republicans then get punished for their theft of Supreme Court seats? Or do we just move on with business as usual?
I do think President Obama can be fairly criticized now for his administration not making a big deal about Russian interference in our elections. It is possible that his administration could have made a big point of this in August 2016. But they didn't. They kept a lid on the information. I believe that was so they wouldn't be accused of tipping the election, even while the Republicans were doing everything they could to do just that.
Then one wants to know if it is time for Democratic voters to take off the gloves too and start to fight dirty, to match the Republicans who have been doing it for some time. What would fighting dirty mean? I confess here something of a mental block. My thought process is that I'm both worried and scared about what might happen. I would like to have the powerful interests and Republicans in Congress share those feelings. So I've been asking myself what would do that. My mental block is that I haven't had good answers to that question apart from the threat of violence upon them. But maybe there are other answers. Perhaps embarrassment can work or organized campaigns (shut down Koch Industries - they exacerbate global warming, shut down Fox News - they regularly broadcast lies, and so on). I am not skilled about how to make video go viral, but that knowledge exists. Such campaigns are possible.
It also might be that some more surreptitious methods, employed by hackers with Democratic sympathies, can work some magic. I don't know, but it also seems possible.
Play it clean or play it dirty, not as a first mover but as a response to the Republicans - which should it be? I think that question is worth asking, as is thinking through an answer.