Senator Elizabeth Warren, at the Re/code conference, was asked a question about getting voters to participate and tying that to the requisite investment in America. She responded first by affirming the sort of investment the questioner was talking about, then by blaming the plutocrats, and finally by saying the only way for things to change is for voters to get mad as hell - the Howard Beale line. She is right that these things are needed, but she is wrong in seeming to imply that this can somehow happen on its own. It can't.
First, even if the Democrats retain the White House, little can be accomplished if Republicans maintain control of Congress, even just one chamber of it. That possibility looks likely from here. Understanding this, the typical voter is more inclined to respond with "Woe is me" than with "I'm madder than hell." Something needs to be done to break this realistic and dismayed apathy.
Second, consider a serious post-mortem on the initial stimulus package from the winter of 2009, after President Obama took office. While one may conclude it was the best that could be done at the time, given the rush job necessary to get the bill passed, it generated a lot of criticism and disgust from the other side because there was no discipline about what got into the bill. Were the Democrats actually to sweep Congress in the 2016 election, would something similar happen again? Voters likely wouldn't want that. They'd want what the country needs, but not the private agenda of each Democratic member of Congress tacked on as well. If voters are expected to participate in great numbers, they should demand, in return, some assurance of disciplined legislation to rebuild the country.
Third, Rome wasn't built in a day, and one should not expect America to be rebuilt in one election cycle, even if the 2016 elections are very important. The requisite sort of investment needs to be sustained. The pattern whereby a negative reaction to the party in power flips control in the following cycle needs to be squarely addressed. Otherwise this looks like a lot of talk, not a real commitment to action.
My regular readers know that last December I wrote a document called How to Save the Economy and the Democratic Party - A Proposal, which attempted to address these issues. I didn't really expect it to get much traction, but I was hoping to see something similar from party leaders, like Elizabeth Warren.
Our media fascination with the Presidential election fails us here, given that divided government produces gridlock. Voters need to be aware of Congressional races much more than they have been in the past, to understand whether candidates for the House and the Senate share the Presidential candidate's views about appropriate policy. Voters really want to approve a shared policy agenda that makes sense to them. The process as it now stands leaves much of that agenda to be negotiated once the candidates have assumed office. It is that process which must change if voters are to turn out in high numbers.
Is it possible for that message to get through to the party leadership?
Saturday, May 30, 2015
Friday, May 29, 2015
The Economy as One Big Brain
In my second year of graduate school at Northwestern, 1977-78, I took the year-long graduate math analysis sequence. The professor was Robert Welland, who had an interesting persona, with a certain flair and personal idiosyncrasy. He kept his hair long and sometimes in the middle of lecture would pull out a comb to straighten his hair. I've never experienced another instructor doing anything like that. In class, and in conversation too, he would often preface what he had to say with the admonition, "Christ, man!" This was his alert that what you said wasn't quite on the mark.
I may have been the only non-math graduate student in the class, though of this I'm not sure, but Welland liked me for studying economics and, at least during the first quarter, for seeming to have more on the ball than the other students. He told me he read my face in class to gauge how his lecture was going. If I seemed to show comprehension, things were going fine. If I looked confused, he assumed the class as a whole was in trouble.
I may have had certain advantages over the other students in achieving this position. Some or all of them may have been in their first year, and graduate school is a slog then. Also, I had taken that topology class at Cornell, which I've written about before, most recently here, so I had some confidence going in that I wasn't over my head with the math. There is also the fact that if you study pure math you sometimes lack a sense of why the concepts matter. The economics helped me there, in particular when we studied Hilbert Spaces, since the inner product of two vectors has a ready economic interpretation: when one vector is a price system and the other a commodity bundle, their inner product yields the value of the bundle.
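To make that interpretation concrete, here is a minimal sketch with made-up prices and quantities (the numbers are purely illustrative, not from any actual example of Welland's or mine):

```python
# Hypothetical price system and commodity bundle (illustrative numbers).
# The inner product of the two vectors is the market value of the bundle.
prices = [2.0, 5.0, 1.5]   # price per unit of each commodity
bundle = [3.0, 1.0, 4.0]   # quantity held of each commodity

# Inner product: sum over commodities of price times quantity.
value = sum(p * x for p, x in zip(prices, bundle))
print(value)  # 2*3 + 5*1 + 1.5*4 = 17.0
```

The same computation is what a Hilbert Space course would write as ⟨p, x⟩; seeing it as "value of the bundle" is what gave the abstract machinery its economic meaning for me.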
By the third quarter for sure, and quite possibly earlier, some of the other students had overtaken me as class performers, but by then Welland had formed an impression of me that would persist. The consequence was that a few times I had discussions with him outside of class, on topics mainly unrelated to coursework. In one of those talks we were walking on the landfill east of Norris Center, next to Lake Michigan, and he gave me his view of economics, which at the time was quite futuristic and in retrospect still seems remarkable to me. He said that economics would die as a discipline as computing power took off and all transactions could be recorded and stored in some big database. There would no longer be a need for economic models. The data could tell the full story.
I don't know whether Welland was aware of Moore's Law then, but even Gordon Moore wasn't predicting that his law would hold over many decades at the time he made his famous prediction. And computing was remarkably primitive when I took Welland's class. This was before the personal computer. If you wanted to run a program, you had to write it up on punch cards and submit it as a batch job on the mainframe at the Vogelback computing center. So, based on what was actually possible at the time, Welland's prediction seems rather fantastic. Nonetheless, it appears he had a substantial interest in economics, based on the book titles he authored, so he clearly came to his ideas with much forethought. (I have subsequently learned that the linked book is not by Welland, as Amazon says, but by a different fellow named Weiland.)
Though Welland didn't say this to me, his thinking seems to imply that socialism would eventually take over. At the time I was in graduate school, decentralization was a popular idea among economic theorists. Here decentralization means many atomistic decision makers making choices independently, coordinated by the invisible hand or some other mechanism, rather than decision making by one all-powerful center. There were other reasons for valuing decentralization apart from limited computing ability. Another biggie was the limited speed of information transmission. If the lag was too great between the time information emerged at the edge of an organization and the time it was received and well understood by the center, it would be more efficient for the decision to be made at the edge, even if the decision maker there couldn't factor in other information that might be relevant. A third issue was incentives. People might have reason to misreport the true information so as to increase private gain. The last reason I'll mention here is complexity. Centralized decision making works best when the nature of the information to be collected is already understood. So, for example, a Web form could be used to elicit the requisite information. Decentralization is apt to perform better when there is so much uncertainty that it is unclear even how to describe the current situation, in which case making some sense of it requires a good deal of creativity.
* * * * *
There still is economics as a discipline. Welland was either wrong about the possibility that data can tell the full story or he underestimated how much computing power it would take to achieve that end. But the recent rise in concern about smart machines, coupled with the emergence of analytics as a field, makes it seem at least possible that we are marching toward Welland's vision. I really don't know. Instead of further speculation, I now want to turn from how things actually are, and are likely to be in the near future, to pure science fiction - how things might be if computing did advance in the way Welland envisioned.
In this utopian vision, smart machines are the salvation from our economic woes and, to turn things on their head, enable the economy to fully employ all people who are willing to work, pay them a decent wage, and restore a middle class lifestyle to the bulk of the population. How do we get there from here? Let's envision a bunch of different open source software development projects in artificial intelligence that are aimed at fundamentally changing capitalism as we know it. The first one, which I will describe in some detail for illustration, is called The Virtual CEO.
As our story opens, the year is 2050. Moore's Law is miraculously still alive and well. Artificial intelligence has advanced to the point that it is quite capable of performing the executive decision making function. Actually, it can perform it better. The Virtual CEO never has a feeling in the gut to drive decisions. All choices are made based on available data. The Virtual CEO remains focused on long term objectives for the organization. It cares not for scoring short term profit at the expense of long term positioning. It is never venal and always fair with customers, employees, shareholders, and the larger community.
The Virtual CEO functions best in flat organizations that aim for democratic decision making with employee input an important factor. Indeed, a big part of the impetus behind the open source software development project that has produced The Virtual CEO is to convert existing hierarchical organizations to this structure and to create new organizations with this structure that can out-compete the older hierarchical organizations in the marketplace.
Such organizations will feature a flat compensation scheme for employees. Since the Virtual CEO, which really performs the function of all upper level management in the organization, demands no compensation whatsoever, there is more revenue available to pay employees a decent wage with solid benefits and to actually take a labor intensive approach to the work the organization produces. This is part of the underlying objective of the organization. Another part is to do well by customers, offering them a good service at a fair price. One of the sidebar consequences of the Virtual CEO project, an important one to be sure, is the discovery that people would much rather interact with other people than with machines, once they are convinced that the people are there to help them rather than to screw them. Machine interaction is maintained for the routine stuff, but much is not routine. It is heavily customized. In effect, each customer has become the designer of an experience that the organization helps to provide. Much of the organization's staff serve as consultants for this design.
Still another part of the objective is for the organization to be socially responsible. There are several components to this. One, of course, is to embrace environmentally friendly production techniques. Another is to be a contributor to the community where the organization is located, to help keep it a place where people want to live and interact outside of work. A third is to be a fair competitor in the marketplace. This means that product and service quality are what the organization focuses on. The organization shuns market manipulation via merger or acquisition, predatory pricing, and other unfair practices. A provision in the organization's charter prevents individual shareholders from concentrating ownership. The Virtual CEO software maintains a steady vigilance, much like current-day anti-virus software, to ward off attempts at stock purchases aimed at concentrating ownership behind the names of faux individual owners.
Companies run by the Virtual CEO software might still fail from time to time, either because the product they set out to develop failed to realize its potential or because competitors came up with something better and they couldn't catch up. But companies managed by the Virtual CEO software share information to better mitigate general business risk and thereby to better anticipate where they should be heading. The Justice Department is okay with this sort of information sharing because they know it won't be used for insider trading or other market manipulation. In this way the market coordinates where human managed firms never could, so some of the needless destruction we associate with capitalism is avoided.
As a consequence, the half-life of a Virtual CEO managed organization is longer and such places of work become attractive career opportunities. In turn, part of the Virtual CEO software aims to manage employee careers, providing opportunities for personal growth, suitable mentorship, and helping each employee balance work with life events.
* * * * *
This vision is deliberately ironic in viewing automation as a substitute for the top executive function, instead of how automation currently proceeds, going for the lowest rung of the ladder first and then climbing to successive rungs. It seems clear who in the current scheme of things would resist this sort of change, and so would be cast in the role of the heavy in our story. I wish I knew how to write a compelling short story myself, one that would be a fun and engaging read, to create a broad audience for these ideas. But I lack that sort of skill. Perhaps one of my readers will take up the challenge.
To conclude this piece, let me note that once the premise from the previous section is embraced, that executive function can be performed well by artificial intelligence, then there are clearly many other areas of our economy where we'd want to see it deployed. Given its current very low approval rating, who wouldn't want a Virtual Congress, for example?
But the real reason to have such a story, or perhaps many such stories with a similar theme, is to get us to ask what things would look like if they were fundamentally better than they are now. If we could actually agree on that, couldn't we try to head in that direction without machines needing to run the show? Then wouldn't we have benefited from Welland's vision, even if he was quite wrong in his prediction? Sometimes, I think, it is better to have the intelligent mistake than the right answer.
Sunday, May 24, 2015
Fooling some of the people some of the time, No surprises, Krugman on the pending Trade Agreement
I have a fondness for old underwear. The give and take between its shape and mine has reached a tacit understanding. With something new you're never sure what you're getting. I trust the tried and true. It's not that I'm averse to taking risks of any sort. I experiment - fairly often - with my teaching and sometimes with my writing too. Some prior thinking, wishful though it may be, suggests a likelihood of success. The thinking can be wrong and sometimes the thinking is okay but the execution is poor. Either way, the experiment fails. I can handle the failure, at least most of the time. I understand the principle, nothing ventured nothing gained. And I believe that over the long haul an experimental approach produces much better results on average.
What I dislike (detest is really a better word for my feelings) is being scammed. I recall being scammed in the 1980s, when I was still single. I was trying to buy a dining room table, custom made by the Amish, with my own design on the top. I went through a middleman in Champaign, which proved to be a big mistake. I should have gone to Arthur and negotiated directly with the people who would build the table. As it played out, the middleman took my money for a down payment, then nothing. Eventually I heard that the store went bankrupt. It was probably already on the verge of that when I gave him my check. My payment was throwing money down the drain.
Some years later my parents were scammed by their then financial adviser, who worked at Prudential-Bache. It was part of a larger scam; this guy wasn't just the one rotten apple. Nonetheless, he was really slimy. He took language lessons from my mom at our house, which in retrospect amounted to a way to get my parents to drop their guard and take him at his word. My parents were retired, and this was their life savings we're talking about. The whole thing was vile.
I tell these stories to note that in both cases the person being scammed was highly educated. In my case, indeed, my degree is in economics, which at a minimum should make me wary of the moral hazard present in any such transaction. And nowadays I teach about this very thing in my economics of organizations course. Market transactions come at a cost, as Coase noted. Sometimes those costs manifest as scamming. Even in my parents' case, while my mom was pretty clueless about financial transactions, which is why I paid her bills and managed her portfolio after my dad passed away, my dad was a lawyer and knew which way was up. Yet that wasn't enough to get him to walk away from the con before it had a chance to play out. There is a lot of talk about predatory behavior toward the poor and uneducated. I have no doubt that this happens, in great volume. But I want to note that being upper middle class and well educated doesn't itself make you immune from these threats, even if it does lessen the risk.
I don't know how you'd measure this in a meaningful way, but I have a sense that scamming is on the rise. Consequently, I've begun to see it everywhere, including in quite ordinary settings. Consider, for example, the checkout at the grocery store. Did they always have tabloids and candy at the checkout, or is that a comparatively recent phenomenon? I don't know. I can remember as a kid going shopping with my mom at Bohack's or Waldbaum's and that the store was so crowded, unlike the wide aisles there are now where I shop. But I have no recollection of what the checkout was like then. Since in some sense "the market works," the placement of those items in the store is a tribute to human weakness as a driver of some (much) of our behavior. I don't know if the tabloids can be found elsewhere in the store as well. I'm quite sure the candy has its own aisle. One can walk right past that aisle, no problem. One cannot avoid checkout. The Bible says, "lead us not into temptation." The market says otherwise.
Then think of the sponsored ads in Facebook, which appear in the right sidebar adjacent to your news feed, just below the trending items. Those ads have a tabloid feel to me and recently I've had the thought that the checkout at the grocery is being recreated in other facets of our lives, often in virtual environments where they are even more pernicious because they appear to be omnipresent. I never bought a tabloid at the grocery store - never. (It's not that I'm a purist. When I did ride the subway during the summer after my sophomore year in college, I would on occasion pick up a discarded copy of the Daily News that somebody left on their seat. But I never paid for a copy of the Daily News myself and most people wouldn't even regard it as a tabloid.) Quite recently I have clicked on occasion on those ads in Facebook. We do things on our own computers at our own homes that we'd never do in public.
Next, think about those online threats that are obviously more pernicious: phishing and malware. Because I still monitor some listservs for information technologists, I know that phishing threats are on the rise. I know further that on my own campus they've taken a proactive step to deter phishing by not allowing an immediate click-through on links embedded in email. This seems necessary, though it is somewhat cumbersome - anyone remember the constant security prompts in the Microsoft Vista OS? The necessity has arisen because education efforts aimed at making users more alert to phishing have failed. I asked myself why education of this sort doesn't seem to work. When I was in the campus IT organization I argued for more of this sort of education effort. I really don't know the answer, but my guess is that advertising is so pervasive, and people click through so often, mainly in an unthinking way, that too often they don't perceive the threat until it is too late.
The economist's "solution" to all this moral hazard as scamming, which should gum up markets so they don't function well at all, is to look to "money burning" for the answer. The issue is inference and what an uninformed consumer makes out of such an action by an informed seller, given that at first such money burning appears entirely irrational. A paper that influenced my thinking at the time I read it, by Milgrom and Roberts, treats product pricing and advertising as conjoined signals of product quality. The argument is very clever. But it is either wrong or incomplete when applying the theory to practice. Too often consumers make the wrong inference, in actuality. In theory, consumers figure out what is going on. This seems true not just for ordinary folks like me and my parents. It seems just as true for top-flight regulators vis-à-vis the markets they are supposed to keep on the straight and narrow. Consider that Alan Greenspan thought the financial markets were self-regulating. How could he believe that considering all the evidence to the contrary?
* * * * *
I rarely go to the movie theaters these days. The last picture I can recall seeing at the theater was Lincoln. One reason for this is that my tastes diverge from the mainstream so that even so-called good movies quite likely won't appeal to me. Couple this with an inherent sensibility as a cheapskate; the thought of paying money to sit through a movie I don't like offends me. As an alternative I sometimes surf the various movie channels on the satellite TV to see whether any appeal to me enough that I might record them. The supply is abundant. Very few of the films make it through my own internal filter. I can't explain what it takes for a picture to grab me. Even some of the films I do record I end up watching only a bit and then turn them off.
Last week, when the rest of the family was out of town, I recorded two such movies that I hadn't seen before. The first I'll mention is The Wolf of Wall Street. It gets quite high ratings on the IMDB site and the main review is very favorably disposed to the film. But I could only watch a little of it before I became disgusted with it and turned it off. The story is told from the perspective of the scammer, the consummate salesman, a complete bs artist. It's not a perspective that provides entertainment for me.
The other movie is Noah. It starts off in a weird way, with odd special effects. It occurred to me while watching it that I really only have the briefest of sketches in mind about the Noah story - the flood, the saving of the animals, the building of the Ark. I don't know the details at all. I had the sense at the beginning of the movie that it was straying substantially from the story. So I paused it and went to read the review at IMDB. The review confirmed my suspicion. It said the movie was horrible as did the generally low rating. Nevertheless, I continued to watch it. I did find a different review that was quite enchanted with the film and its director, Darren Aronofsky. That was part of the reason I kept viewing. The other part was asking how Russell Crowe and Jennifer Connelly would agree to make a film that seemed this bad. I recalled the Freedom Writers had started out pretty awful and for the first half hour was hard to view, but ultimately turned into a worthwhile film. By the end of Noah, I felt likewise. And maybe it didn't stray too far from the actual Noah story, though on that I'm not sure and I'm not driven to read it just now. Here I simply want to consider it an allegory that is relevant to the present.
Noah has the reputation as an honest, strong, and good man. This is why "The Creator" chose him. (The word "God" is not mentioned in the move.) But Noah makes a mistake in trying to understand what The Creator has asked him to do. Noah correctly understands it is his task to save the animals. But he incorrectly infers that mankind is to die out, as punishment for all the sins. This includes his own progeny, who themselves have not sinned. Noah sees there is badness in all of us humans. From that he infers that humans should not be allowed to live beyond saving the animals, who don't have the evil in them that is in humans. Based on this belief Noah makes a horrible and irreversible mistake. He allows the potential mate for his son Ham to die when he was in a position to save her. Where the son trusted the father until then, now the son has doubt about whether that trust is warranted and if instead he should seek vengeance on his father for this horrible act.
Further, by a miracle, the wife of Noah's other son, Shem, is taken with child after she had been thought to be barren from a near fatal injury wrought in childhood. Noah pledges that if the baby is female he will kill it, to keep his promise that mankind must die out. By his behavior Noah loses the embrace of his wife and the rest of his family. They plot to rescue the parents and the baby. But Noah foils the plot. At birth it turns out there are twins, both girls. Noah starts in on the task of ending their life. But he can't go through with it. They live and mankind can then regenerate. Noah becomes a recluse, punishing himself for his bad acts. At the end of the movie, however, he reconciles with his family, who have forgiven him. They wonder why he didn't carry through on his promise to kill the babies. Noah tells them that when he looked at the babies he saw goodness in them.
* * * * *
When I was in college it was popular to believe that you could judge a person by how they acted when the chips were down. Somebody who came through then you could trust. Somebody who punted then was a jerk. Subsequently I learned there's a bit more to it. First, there may be a mitigating circumstance so that a person doesn't come through but that is not enough for you to infer the person is a jerk. Second, many of us are neither purely trustworthy nor purely a jerk. We have our better angels but at other times make a pact with the devil. Batting average matters here. Then there is that ignoring ordinary behavior and looking only at situations where the chips are down is throwing out a lot of information, some of which is apt to matter. Someone who regularly demonstrates small acts of kindness in circumstances that aren't quite so stressful deserves to be trusted.
As a campus administrator I learned about a different way to earn trust. This was about how to manage bad news. The approach is called "No Surprises" and is based on the idea to get bad news out early, so people can act on it and take appropriate mitigations. I should note here that getting bad news out early is not something that will be appreciated at the time. People will react to the bad news, first and foremost. The fact that you're letting them know early is of secondary importance to them, at best. So No Surprises works primarily in the negative. If you conceal bad news that in retrospect does come out and people feel they were entitled to hear the news early, then your reputation for honesty is lost and it subsequently becomes very difficult, if not impossible, to repair the reputation afterwards. In other words, No Surprises is the policy you embrace when you realize that the cover up is always worse than the original crime.
You would think that No Surprises would epitomize decision making within Higher Ed administration, but my experience is that is often not true. The near term goal arises to "control the story" and such a goal is inconsistent with No Surprises. So information that might have some elements of bad news in it gets held closely and is not released for public consumption. Bad news invites criticism. The fear that such critique might turn into a tidal wave of protest via social media, a fear that has some foundation, tends to trump the potential good from avoiding a coverup.
Many people make free speech their cause. They argue that when dissenting voices are silenced we all lose. We become too smug in our own beliefs. We fail to see the error in our ways. This view gets sanction, of course, from the First Amendment of the Constitution. There is no Amendment of the Constitution that directly addresses No Surprises. There is instead something else, a tradition of muckraking based on Freedom of the Press. It is the job of the Fourth Estate to expose the bad news. One can't trust people in authority to disclose it on their own.
As an empirical matter this is perhaps true. But I'd like to look at the two taken together, freedom of speech and No Surprises both. Were both to hold sway, public criticism of ideas must be a normal function, something that is tolerable, even if the criticism is expressed in extreme form. But real and meaningful criticism, the type that promotes debate which forces the original ideas either to strengthen or to die out, seems a rarity to me nowadays. Instead, we have the preaching to the choir type of critique only, which forces positions to harden rather than to be reconsidered. And in this way No Surprises has a potential very large role to play to reverse things, because people in the know don't see the release of information as an ethical issue to which they themselves are accountable. If it were otherwise, it could very well shift the nature of the debate itself.
* * * * *
In this last section I want to briefly consider Paul Krugman's column from last Friday on the Trans-Pacific Partnership. This is the salient paragraph from the piece:
In writing this Krugman is taking on the Obama administration. Not that long ago, Krugman was found extolling the Obama Presidency. So this critique is unlike the criticism, really ridicule is a better description than criticism, of the President from the Right, where Obama bashing has become a kind of sport. While at the beginning of his essay Krugman lauds the administration for its mainly transparent approach in governing, on the Trans-Pacific Partnership the administration has been anything but.
Contrast what Krugman says to what Secretary of Commerce Penny Pritzker says in a recent interview on the Charlie Rose show. She argues that TPP is mainly about opening up emerging markets in Asia, where high protective tariffs prevent U.S. firms from competing, "on a level playing field." It is believable to me that the overall picture is sufficiently complex that there are instances which support what the Secretary says. But those instances don't suffice in making the argument. What is the the gist of TPP? I can't determine this on my own. I rely on what I read about from pundits like Krugman and what I hear from government officials like Secretary Pritzker to form my opinion. All I can say for sure is that regarding the gist of the TPP the two views are inconsistent.
So I'm in a position where to make a determination I need to make an inference. The facts that I'm aware of are insufficient in themselves to do the task. In other words, I very well might be fooled. Given that, I look at who has incentive to mislead me, which is deciding things on quite other than the merits of the arguments about TPP.
If this is really the best that can be done, how is possible to keep the electorate from becoming extremely cynical, where it might not have been already? Or am I reading this wrong? I have several friends who are strong supporters of the President. Several of them tend to agree with what Krugman says as well. In this case you can be one or the other but not both. It would be good for us to argue about TPP, but as I've already discussed in the previous section, we don't seem capable of having these sorts of discussions. The loud tend to drown out the reasoned. That becomes quite unpleasant. Then why bother?
In the movie Noah, Ham leaves the family because there is nothing left to keep him together with them. That's the type of feeling I have now, on the scamming and on TPP as well.
What I dislike (detest is really a better word for my feelings) is being scammed. I recall being scammed in the 1980s, when I was still single. I was trying to buy a dining room table, custom made by the Amish with my own design on the top. I went through a middleman in Champaign, which proved to be a big mistake. I should have gone to Arthur and negotiated directly with the people who would build the table. As it played out, the middleman took my money as a down payment and then delivered nothing. Eventually I heard that the store went bankrupt. It was probably already on the verge of that when I gave him my check. Paying him was throwing money down the drain.
Some years later my parents were scammed by their then financial adviser, who worked at Prudential-Bache. It was part of a larger scam; it wasn't a matter of just one rotten apple. Nonetheless, this guy was really slimy. He took language lessons from my mom at our house, which in retrospect was a way to get my parents to drop their guard and take him at his word. My parents were retired and this was their life savings we're talking about. The whole thing was vile.
I tell these stories to note that in both cases the person being scammed was highly educated. In my case, indeed, my degree is in economics, which at a minimum should make me wary of the moral hazard present in any such transaction. And nowadays I teach about this very thing in my economics of organizations course. Market transactions come at a cost, as Coase noted. Sometimes those costs manifest as scamming. Even in my parents' case, while my mom was pretty clueless about financial transactions, which is why I paid her bills and managed her portfolio after my dad passed away, my dad was a lawyer and knew which way was up. Yet it wasn't enough to get him to walk away from the con before it had a chance to play out. There is a lot of talk about predatory behavior toward the poor and uneducated. I have no doubt that this happens, in great volume. But I want to note that being upper middle class and well educated doesn't itself make you immune from these threats, even if it does lessen the risk.
I don't know how you'd measure this in a meaningful way, but I have a sense that scamming is on the rise. Consequently, I've begun to see it everywhere, including in quite ordinary settings. Consider, for example, the checkout at the grocery store. Did they always have tabloids and candy at the checkout, or is that a comparatively recent phenomenon? I don't know. I can remember as a kid going shopping with my mom at Bohack's or Waldbaum's, and that those stores were so crowded, unlike the wide aisles in the store where I shop now. But I have no recollection of what the checkout was like then. Since in some sense "the market works," the placement of those items in the store is a tribute to human weakness as a driver of some (much) of our behavior. I don't know whether the tabloids can be found elsewhere in the store as well. I'm quite sure the candy has its own aisle. One can walk right past that aisle, no problem. One cannot avoid checkout. The Bible says, "lead us not into temptation." The market says otherwise.
Then think of the sponsored ads in Facebook, which appear in the right sidebar adjacent to your news feed, just below the trending items. Those ads have a tabloid feel to me, and recently I've had the thought that the grocery store checkout is being recreated in other facets of our lives, often in virtual environments where it is even more pernicious because it appears to be omnipresent. I never bought a tabloid at the grocery store - never. (It's not that I'm a purist. When I rode the subway during the summer after my sophomore year in college, I would on occasion pick up a copy of the Daily News that somebody had left on their seat. But I never paid for a copy of the Daily News myself, and most people wouldn't even regard it as a tabloid.) Quite recently I have, on occasion, clicked on those ads in Facebook. We do things on our own computers in our own homes that we'd never do in public.
Next, think about those online threats that are obviously more pernicious: phishing and malware. Because I still monitor some listservs for information technologists, I know that phishing threats are on the rise. I know further that on my own campus they've taken a proactive step to deter phishing by not allowing an immediate click-through on links embedded in email. This seems necessary, though it is somewhat cumbersome. Anyone remember the Microsoft Vista OS? The necessity has arisen because education efforts aimed at making users more alert to phishing have failed. When I was in the campus IT organization I argued for more of this sort of education, so I've asked myself why it doesn't seem to work. I really don't know the answer, but my guess is that advertising is so pervasive, and people click through so often in an unthinking way, that too often they don't perceive the threat until it is too late.
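The deterrence step works by rewriting links so a click first lands on a warning page. Here is a minimal sketch of that idea, assuming a hypothetical gateway at mailcheck.example.edu; it is my own illustration, not the actual campus implementation.

```python
# Sketch of the "no immediate click-through" idea: every URL in an
# inbound message gets replaced by a link to a warning page, which
# shows the real destination before the user proceeds.
import re
from urllib.parse import quote

# Hypothetical gateway; a real deployment would run its own service.
GATEWAY = "https://mailcheck.example.edu/warn?dest="

def rewrite_links(message_body):
    """Replace each http(s) URL with a gateway link that defers the click."""
    def to_gateway(match):
        return GATEWAY + quote(match.group(0), safe="")
    return re.sub(r'https?://[^\s<>"]+', to_gateway, message_body)

body = "Your account is locked! Visit http://phish.example.com/reset now."
print(rewrite_links(body))
```

The user still reaches the destination, just with one extra, deliberate step, which is exactly why the approach is effective but feels cumbersome.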
The economist's "solution" to all this moral hazard in the form of scamming, which should gum up markets so they don't function well at all, is to look to "money burning" for the answer. The issue is inference: what does an uninformed consumer make of such an action by an informed seller, given that at first the money burning appears entirely irrational? A paper that influenced my thinking at the time I read it, by Milgrom and Roberts, treats product pricing and advertising as conjoined signals of product quality. The argument is very clever. But it is either wrong or incomplete when the theory is applied to practice. In theory, consumers figure out what is going on. In actuality, too often they make the wrong inference. This seems true not just for ordinary folks like me and my parents. It seems just as true for top-flight regulators vis-à-vis the markets they are supposed to keep on the straight and narrow. Consider that Alan Greenspan thought the financial markets were self-regulating. How could he believe that, considering all the evidence to the contrary?
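To see why money burning can work in theory, here is a toy two-period version of the signaling logic, with illustrative numbers of my own (this is a sketch of the general idea, not Milgrom and Roberts's actual model): a high-quality seller earns repeat business, so it can afford conspicuous advertising that a low-quality seller cannot profitably mimic.

```python
# Toy "money burning" signal.  Profit per sale is the same for both
# seller types; only the high-quality type tends to get repeat business.
MARGIN = 10.0       # profit per sale
REPEAT_HIGH = 0.9   # chance a high-quality seller gets a repeat sale
REPEAT_LOW = 0.1    # chance a low-quality seller does

def two_period_profit(repeat_prob, ad_spend):
    """First-period sale plus expected repeat sale, net of advertising."""
    return MARGIN + repeat_prob * MARGIN - ad_spend

# Any ad spend above MARGIN * (1 + REPEAT_LOW) = 11 makes mimicry a loss
# for the low type, while the high type still comes out ahead.
ad = 12.0
print("high type:", two_period_profit(REPEAT_HIGH, ad))  # positive
print("low type:", two_period_profit(REPEAT_LOW, ad))    # negative
```

In theory a consumer who sees the lavish ad can infer high quality from it; the complaint above is that in practice consumers often fail to draw that inference.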
* * * * *
I rarely go to the movie theaters these days. The last picture I can recall seeing at the theater was Lincoln. One reason for this is that my tastes diverge from the mainstream so that even so-called good movies quite likely won't appeal to me. Couple this with an inherent sensibility as a cheapskate; the thought of paying money to sit through a movie I don't like offends me. As an alternative I sometimes surf the various movie channels on the satellite TV to see whether any appeal to me enough that I might record them. The supply is abundant. Very few of the films make it through my own internal filter. I can't explain what it takes for a picture to grab me. Even some of the films I do record I end up watching only a bit and then turn them off.
Last week, when the rest of the family was out of town, I recorded two such movies that I hadn't seen before. The first I'll mention is The Wolf of Wall Street. It gets quite high ratings on the IMDB site and the main review is very favorably disposed to the film. But I could only watch a little of it before I became disgusted with it and turned it off. The story is told from the perspective of the scammer, the consummate salesman, a complete bs artist. It's not a perspective that provides entertainment for me.
The other movie is Noah. It starts off in a weird way, with odd special effects. It occurred to me while watching it that I really have only the briefest of sketches in mind about the Noah story - the flood, the saving of the animals, the building of the Ark. I don't know the details at all. I had the sense at the beginning of the movie that it was straying substantially from the story. So I paused it and went to read the review at IMDB. The review confirmed my suspicion. It said the movie was horrible, as did the generally low rating. Nevertheless, I continued to watch it. I did find a different review that was quite enchanted with the film and its director, Darren Aronofsky. That was part of the reason I kept viewing. The other part was asking how Russell Crowe and Jennifer Connelly could have agreed to make a film that seemed this bad. I recalled that Freedom Writers had started out pretty awful and for the first half hour was hard to view, but ultimately turned into a worthwhile film. By the end of Noah, I felt likewise. And maybe it didn't stray too far from the actual Noah story, though on that I'm not sure and I'm not driven to read it just now. Here I simply want to consider it an allegory that is relevant to the present.
Noah has the reputation of an honest, strong, and good man. This is why "The Creator" chose him. (The word "God" is not mentioned in the movie.) But Noah makes a mistake in trying to understand what The Creator has asked him to do. Noah correctly understands that it is his task to save the animals. But he incorrectly infers that mankind is to die out, as punishment for all its sins. This includes his own progeny, who themselves have not sinned. Noah sees there is badness in all of us humans. From that he infers that humans should not be allowed to live beyond saving the animals, who don't have the evil in them that is in humans. Based on this belief Noah makes a horrible and irreversible mistake. He allows the potential mate for his son Ham to die when he was in a position to save her. Where the son had trusted the father until then, now the son doubts whether that trust is warranted and wonders whether he should instead seek vengeance on his father for this horrible act.
Further, by a miracle, the wife of Noah's other son, Shem, is taken with child after she had been thought to be barren from a near-fatal injury wrought in childhood. Noah pledges that if the baby is female he will kill it, to keep his promise that mankind must die out. By his behavior Noah loses the embrace of his wife and the rest of his family. They plot to rescue the parents and the baby. But Noah foils the plot. At birth it turns out there are twins, both girls. Noah starts in on the task of ending their lives. But he can't go through with it. They live, and mankind can then regenerate. Noah becomes a recluse, punishing himself for his bad acts. At the end of the movie, however, he reconciles with his family, who have forgiven him. They wonder why he didn't carry through on his promise to kill the babies. Noah tells them that when he looked at the babies he saw goodness in them.
* * * * *
When I was in college it was popular to believe that you could judge a person by how they acted when the chips were down. Somebody who came through then you could trust. Somebody who punted then was a jerk. Subsequently I learned there's a bit more to it. First, there may be a mitigating circumstance, so that a person's failing to come through is not enough for you to infer the person is a jerk. Second, many of us are neither purely trustworthy nor purely a jerk. We have our better angels but at other times make a pact with the devil. Batting average matters here. Third, ignoring ordinary behavior and looking only at situations where the chips are down throws out a lot of information, some of which is apt to matter. Someone who regularly demonstrates small acts of kindness in circumstances that aren't quite so stressful deserves to be trusted.
As a campus administrator I learned about a different way to earn trust. This was about how to manage bad news. The approach is called "No Surprises" and is based on the idea of getting bad news out early, so people can act on it and take appropriate mitigating steps. I should note here that getting bad news out early is not something that will be appreciated at the time. People will react to the bad news, first and foremost. The fact that you're letting them know early is of secondary importance to them, at best. So No Surprises works primarily in the negative. If you conceal bad news that later does come out, and people feel they were entitled to hear the news early, then your reputation for honesty is lost, and it becomes very difficult, if not impossible, to repair that reputation afterwards. In other words, No Surprises is the policy you embrace when you realize that the cover-up is always worse than the original crime.
You would think that No Surprises would epitomize decision making within Higher Ed administration, but my experience is that it often does not. A near-term goal to "control the story" arises, and such a goal is inconsistent with No Surprises. So information that might have some elements of bad news in it gets held closely and is not released for public consumption. Bad news invites criticism. The fear that such critique might turn into a tidal wave of protest via social media, a fear that has some foundation, tends to trump the potential good from avoiding a cover-up.
Many people make free speech their cause. They argue that when dissenting voices are silenced we all lose. We become too smug in our own beliefs. We fail to see the error in our ways. This view gets sanction, of course, from the First Amendment of the Constitution. There is no Amendment of the Constitution that directly addresses No Surprises. There is instead something else, a tradition of muckraking based on Freedom of the Press. It is the job of the Fourth Estate to expose the bad news. One can't trust people in authority to disclose it on their own.
As an empirical matter this is perhaps true. But I'd like to look at the two taken together, freedom of speech and No Surprises both. Were both to hold sway, public criticism of ideas would have to be a normal function, something that is tolerable even if the criticism is expressed in extreme form. But real and meaningful criticism, the type that promotes debate and forces the original ideas either to strengthen or to die out, seems a rarity to me nowadays. Instead, we have only the preaching-to-the-choir type of critique, which forces positions to harden rather than be reconsidered. And in this way No Surprises has a potentially very large role to play in reversing things, because people in the know don't see the release of information as an ethical issue for which they themselves are accountable. If it were otherwise, it could very well shift the nature of the debate itself.
* * * * *
In this last section I want to briefly consider Paul Krugman's column from last Friday on the Trans-Pacific Partnership. This is the salient paragraph from the piece:
In any case, the Pacific trade deal isn’t really about trade. Some already low tariffs would come down, but the main thrust of the proposed deal involves strengthening intellectual property rights — things like drug patents and movie copyrights — and changing the way companies and countries settle disputes. And it’s by no means clear that either of those changes is good for America.
In writing this Krugman is taking on the Obama administration. Not that long ago, Krugman could be found extolling the Obama Presidency. So this critique is unlike the criticism (ridicule is really a better description than criticism) of the President from the Right, where Obama bashing has become a kind of sport. While at the beginning of his essay Krugman lauds the administration for its mainly transparent approach to governing, on the Trans-Pacific Partnership the administration has been anything but.
Contrast what Krugman says to what Secretary of Commerce Penny Pritzker says in a recent interview on the Charlie Rose show. She argues that TPP is mainly about opening up emerging markets in Asia, where high protective tariffs prevent U.S. firms from competing "on a level playing field." It is believable to me that the overall picture is sufficiently complex that there are instances which support what the Secretary says. But those instances don't suffice to make the argument. What is the gist of TPP? I can't determine this on my own. I rely on what I read from pundits like Krugman and what I hear from government officials like Secretary Pritzker to form my opinion. All I can say for sure is that regarding the gist of the TPP the two views are inconsistent.
So I'm in a position where, to make a determination, I need to make an inference. The facts that I'm aware of are insufficient in themselves to do the task. In other words, I very well might be fooled. Given that, I look at who has an incentive to mislead me, which means deciding things on something quite other than the merits of the arguments about TPP.
If this is really the best that can be done, how is it possible to keep the electorate from becoming extremely cynical, where it might not have been already? Or am I reading this wrong? I have several friends who are strong supporters of the President. Several of them tend to agree with what Krugman says as well. On TPP, though, you can hold one position or the other but not both. It would be good for us to argue about TPP, but as I've already discussed in the previous section, we don't seem capable of having these sorts of discussions. The loud tend to drown out the reasoned. That becomes quite unpleasant. Then why bother?
In the movie Noah, Ham leaves the family because there is nothing left to keep him together with them. That's the type of feeling I have now, on the scamming and on TPP as well.
Wednesday, May 20, 2015
D(Evaluations)
Universities supposedly have a bunch of smart people working for them. Why do they continue to rely on such poor methods for evaluating teaching? Is this really the best that can be done? Why haven't we seen innovation in this area? (Simply putting the paper evaluations online doesn't count as innovation to me.) Below let me suggest questions to ask that might motivate the type of innovation I have in mind.
Let me begin by noting that often when I attend a workshop on teaching there is an evaluation performed at the end. This is for a single session, which usually goes for an hour. It suggests the possibility that something similar could happen in a course. Indeed, I tried this once in my own teaching. Technically, I relied on Google Forms, which was easy to use and allowed me to share the results with the students. Content-wise, I didn't try to imitate the end-of-semester course evaluations at all. Instead I concocted my own metrics for what would make for a good class session. (The reader should note that this was in an honors seminar with only 17 students.) Eventually the students tired of this, and the response rate got so low it didn't seem worthwhile any longer. But while we were getting a good response, the information the students supplied helped to shape subsequent sessions, and the students could see their own influence on the course trajectory. So one might imagine that early on this sort of more formative feedback would encourage student participation. Anyone else trying something like this might not want to survey each session individually, but instead run the survey at some frequency - weekly, biweekly, monthly, I'm not sure which of these is best - or perhaps only at the start of the course, tapering off thereafter. What I'd like to convey here is not that I know what is optimal (I don't), but that in determining what is optimal there is a tradeoff between getting timely formative feedback and not burning out the students who provide that feedback.
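One crude way to operationalize that tradeoff is to let the response rate itself set the survey cadence. The thresholds below are hypothetical, a sketch rather than tested practice:

```python
# Poll every session at first, then widen the gap between surveys as
# the response rate falls, to avoid burning out the students.
def next_survey_gap(current_gap, responses, enrolled):
    """Return the number of sessions to wait before the next survey."""
    rate = responses / enrolled
    if rate < 0.3:              # students are tuning the survey out
        return current_gap * 2  # e.g., weekly becomes biweekly
    if rate > 0.7:              # healthy participation; keep the cadence
        return current_gap
    return current_gap + 1      # modest taper in between

gap = 1  # start by surveying every session
for responses in [15, 12, 8, 4]:  # out of 17 enrolled, as in the seminar
    gap = next_survey_gap(gap, responses, 17)
    print(f"{responses} responses -> survey every {gap} session(s)")
```

The point of the sketch is only that the taper can be made responsive to the students' behavior rather than fixed in advance.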
So envision some system of formative feedback provided by the students within the semester. Now a bunch of related questions.
Let me close with one other point. The piece linked below talks about instructors manipulating their course evaluations, one technique being to bake chocolate chip cookies for the students. So there has been some innovation in the way courses are conducted, generated by the need to boost teaching evaluations. Might some of these innovations in instruction be useful if severed temporally from when the end-of-semester evaluations are given?
In particular, I like the idea of having a celebration in class somewhere around mid semester and I'm all for the instructor bringing in treats for this purpose. It creates a little bond between teacher and students and it shows that at least in some ways they're all on the same team. I came to understand this a few years ago when in the middle of the semester I spent 5 days in the hospital and had to miss a couple of classes as a result. That hospital stay coincided with Halloween, which my wife and I missed that year. As a result there was a bunch of candy in the house. The first class where I returned to teaching I had each student come up to the front of the classroom, give me a high five, and take a piece of candy. I know I was touched by that moment. I think some of the students were as well.
The next year (I now teach one course a year) around the same time of the semester I received a small inheritance. This was a different cause for celebration. This time it was donuts and cider. I ditched the high five, though now I'm not sure why. Then last fall we had a celebration again, though this time there was no external event that warranted the celebration. It just seemed to me a good thing to do based on the prior experience.
Are there other things instructors might do as regular practice that they first tried just to boost their teaching evaluations? I don't know. I do know that we don't experiment enough with our teaching overall and if some experiments happened for less than ethical reasons, that should not itself condemn the practice, particularly if the ethical issues can be addressed. So let's improve teaching evaluation. And in the process, let's improve teaching as well.
Let me begin by noting that often when I attend a workshop on teaching there is an evaluation performed at the end. This is for a single session, which usually goes for an hour. It suggests the possibility that something similar could happen in a course. Indeed, I tried this once in my own teaching. Technically, I relied on Google Forms, which was easy to use and allowed me to share the results with the students. Content-wise, I didn't try to imitate the end of semester course evaluations at all. Instead I concocted my own metrics for what would make a good class session. (The reader should note that this was in an honors seminar with only 17 students.) Eventually the students tired of this and the response rate got so low it didn't seem worthwhile any longer. But while we were getting good responses, the information the students supplied helped to shape subsequent sessions and the students could see their own influence in the course trajectory. So one might imagine early on that this sort of more formative feedback would encourage student participation. Anyone else trying something like this might not want to survey each session individually, but instead run the survey at some frequency - weekly, biweekly, monthly, I'm not sure which of these is best - or perhaps only at the start of the course, tapering off thereafter. What I'd like to convey to the reader here is not that I know what is optimal, I don't, but that in determining what is optimal there is a tradeoff between getting timely formative feedback and not burning out the students who provide that feedback.
So envision some system of formative feedback provided by the students within the semester. Now a bunch of related questions.
- Should that feedback be used only within the class or should it be shared with the department and others on campus to count for measuring teaching performance?
- Should the type of questions asked in the formative feedback be standardized (within department, within college, on campus) to allow cross-course comparisons, the way instructor ratings are now used for measuring teaching performance? Would standardization of this sort lessen the effectiveness of the feedback? Or would it, in contrast, encourage more instructors to try the approach?
- Suppose, hypothetically, that an outside observer were also to attend each class session. Would that observer come up with impressions similar to the students', as measured by the questions on the feedback form, or differ from the students in significant ways? Would it then matter who that observer was, whether a faculty member in the same department, a faculty member in a different department, a campus pedagogy expert, a student not taking the course but instead representing the student senate, or an outside visitor on an accreditation visit?
- In that seminar class I didn't give exams. Should there be formative assessment about the exams too? (In a recent piece in the New York Times, Richard Thaler, one of the founding fathers of Behavioral Economics, wrote that students care about raw scores on exams even when there is grading on a curve, so that rationally they shouldn't care. But they seem to prefer outcomes near 100 in raw score, which seems to convey the message that they mostly understand - even if it really doesn't.) If such formative assessment were given, and if it indicated student disposition about the exams, and if that proved to correlate highly with how the course was rated in the end of term evaluation, what would the campus response be? After all, in this contingency the data would be showing that course evaluation is essentially an irrational reaction to testing. Wouldn't that put pressure on the campus to lessen its reliance on course evaluations for evaluating teaching?
Let me close with one other point. The piece linked below talks about instructors manipulating their course evaluations, one technique being to bake chocolate chip cookies for the students. So there has been some innovation in the way courses are conducted, generated by the need to boost teaching evaluations. Might some of these innovations in instruction be useful if severed temporally from when the end of semester evaluations are given?
In particular, I like the idea of having a celebration in class somewhere around mid semester and I'm all for the instructor bringing in treats for this purpose. It creates a little bond between teacher and students and it shows that, at least in some ways, they're all on the same team. I came to understand this a few years ago when in the middle of the semester I spent 5 days in the hospital and had to miss a couple of classes as a result. That hospital stay coincided with Halloween, which my wife and I missed that year. As a result there was a bunch of candy in the house. In the first class after I returned to teaching I had each student come up to the front of the classroom, give me a high five, and take a piece of candy. I know I was touched by that moment. I think some of the students were as well.
The next year (I now teach one course a year) around the same time of the semester I received a small inheritance. This was a different cause for celebration. This time it was donuts and cider. I ditched the high five, though now I'm not sure why. Then last fall we had a celebration again, though this time there was no external event that warranted the celebration. It just seemed to me a good thing to do based on the prior experience.
Are there other things instructors might do as regular practice that they first tried just to boost their teaching evaluations? I don't know. I do know that we don't experiment enough with our teaching overall and if some experiments happened for less than ethical reasons, that should not itself condemn the practice, particularly if the ethical issues can be addressed. So let's improve teaching evaluation. And in the process, let's improve teaching as well.
Saturday, May 09, 2015
Boundaries Are Always Harder to Define
Among other categories, the campus tracks enrollments by race. Using those categories, last fall I had a majority non-white class. It was the first time for me. I suspect it won't be the last. Here is the breakdown, though before I provide it I want to note that instructors are not given this information. What they are given from Banner (the Student Information System) is the home address. Based on that and other identifying information, of the 23 students overall who completed the class, 8 were international students (7 Chinese and 1 Korean), 3 were Asian-American students, and 1 was Hispanic.
I suspect that many instructors will never look at the home address in Banner. Will they be aware of what category each of their students is in, simply by an eyeball test? In yesterday's NY Times column, called Our Biased Brains, Nicholas Kristof argues that very early on we learn about racial identities and most of us (African Americans are the exception) develop a preference for the race we are a member of. No doubt we are conscious about race, but do we make mistakes from time to time in the assignment of category? And might it be that our own mental categories don't coincide with the official categories the campus maintains?
One troublesome aspect of the official categories is that International Student is a designation that in theory encompasses all races but in practice has come to imply East Asian; witness the piece from Inside Higher Ed, The University of China at Illinois. In other contexts the expression "they all look alike to me" is offensive. Yet for the unwitting instructor (I include myself here) it is quite easy to mistake an Asian-American student for an international student and vice versa.
Another category I struggle with is Hispanic. There is first the Hispanic-Latino naming dispute. The campus seems to be hedging its bets on this one, where one of the race categories is Hispanic yet in area studies there is the Department of Latina/Latino Studies. More to the point for me, I don't really know who counts in this category, and my mental model of who counts is not reliable. To illustrate, I did a Google Image Search on Sephardic Jew. Below is one of the pictures I found, apparently of a well known actor, Hank Azaria, though I was not familiar with him. Is he Hispanic? My immediate answer is: maybe. He was born in Queens, where I grew up. His grandparents apparently came from Greece. Is that what's decisive or is it largely immaterial? In the process of looking via Google I learned that Jerry Seinfeld is a Sephardic Jew. I do not consider Jerry Seinfeld to be Hispanic.
These puzzles for me led to further mental associations. I thought of the film version of West Side Story. That movie came out when I was a kid and was something of a big deal, in part because Leonard Bernstein did Young People's Concerts that were on TV in NYC, and the music from West Side Story was sometimes featured in those. Anyway, in the movie George Chakiris (born in Ohio, of Greek ancestry) plays Bernardo, the leader of the Puerto Rican gang, The Sharks. And Natalie Wood (born in California, of Russian ancestry) plays Maria, Bernardo's sister. Then I started to think of other films where White actors play characters of other races. One I came up with immediately, because I saw part of it on TV recently while doing my workout on the elliptical, was Remo Williams, where Joel Grey plays the Korean martial arts master, Chiun. That movie is an enjoyable farce. What about in a more serious setting, are there movie examples of that? I thought of A Passage to India, which has at its core the tensions across race and culture in the presence of colonial rule. The chameleon-like actor, Alec Guinness, plays an Indian character, Professor Godbole. I'm sure the reader of this piece can come up with many other such examples.
For me, the effect of these examples is to blur what it means to be of one race or another. In contrast, the type of bean counting that the campus does on enrollments, perhaps mandated by law (of that I'm unsure), seems to sharpen the distinctions between the races. I then asked myself a question I wasn't able to fully answer. Which do we want, blurring or sharpening? My partial answer is this. Most people identify themselves with some subgroup of all human beings, and that subgroup serves as their primary identifier. In some cases that subgroup may be racially defined, in which case sharpening the distinction is preferred. In other cases the subgroup may be defined by something other than race, religion for example, in which case the racial distinctions should be blurred. But then, trying to apply this thinking to myself, I'm not even sure what my own primary subgroup is. I feel like something of an outsider to any one category. Though my parents were Jewish, as were a majority of the kids I went to public school with, I'm completely non-observant now. Similarly, in some ways I feel strongly that I'm an academic and defined by my professional identity, but I don't try to publish in refereed journals anymore and haven't for some time. How many people feel like outsiders this way? I don't know. I do know that I don't like to be lumped into the single category, White. I hope with this to convince the reader that I feel uneasy with the entire discussion, but that these feelings are useful to keep in mind as we turn to the next part.
* * * * *
When I started at Illinois back in 1980, upwards of 90% of the undergraduate students were from within the state. Most came from the more affluent suburbs of Chicago. There was also a sizable population from downstate. The city of Chicago itself was underrepresented. The only real way I learned how this mattered was via one of my fellow assistant professors in the Economics department, who happened to be the daughter of the Belgian Ambassador to the United States. She said our students were too provincial. That is not a word I used regularly in my working vocabulary, which is perhaps why I remember it now. Also, as someone who grew up in NYC, I had substantial fear of people from the Midwest, which my 4 years in graduate school at Northwestern didn't eradicate completely. So I accepted her conclusion without wondering how she reached it. It occurs to me now that she must have directly experienced some intolerance; a woman teaching economics was uncommon then, and somebody who spoke English with a French accent was rarer still here.
Fast forward twenty years. I was the representative from my campus on the CIC Learning Technology Group. The CIC is the academic arm of the Big Ten but also includes the University of Chicago, which is not in the Big Ten. At the time the group was sponsored by the Provosts. The various representatives in the group were either Associate Provosts for Undergraduate Education or, like me, the leading educational technologist for their campus. It was quite a collegial group and I became friendly on an individual basis with many of the members. In a sidebar conversation, my colleague from UIC, our sister campus in Chicago, told me that my campus is not hospitable to African-Americans. I was already somewhat aware of this via discussions on my campus about the "digital divide" that I was engaged in. But I hadn't realized the issues impacted the entire university, not just my campus. Regarding selectivity and the prestige of a degree, my campus was ranked much higher than UIC. But qualified African-American students might nonetheless prefer attending UIC. That was a real issue. It probably still is.
The above two anecdotes are there to show that issues with lack of collegiality along racial/cultural lines have been with us on campus at least since I began here as a faculty member. Those issues are more prominent now. There are several reasons why. One is the change in demographics of our undergraduate student population toward a much larger share of international students, coupled with a much greater reliance on tuition as a revenue generator for the university. A second is the prominent role that the Internet plays and its enabling of hate speech, even in circumstances where race is not really at issue, such as whether to cancel classes due to cold weather. A third is the greater attention being given to race at a national level as a consequence of the senseless killings of black men by police that have made the news.
I am a fairly regular reader of the New York Times Opinion Page. Among the regular columnists, Charles Blow is the one who writes regularly about race issues, often taking on the Republican attack machine in the process. It might be expected that an African American columnist will write on race issues, but as a regular diet of columns I find this problematic. So it occurred to me that Blow should swap columns with somebody else at the Times, Joe Nocera for example. Nocera has written a spate of columns on the NCAA as an evil cartel. Imagine if for a month or so Blow would write about the NCAA, where race could certainly enter the discussion but the constraint would be a connection to NCAA issues, and Nocera would write about race relations, preferably entirely outside the world of sports. The alternative perspective would be helpful to readers.
In that spirit I am writing about issues that were taken up in a recent campus report on Racial Microaggressions. While I have written on race issues before, it is far from my usual fare and it is not easy for me to do. This post is taking much longer to write than is typical for me. It almost certainly would be a good thing if other voices who don't normally talk about race began to do so in a thoughtful way, and did so publicly. Perhaps this piece will encourage others to do likewise.
Part of the reason to do this is to show that while commentators are like minded, they do disagree on occasion. For example, I took issue with some of the recommendations in the report, particularly its call for mandatory diversity training for all on campus. The campus doesn't do mandatory training well. It doesn't even do such training okay. It does a poor job. The Ethics Training that all faculty and staff must complete on an annual basis makes people resentful of the training and doesn't make them any more ethical in the performance of their jobs. I believe that newly entering students get Alcohol Awareness training, but that clearly doesn't work. Even the training that the IRB administers for anyone doing research on human subjects is heavy handed. One has the feeling, taking this training, that part of it is there for the campus to avoid liability. Better educational approaches wouldn't worry about that at all, but instead ask what activities best get the trainees to appreciate the issues at hand. This calls for the trainee to make some contribution to the product, as a co-author contributes to writing a paper. Campus training doesn't allow for that.
In the process of doing background reading for writing this piece I became aware of the Office of Inclusion and Intercultural Relations. Among other activities for students, they offer an I-Connect Diversity and Inclusion Workshop for all first-year students on campus. I could not find on their Web site how long this office has existed nor could I find how many cohorts of students have been through the I-Connect workshops. It would be good to have some sense of the efficacy of this workshop. Lacking that here, I will simply move on. But let me make one point first. This office is part of Student Affairs. That may explain, in part, why I was unaware of it. Faculty tend to be more knowledgeable about the academic affairs side of the house.
In the concluding section I suggest several bottom-up approaches that might work better than mandatory training. These are things that could be tried. I really don't know what will work. I'm not sure anybody else knows either. But it seems to me there are many possible activities that could energize the community in a good way around race issues. So I certainly don't want to argue that we should just sit on our hands. I only want to discourage top-down mandates, which tend to be heavy handed and are therefore ineffective. In the remainder of this section I'd like to recount a few other anecdotes to illustrate other dimensions of the issues.
The first of these is my own experience witnessing something very much like a microaggression, really a series of them, mainly coming from one student in particular. This happened in a seminar for the Campus Honors Program that I taught in fall 2009. I wrote about it at the end of the semester in a post called Theism - "Pan", "Mono", and "A". The thing is, I inadvertently invited it to happen, without realizing I was doing it till too late. Once the door is open, it is very hard to close it again. What I didn't understand at the time, but have a better feel for now, is that kids who want to avoid all the drinking that many undergraduates engage in on campus need some alternative that they find compelling and engaging. For some students, this ends up being a faith-based living arrangement. In that class several of my students resided in faith-based housing. It was really only the one student who committed microaggressions. Another student in the class lived at the same place yet was extremely quiet and spoke but rarely in class, and many of his classmates were also very quiet, so I take it that multiple factors in confluence caused the microaggressions. Based on that experience I conjecture that extroverts among the students, the type of kid who can dominate a class discussion, tend to be less sensitive to the needs of their classmates and are therefore more prone to commit a microaggression.
Then, as I said, I encouraged this through my prompts for their blog posts, asking the students to bring their own experiences into the discussion. I didn't intend that encouragement to enable discussion of religion in our class, but neither did I proscribe such discussion up front. I have continued to use blogging in my teaching and continue to provide prompts that encourage the students to relate the topic at hand to their personal experience. And I haven't had another experience like the one in that fall 2009 class. So I would deem that sort of thing unlikely, though not impossible. I probably could manage it better were it to happen again. It was a bit unnerving that first time. The other issue I want to bring up here is that, apart from deterring the microaggression, reasonable instructors might disagree about what the teacher's role should have been in this case. As I wrote in the linked piece, my prior was that discussion of religion should be cordoned off from the classroom entirely. I can imagine that in a biology course where evolution is discussed, religion might come up and the instructor shouldn't entirely deflect the discussion. Of course, we do have a Department of Religion on campus. I really don't know whether instructors in those courses encourage students to discuss their own religious beliefs. And I don't know whether other instructors would agree that discussion of religion should be cordoned off in the courses they teach. That, in itself, might be an interesting question on which to poll the faculty, though getting at it might also create more enmity than it's worth.
The next anecdote concerns a class I taught in Behavioral Economics in spring 2011, the first time I taught since I retired in summer 2010 and a course that hadn't been offered by the Economics department before then. It had been my intent in designing this class that I would teach it repeatedly thereafter, but my experience that spring was so bad that I taught the course only that one time. I now teach a course on the Economics of Organizations instead. In the behavioral course I had the students read the best selling book Nudge, by Thaler and Sunstein. In that prior campus honors seminar several of the readings were popular books and that worked well, so I was prone to try it again. This time around it created lots of problems.
Let me explain this by beginning with lessons learned and then working back from there. Many students who study economics (and business too) are conservative/libertarian in their politics, have an instinctive distrust of government, and have a strong belief that people earn what they deserve. If such people make a lot of money, it's because they worked hard for it. And if they are poor, therefore, they must be loafers. These students have views that are different from mine. I am more liberal. I think government has a positive role to play, though I do distrust excessive bureaucracy, but I also believe big business can be a threat and needs a counter force to rein it in. I also believe that income-wise where you end up depends a lot on where you start and, hence, it is not only effort that matters for income. Further, if you do fabulously well, you were apt to have had more than your fair share of good luck. In most classes I teach, neither the students' views nor mine come out in the open. I prefer it that way. In this class these views did come out and they clashed. There wasn't microaggression. There was overt hostility and anger. I lost control of the class as a result.
The underlying philosophy in Nudge is something the authors call Libertarian Paternalism, a seeming oxymoron that the authors claim is really quite sensible. The libertarian piece is that people should be free to choose what they want. The behavioral econ tweak of the neoclassical choice model, one that has substantial bite in practice, is that often people make choices passively rather than actively. In other words, they come to accept what is first presented to them and then don't consider possible alternatives. Thus, an agent who sets the default that precedes the choice will influence the actual outcome. The paternalism part comes from setting defaults in a way that achieves socially desirable objectives.
Ahead of time I thought the class needed some intellectual background on paternalism, the old fashioned kind. So I assigned some readings that we'd discuss in class. One of those was this piece by Amy Gutmann. Gutmann is the President of the University of Pennsylvania and a distinguished scholar. That mattered not to the students. Some of the students went ballistic about me assigning this piece. The class went downhill after that.
Let me add some things here that I conjecture about but don't really know. Let's imagine that the conservative political views I described above are correlated positively with microaggressions, something that I don't think is too hard to believe. Let's also consider the microaggressions themselves as symptoms rather than root cause, with the cause stemming from the students' underlying beliefs. Then it will occur to some to want to address the cause directly. And in articulating why, one reason will be to protect students from microaggressions, which is consistent with the Student Code. But another reason will be to benefit the students who are apt to commit the microaggressions, to give them a healthier value system on which to base their judgments and actions. If my experience in that behavioral econ class is any indication, there would be a lot of pushback from some segment of the student population that this is the worst sort of paternalism, liberals imposing their values on conservatives. If something like this is to be expected, the issues are whether an approach that treats only the symptoms can be effective and, if not, how progress can be made otherwise.
The last anecdote is aimed at reminding us that a student being uncomfortable in class can at times be a good thing; the discomfort fosters learning and the student feels it should be overcome via the student's own efforts. The following passage is from a Chinese-American student who took my class last fall, writing in her last post of the semester. She liked the course but was nonetheless uncomfortable in participating in class discussion.
As my post title is about boundaries, an exploration of the boundary would consider what happens on both sides of it. If instructors sometimes deliberately make students uncomfortable for good learning reasons, then student discomfort can't itself be taken as sufficient evidence that there is a problem requiring some remedy. What distinguishes the discomfort caused by microaggressions from the discomfort my student writes about? Are those easy to parse or not? We should be concerned with Type I and Type II errors here.
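The Type I and Type II language can be made concrete with a toy sketch. Suppose, purely hypothetically, that each discomfort report carried a severity score, that some review process had labeled each as a genuine problem or as productive learning discomfort, and that a simple threshold rule decided which reports to flag. Every number, label, and the threshold below are invented for illustration only.

```python
# Toy illustration of Type I and Type II errors in screening
# discomfort reports. All data here are hypothetical.
reports = [
    # (severity score 1-5, genuinely a problem?)
    (5, True), (4, True), (4, False), (3, True),
    (3, False), (2, True), (2, False), (1, False),
]
THRESHOLD = 3  # flag reports with severity >= 3 (an assumed cutoff)

flagged = [truth for severity, truth in reports if severity >= THRESHOLD]
missed = [truth for severity, truth in reports if severity < THRESHOLD]

# Type I error: flagging discomfort that was actually productive.
type_1 = sum(1 for truth in flagged if not truth)
# Type II error: failing to flag a genuine problem.
type_2 = sum(1 for truth in missed if truth)
print(type_1, type_2)
```

Raising the threshold trades Type I errors for Type II errors and vice versa, which is exactly the parsing difficulty at issue: any rule strict enough to catch every microaggression will also sweep in some pedagogically productive discomfort.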
* * * * *
What might we do to make things better? Much activity of this sort probably should be about raising awareness. If possible, awareness would be about more than merely alerting people that there is a problem. It would point them to instances where the problem doesn't manifest or where the problem has been overcome. Then people might emulate these good examples. Things will improve on campus if such emulation happens at scale. But as the good examples might encourage some to deny that there is a problem at all, some documentation of the problem itself is surely necessary as well, just to counter the naysayers.
We live in a world where broad scale communication happens via social media, where online video is the preferred form of communication, and where if a video "goes viral" it can then influence a very large audience. I have made many instructional videos related to the courses I've taught. None of these videos have gone viral. I do not know "the trick" for making a video that will go viral. So what I say next should be taken as aspiration only, not a game plan that were it followed is known ahead of time to succeed.
There are two courses on campus that I am aware of that satisfy the Advanced Composition Gen Ed requirement and entail video production. One of these is Writing with Video. The other is Writing Across Media. Students do projects in these classes. A part of each project is making a video from scratch. This semester, one of my students from last fall is taking Writing with Video. As part of one of his projects, he did a video interview with me and a clip from that became part of his project. Based on this, I suspect many students taking these classes are looking for appropriate subjects for their projects and for people whom they can video interview.
What if the principal investigators of the Racial Microaggressions paper met with the course coordinators of the video production classes and offered to send a solicitation to all the students who were asked to complete the survey that formed the basis of their paper? The solicitation would ask those students whether they'd be willing to appear in a video interview, and to identify friends of another race who'd be willing to appear along with them. It would also note that the video projects might well have a half-life beyond the course where they are created, and so request that the students being interviewed give permission to make the videos public. If such a solicitation produced some positive response, it would enable a social experiment aimed at making things better. (There are a host of logistical issues that would need to be addressed to make this work. I am not going to get into those here. My point is that something like this might be tried, not how to orchestrate it if it were tried.)
Beyond this, what of diversity education for faculty, staff, and graduate teaching assistants that would be of the opt in kind? There are many potential venues for this. The various college teaching academies offer one possibility. CITL has a variety of different workshop series that provide a different possibility. But who with the appropriate expertise would offer these sessions and wouldn't they end up mainly as preaching to the choir?
Scratching my head about this for a while, it occurred to me to try something like a particular training session for learning technologists that I was involved in for the Educause Learning Technology Leadership Program back in 2007. We made a video vignette of a "how not to" kind, deliberately campy in style, so that when it was shown to the group in attendance it elicited quite a bit of laughter. After the showing of the video was completed, the group was instructed to find all the errors that were made during the video. It worked remarkably well in that setting. So my question is: might something similar work for diversity education? I should note that, though we didn't plan it at the time, the video was used again in later institutes, after the people who were involved in making it were no longer in attendance.
At issue here is whether the real felt pain from actual microaggressions gets diminished this way, so that the problem the training is trying to address comes to be disregarded. If that happened, of course, then this sort of approach would fail in a fundamental way. So there would be a risk in trying this, no doubt. But there seems to me an upside potential as well. If the videos were sufficiently illustrative of what is at issue, then the campy humor would help to make the audience larger and the message better understood. For that reason it seems to me worth trying, though I will admit I'm entirely uncertain as to who would get the ball rolling.
Let me wrap up. Gandhi said, "You must be the change you wish to see in the world." To this I'd add that up front most of us don't know what it is we want. We need to think it through, try out some tentative possibilities, and then go from there. So let's talk about this some more, let's try some things while we're doing that, and in so doing let's make things better by practicing the art of the possible.
I suspect that many instructors will never look at the home address in Banner. Will they be aware of what category each of their students is in, simply by an eyeball test? In yesterday's NY Times, Nicholas Kristof's column, Our Biased Brains, argues that very early on we learn about racial identities and that most of us (African-Americans are the exception) develop a preference for our own race. No doubt we are conscious of race, but do we make mistakes from time to time in assigning the category? And might it be that our own mental categories don't coincide with the official categories the campus maintains?
One troublesome aspect of the official categories is that International Student is a designation that in theory encompasses all the races but in practice has come to imply East Asian; witness the piece from Inside Higher Ed, The University of China at Illinois. In other contexts the expression "they all look alike to me" is offensive. Yet for the unwitting instructor (I include myself here) it is quite easy to mistake an Asian-American student for an International student and vice versa.
Another category I struggle with is Hispanic. There is first the Hispanic-Latino naming dispute. The campus seems to be hedging its bets on this one: one of the race categories is Hispanic, yet in area studies there is the Department of Latina/Latino Studies. More to the point for me is not really knowing who counts in this category and how unreliable my mental model of who counts is. To illustrate, I did a Google Image Search on Sephardic Jew. Below is one of the pictures I found, apparently of a well known actor, Hank Azaria, though I was not familiar with him. Is he Hispanic? My immediate answer is, maybe. He was born in Queens, where I grew up. His grandparents apparently came from Greece. Is that what's decisive or is it largely immaterial? In the process of looking via Google I learned that Jerry Seinfeld is a Sephardic Jew. I do not consider Jerry Seinfeld to be Hispanic.
These puzzles led to further mental associations for me. I thought of the film version of West Side Story. That movie came out when I was a kid and was something of a big deal, in part because Leonard Bernstein did Young People's Concerts that were on TV in NYC, and the music from West Side Story was sometimes featured in those. Anyway, in the movie George Chakiris (born in Ohio, of Greek ancestry) plays Bernardo, the leader of the Puerto Rican gang, The Sharks. And Natalie Wood (born in California, of Russian ancestry) plays Maria, Bernardo's sister. Then I started to think of other films where White actors play characters of other races. One I came up with immediately, because I saw part of it on TV recently while doing my workout on the elliptical, was Remo Williams, where Joel Grey plays the Korean martial arts master, Chiun. That movie is an enjoyable farce. What about in a more serious setting? Are there movie examples of that? I thought of A Passage to India, which has at its core the tensions across race and culture in the presence of colonial rule. The chameleon-like actor Alec Guinness plays an Indian character, Professor Godbole. I'm sure the reader of this piece can come up with many other such examples.
For me, the effect of these examples is to blur what it means to be of one race or another. In contrast, the type of bean counting that the campus does on enrollments (perhaps mandated by law, of that I'm unsure) seems to sharpen the distinction between the races. I then asked myself a question I wasn't able to fully answer. Which do we want, blurring or sharpening? My partial answer is this. Most people identify themselves with some subgroup of all human beings, and that subgroup serves as their primary identifier. In some cases that subgroup may be racially defined, in which case sharpening the distinction is preferred. In other cases the subgroup may be defined by something other than race, religion for example, in which case the racial distinctions should be blurred. But then, trying to apply this thinking to myself, I'm not even sure what my own primary subgroup is. I feel like something of an outsider to any one category. Though my parents were Jewish, as were a majority of the kids I went to public school with, I'm completely non-observant now. Similarly, in some ways I feel strongly that I'm an academic and defined by my professional identity, but I don't try to publish in refereed journals anymore and haven't for some time. How many people feel like outsiders this way? I don't know. I do know that I don't like to be lumped into the single category, White. I hope with this to convince the reader that I feel uneasy with the entire discussion, but that these feelings are useful to keep in mind as we turn to the next part.
* * * * *
When I started at Illinois back in 1980, upwards of 90% of the undergraduate students were from within the state. Most came from the more affluent suburbs of Chicago. There was also a sizable population from downstate. The city of Chicago itself was underrepresented. The only real way I learned how this mattered was via one of my fellow assistant professors in the Economics department, who happened to be the daughter of the Belgian Ambassador to the United States. She said our students were too provincial. That is not a word I used regularly in my working vocabulary, which is perhaps why I remember it now. Also, as someone who grew up in NYC I had substantial fear of people from the Midwest, which my 4 years in graduate school at Northwestern didn't eradicate completely. So I accepted that conclusion without wondering how she reached it. It occurs to me now that she must have directly experienced some intolerance; a woman teaching economics was uncommon then, and somebody who spoke English with a French accent was even rarer here.
Fast forward twenty years. I was the representative from my campus on the CIC Learning Technology Group. The CIC is the academic arm of the Big Ten but also includes the University of Chicago, which is not in the Big Ten. At the time the group was sponsored by the Provosts. The various representatives in the group were either Associate Provosts for Undergraduate Education or, like me, the leading Educational Technologist for their campus. It was quite a collegial group and I became friendly on an individual basis with many of the members. In a sidebar conversation, my colleague from UIC, our sister campus in Chicago, told me that my campus was not hospitable to African-Americans. I was already somewhat aware of this via discussions on my campus about the "digital divide" that I was engaged in. But I hadn't realized the issues impacted the entire university, not just my campus. Regarding selectivity and the prestige of a degree, my campus was ranked much higher than UIC. But qualified African-American students might nonetheless prefer attending UIC. That was a real issue. It probably still is.
The above two anecdotes are there to show that issues with lack of collegiality along racial/cultural lines have been with us on campus at least since I began here as a faculty member. Those issues are more prominent now. There are several reasons why. One is the change in demographics of our undergraduate student population toward a much larger share of international students, coupled with a much greater reliance on tuition as a revenue generator for the university. A second is the prominent role that the Internet plays and its enabling of hate speech, even in circumstances where race is not really at issue, such as whether or not to cancel classes due to cold weather. A third is the greater attention being given to race at a national level as a consequence of the senseless killings of black men by police that have made the news.
I am a fairly regular reader of the New York Times Opinion Page. Among the regular columnists, Charles Blow is the one who writes regularly about race issues, often taking on the Republican attack machine in the process. It might be expected that an African American columnist will write on race issues, but as a regular diet of columns I find this problematic. So it occurred to me that Blow should swap columns with somebody else at the Times, Joe Nocera for example. Nocera has written a spate of columns on the NCAA as evil cartel. Imagine if for a month or so Blow wrote about the NCAA (race could certainly enter the discussion there, but the constraint would be a connection to NCAA issues) and Nocera wrote about race relations, preferably entirely outside the world of sports. The alternative perspectives would be helpful to readers.
In that spirit I am writing about issues that were taken up in a recent campus report on Racial Microaggressions. While I have written on race issues before, it is far from my usual fare and it is not easy for me to do. This post is taking much longer to write than is typical for me. It almost certainly would be a good thing if other voices who don't normally talk about race began to do so in a thoughtful way, and did so publicly. Perhaps this piece will encourage others to do likewise.
Part of the reason to do this is to show that even like-minded commentators disagree on occasion. For example, I took issue with some of the recommendations in the report, particularly its call for mandatory diversity training for all on campus. The campus doesn't do mandatory training well. It does a poor job of it. The Ethics Training that all faculty and staff must complete on an annual basis makes people resentful of the training and doesn't make them any more ethical in the performance of their jobs. I believe that newly entering students get Alcohol Awareness training, but that clearly doesn't work. Even the training that the IRB administers for anyone doing research on human subjects is heavy handed. One has the feeling, taking this training, that part of it is there for the campus to avoid liability. Better educational approaches wouldn't worry about that at all, but would instead ask what activities best get the trainees to appreciate the issues at hand. This calls for the trainee to make some contribution to the product, as a co-author contributes to writing a paper. Campus training doesn't allow for that.
In the process of doing background reading for writing this piece I became aware of the Office of Inclusion and Intercultural Relations. Among other activities for students, they offer an I-Connect Diversity and Inclusion Workshop for all first-year students on campus. I could not find on their Web site how long this office has existed nor could I find how many cohorts of students have been through the I-Connect workshops. It would be good to have some sense of the efficacy of this workshop. Lacking that here, I will simply move on. But let me make one point first. This office is part of Student Affairs. That may explain, in part, why I was unaware of it. Faculty tend to be more knowledgeable about the academic affairs side of the house.
In the concluding section I suggest several possible more bottom up approaches that potentially would be better than mandatory training. These are things that might be tried. I really don't know what will work. I'm not sure anybody else knows that either. But it seems to me there are many possible activities that could energize the community in a good way around race issues. So I certainly don't want to argue that we should just sit on our hands. I only want to discourage top down mandates, which tend to be heavy handed and are therefore ineffective. In the remainder of this section I'd like to recount a few other anecdotes to illustrate other dimensions of the issues.
The first of these is my own experience witnessing something very much like a microaggression, really a series of these, mainly coming from one student in particular. This happened in a seminar for the Campus Honors Program that I taught in fall 2009. I wrote about it at the end of the semester in a post called Theism - "Pan", "Mono", and "A". The thing is, I inadvertently invited it to happen, without realizing I was doing it till too late. Once the door is open, it is very hard to close it again. What I didn't understand at the time, but have a better feel for now, is that kids who want to avoid all the drinking that many undergraduates engage in on campus need some alternative that they find compelling and that engages them. For some students, this ends up being a faith-based living arrangement. In that class several of my students resided in faith-based housing. It was really only that one student who committed microaggressions. Another student in the class lived at the same place, yet that student was extremely quiet and spoke but rarely in class, and many of his classmates were also very quiet. So I take it that multiple factors in confluence caused the microaggressions. Based on that experience I conjecture that extroverts among the students, the type of kid who can dominate a class discussion, tend to be less sensitive to the needs of their classmates and are therefore more prone to commit a microaggression.
Then, as I said, I encouraged this through my prompts for their blog posts, asking the students to bring their own experiences into the discussion. I didn't intend that encouragement to enable discussion of religion in our class, but neither did I proscribe such discussion up front. I have continued to use blogging in my teaching and continue to provide prompts that encourage the students to relate the topic at hand to their personal experience. And I haven't had another experience like the one in that fall 2009 class. So I would deem that sort of thing unlikely, though not impossible. I probably could manage it better were it to happen again. It was a bit unnerving that first time. The other issue I want to bring up here is that, apart from deterring the microaggression, reasonable instructors might disagree about what the teacher's role should have been in this case. As I wrote in the linked piece, my prior was that discussion of religion should be cordoned off from the classroom entirely. I can imagine that in a biology course where evolution is discussed religion might come up, and the instructor shouldn't entirely deflect the discussion. Of course, we do have a Department of Religion on campus. I really don't know whether instructors in those courses encourage students to discuss their own religious beliefs. And I don't know whether other instructors would agree that discussion of religion should be cordoned off in the courses they teach. That, in itself, might be an interesting question on which to poll the faculty, though getting at it might also create more enmity than it's worth.
The next anecdote concerns a class I taught in Behavioral Economics in spring 2011, the first time I taught since I retired in summer 2010 and a course that hadn't been offered by the Economics department before then. It had been my intent in designing this class that I would teach it repeatedly thereafter, but my experience that spring was so bad that I taught the course only that one time. I now teach a course on the Economics of Organizations instead. In the behavioral course I had the students read the best selling book Nudge, by Thaler and Sunstein. In that prior campus honors seminar several of the readings were popular books and that worked well, so I was prone to try it again. This time around it created lots of problems.
Let me explain this by beginning with lessons learned and then working back from there. Many students who study economics (and business too) are conservative/libertarian in their politics, have an instinctive distrust for government, and have a strong belief that people earn what they deserve. If these people make a lot of money, it's because they worked hard for it. And if they are poor, therefore, they must be loafers. These students have views that are different from mine. I am more liberal. I think government has a positive role to play, though I do distrust excessive bureaucracy, and I also believe big business can be a threat and needs a counter force to rein it in. I also believe that, income-wise, where you end up depends a lot on where you start and, hence, it is not only effort that matters for income. Further, if you do fabulously well, you were apt to have had more than your fair share of good luck. In most classes I teach neither the students' views nor mine come out in the open. I prefer it that way. In this class these views did come out and they clashed. There wasn't microaggression. There was overt hostility and anger. I lost control of the class as a result.
The underlying philosophy in Nudge is something the authors call Libertarian Paternalism, a seeming oxymoron that the authors claim is really quite sensible. The Libertarian piece is that people should be free to choose what they want. The behavioral econ tweak of the neoclassical choice model, one that has substantial bite in practice, is that often people make choices passively rather than actively. In other words, they come to accept what is first presented to them and then don't consider possible alternatives. Thus, an agent who sets the default that precedes the choice will influence the actual outcome. The paternalism part comes from setting defaults in a way that achieves socially desirable objectives.
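The default-setting mechanism is easy to make concrete in a toy simulation. This is my own hypothetical illustration, not anything from Nudge itself, and all the numbers (the share of passive choosers, the preference rate) are made up for the sake of the sketch: suppose some fraction of choosers passively accept whatever default is presented, while the rest choose according to their actual preference.

```python
import random

def enrollment_rate(default_is_enroll, passive_share, active_pref_enroll,
                    n=100_000, seed=0):
    """Toy model of default effects: passive choosers accept the default,
    active choosers pick what they actually prefer."""
    rng = random.Random(seed)
    enrolled = 0
    for _ in range(n):
        if rng.random() < passive_share:
            # Passive chooser: takes whatever the default is.
            enrolled += default_is_enroll
        else:
            # Active chooser: follows own preference, regardless of default.
            enrolled += rng.random() < active_pref_enroll
    return enrolled / n

# Same population, same preferences -- only the default differs.
opt_in = enrollment_rate(default_is_enroll=False, passive_share=0.6,
                         active_pref_enroll=0.5)
opt_out = enrollment_rate(default_is_enroll=True, passive_share=0.6,
                          active_pref_enroll=0.5)
```

With these invented numbers, the opt-in design yields roughly 20% enrollment while the opt-out design yields roughly 80%, even though nobody's preferences have changed; that gap is the "substantial bite in practice" of passive choice.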
Ahead of time I thought the class needed some intellectual background on paternalism, the old fashioned kind. So I assigned some readings that we'd discuss in class. One of those was this piece by Amy Gutmann. Gutmann is the President of the University of Pennsylvania and a distinguished scholar. That mattered not to the students. Some of the students went ballistic about me assigning this piece. The class went downhill after that.
Let me add some things here that I conjecture about but don't really know. Let's imagine that the conservative political views I described above are correlated positively with microaggressions, something that I don't think is too hard to believe. Let's also consider the microaggressions themselves as symptoms rather than root cause, with the cause stemming from the students' underlying beliefs. Then it will occur to some to want to address the cause directly. And in articulating why, one reason will be to protect students from microaggressions, which is consistent with the Student Code. But another reason will be to benefit the students who are apt to commit the microaggressions, to give them a healthier value system on which to base their judgments and actions. If my experience in that behavioral econ class is any indication, there would be a lot of pushback from some segment of the student population that this is the worst sort of paternalism, liberals imposing their values on conservatives. If something like this is to be expected, the issues are whether an approach that treats only the symptoms can be effective and, if not, how progress can be made otherwise.
The last anecdote is aimed at reminding us that a student being uncomfortable in class can at times be a good thing; the discomfort fosters learning and the student feels it should be overcome via the student's own efforts. The following passage is from a Chinese-American student who took my class last fall, writing in her last post of the semester. She liked the course but was nonetheless uncomfortable in participating in class discussion.
At first I was worried about the blogging since I feel that I am a horrible writer. I dreaded writing essays ever since middle school. It is usually tough for me to formulate my thoughts and make them flow. I would sometimes spend several hours on these posts, but most of that time, I was thinking about what to write and how to start writing about it. After that obstacle, it was a bit easier. Despite the difficulty, this was actually my favorite part of the course because I was pushed to make connections between my real life experiences and the economics behind it. If a professor were to just teach me topics like transfer pricing and the Shapiro Stiglitz model with definitions and graphs, there is no way I would recall the material several weeks from now, but bringing it to a personal level in these blog posts really helps with absorbing the concepts. Furthermore, I really enjoyed the structure of the class and the fact that it was discussion-oriented. Although I never talked, I felt engaged in the topics discussed and I was able to absorb a lot of information aside from days when I did not get much sleep the night before. You may have seen some "glazed" looks from me those times (I apologize for that). Moreover, the reason why I did not chime in as much as other students is because I either felt that I could not relate or was too shy to contribute. There were multiple times when I had wanted to but remained silent because I have a fear of being wrong in front of people even in the most trivial situations. This is due to a somewhat traumatic experience that happened in the past, but I am getting better! And hopefully I will continue to do so in the midst of searching for full-time jobs.
As my post title is about boundaries, an exploration of the boundary would consider what happens on both sides of it. If instructors sometimes deliberately make students uncomfortable for good learning reasons, then student discomfort can't itself be taken as sufficient evidence that there is a problem requiring some remedy. What distinguishes the discomfort caused by microaggressions from the discomfort my student writes about? Are those easy to parse or not? We should be concerned with Type I and Type II errors here.
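To make the Type I/Type II language concrete, here is a small hypothetical sketch (the sample judgments below are invented purely for illustration): treat each instance of student discomfort as either a genuine problem needing remedy or benign, learning-related discomfort, and treat our response as either flagging it or not. A Type I error flags benign discomfort as a problem; a Type II error misses a genuine one.

```python
def error_rates(cases):
    """cases: list of (actual_problem, flagged) pairs of booleans.
    Type I rate = share of benign cases we flagged (false positives).
    Type II rate = share of genuine problems we missed (false negatives)."""
    type1 = sum(1 for actual, flagged in cases if flagged and not actual)
    type2 = sum(1 for actual, flagged in cases if actual and not flagged)
    benign = sum(1 for actual, _ in cases if not actual)
    problems = sum(1 for actual, _ in cases if actual)
    return (type1 / benign if benign else 0.0,
            type2 / problems if problems else 0.0)

# Hypothetical judgments: (was it really a microaggression?, did we flag it?)
sample = [(True, True), (True, False), (False, False),
          (False, True), (False, False)]
t1, t2 = error_rates(sample)  # t1 = 1/3, t2 = 1/2
```

The point of the sketch is only that the two error rates move in opposite directions: the more readily we treat any discomfort as a problem, the lower the Type II rate but the higher the Type I rate, and vice versa.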
* * * * *
What might we do to make things better? Much activity of this sort probably should be about raising awareness. If possible, that awareness raising would be about more than merely alerting people that there is a problem. It would point them to instances where the problem doesn't manifest or where the problem has been overcome. Then people might emulate these good examples. Things will improve on campus if such emulation happens at scale. But since the good examples might encourage some to deny that there is a problem at all, some documentation of the problem itself is surely necessary as well, just to counter the naysayers.
We live in a world where broad scale communication happens via social media, where online video is the preferred form of communication, and where a video that "goes viral" can influence a very large audience. I have made many instructional videos related to the courses I've taught. None of these videos has gone viral. I do not know "the trick" for making a video that will go viral. So what I say next should be taken as aspiration only, not a game plan that, were it followed, is known ahead of time to succeed.
There are two courses on campus that I am aware of that satisfy the Advanced Composition Gen Ed requirement and entail video production. One of these is Writing with Video. The other is Writing Across Media. Students do projects in these classes. A part of each project is making a video from scratch. This semester, one of my students from last fall is taking Writing with Video. As part of one of his projects, he did a video interview with me and a clip from that became part of his project. Based on this, I suspect many students taking these classes are looking for appropriate subjects for their projects and for people whom they can video interview.
What if the principal investigators of the Racial Microaggressions paper met with the course coordinators of the video production classes, offering to send a solicitation to all the students who were asked to complete the survey that formed the basis of their paper? The solicitation would ask those students whether they'd be willing to appear in a video interview and to identify friends they might have of another race who'd be willing to appear along with them. It would also note that the video projects might very well have a half-life beyond the course where the projects are created, and so request that the students being interviewed give permission to make the videos public. If such a solicitation produced some positive response, this would enable a social experiment aimed at making things better. (There are a host of logistical issues that would need to be addressed to make this work. I am not going to get into those here. My point is that something like this might be tried, not how to orchestrate it if it were tried.)
Beyond this, what of diversity education for faculty, staff, and graduate teaching assistants that would be of the opt-in kind? There are many potential venues for this. The various college teaching academies offer one possibility. CITL has a variety of workshop series that provide another. But who with the appropriate expertise would offer these sessions, and wouldn't they end up mainly as preaching to the choir?
Scratching my head about this for a while, it occurred to me to try something like a particular training session for learning technologists that I was involved in for the Educause Learning Technology Leadership Program back in 2007. We made a video vignette of a "how not to" kind, deliberately campy in style, so that when it was shown to the group in attendance it elicited quite a bit of laughter. After the showing of the video was completed, the group was instructed to find all the errors that were made during the video. It worked remarkably well in that setting. So my question is: might something similar work for diversity education? I should note that, though we didn't plan it at the time, that video was used again in later institutes, after the people who were involved in making it were no longer in attendance.
At issue here is whether the real felt pain from actual microaggressions gets diminished this way, leading people to disregard the problem that the training is trying to address. If that happened, of course, then this sort of approach would fail in a fundamental way. So there would be a risk in trying this, no doubt. But there seems to me an upside potential as well. If the videos were sufficiently illustrative of what is at issue, then the campy humor would help make the audience larger and the message better understood. For that reason it seems to me worth trying, though I will admit I'm entirely uncertain as to who would get the ball rolling.
Let me wrap up. Gandhi said, "You must be the change you wish to see in the world." To this I'd add that up front most of us don't know what it is we want. We need to think it through, try out some tentative possibilities, and then go from there. So let's talk about this some more, let's try some things while we're doing that, and in so doing let's make things better by practicing the art of the possible.
Wednesday, May 06, 2015
The virtues in making it up as you go along
When you come to the fork in the road....
A good background on Yogi Berra quotes in general and this one, in particular.
I don't like scripts that I have to follow. I don't even like them when cooking, where I probably should follow a recipe. I definitely don't like them in teaching. I don't really like having a GPS, or that voice in Google Maps, at least most of the time. I want to do what I want to do. I don't want to be told what to do by somebody else, definitely not by some machine. If there is a chance that I might screw up (there always is) and I'm aware of it ahead of time (not nearly as often) then I might want some help right before the fateful moment. Making a plan in advance that I will adhere to so as to avoid the screw ups, however, is overkill. Getting a general idea, sure. You don't want to do anything totally blind. Filling that in with detail? Absolutely not!
Until a few days ago I knew this about me, but I didn't understand why. Now I have a better idea. That came from reading this paper by Bruffee (1984), Collaborative Learning and the "Conversation of Mankind." Let me explain how that came about in a bit. First let me note that Bruffee was a teacher of writing and his piece was meant at the time for others who taught English. The rest of us, who teach whatever it is that we teach, could learn a thing or two about how to teach our courses better if we first asked, how would a teacher of Writing go about the teaching task in my class? Only after chewing on that one for a while and coming up with some spark on something new to try should you then ask, now what do I have to do to modify the approach to fit my subject matter?
Bruffee's paper is about that triad: conversation, thinking, and writing. And his key point (I'm not sure it is original to him; Vygotsky is mentioned somewhere in the paper, but I haven't read Vygotsky) is that these are really all the same thing, conversation, a social concept that requires more than one participant. Thinking is internalized conversation between the thinker and imagined others. So thinking, often envisioned as a solitary act, is really a social activity and it proceeds according to conventions defined by social discourse. Writing is then externalizing the thinking, bringing the conversation of the mind out to where others can participate in it.
I so liked this framing. It definitely captures what I do. Indeed, it is why I like the slow blogging approach to writing - it is conversational at root. It may explain why I struggle with digest forms of email, such as from Inside Higher Ed and The Chronicle, where each blurb is not conversation but instead more like an ad for some conversation to follow. It's also why I struggle with micro forms like Twitter. If you have all these different and disjoint blurbs running around in your head, is there a unifying conversation in which to embed them all? Most of the time to me it just seems like a lot of noise. Perhaps there is visceral appeal in an individual item. Lead us not into temptation. There's already too much of that with the sidebar in Facebook.
I've now reached the fork in the road in this essay. Are we human beings hard wired for conversation, with each of us thrilled to be in a discussion where all of the participants can hold up their end, and seem to be doing just that, but are also sympathetic to the others and so will help if one stumbles? Or is it that some people have a predilection for conversation, Bruffee for one, me for another, while other people get their jollies in some other way? My story is better if conversation is fundamental to the human condition. But if that is true it remains a puzzle why more people don't engage in conversation more often. My best explanation is that people often act out of fear first and foremost. In this case they are fearful that they can't hold up their end of the discussion. This makes sense to me for shy people. For the gregarious types, it must be something else, though I suspect many of them don't venture beyond very familiar terrain, so that most of their conversations don't go anywhere.
The high point for me in having actual conversation was in college at 509 Wyckoff Road, during my junior and senior years at Cornell. It's not that I didn't have conversations in high school. But I had fewer friends to have real talks with so there was less variety in what we talked about, and I don't remember any conversations outside of school in a group setting, while in the kitchen/eating area at Wyckoff Road that was the norm. It may be that a few close friends is all you need to have a really good talk, though I like to explore different things and I need others to help me get there. With one close friend you can go deep, but it's less likely that you go wide as well.
My first few years as an administrator, running SCALE and then CET, I was able to spend a good chunk of the time in conversation. The Espresso Royale on Sixth and Daniel was my unofficial other office. I got to talk with a wide variety of people there - faculty, other administrators, teaching with technology types, and once in a while pure technology types too, though many of those discussions happened someplace north of Green Street. It was either good fortune or me shaping the job to do what I liked. Over time, there was less of that and more of the dreaded time suck: committee meetings. My calendar became more crowded. My enjoyment at work started to wane. One real reason for starting this blog was to get back that sense of joy. If I couldn't have the type of conversations I'd like to have with others very often, then I'd have them with myself.
There is a question for me whether I would have stuck with the activity had I kept the blog private, in essence a journal or a diary. Bruffee helped me to understand that one too. Most of the stuff in this blog is potentially interesting to people I know, or people I know of, or people where I've read what they've written and I'm commenting on that. This doesn't mean they'll want to read my stuff or like it if they do read it. But there is that possibility. So in that way the blogging is holding up my end of a larger conversation. A private journal is something else entirely.
Now let me get back to making it up as you go along, which I do quite regularly in writing. The blogging I do is different in that way from how I would write up an economics paper aimed at publication in a refereed journal. For that sort of thing the thinking would precede the writing. I would spend an inordinate amount of time working through a model, perhaps three months or more. During that time the model became my universe and I'd try to understand everything in it, intuit what my main results would be, and work out how to prove them. Only as I neared completion of this modeling part would I begin on the writing. So the writing was other than learning. It was communicating what I had already learned in a way that might be intelligible to others.
It is quite different with the blogging and with some of my other mental sojourns that I end up not writing about at all. There it is just exploration via conversation, and if the exploration seems promising, I'll start writing then, still with a lot to discover ahead of me. I hope it's now obvious why you have to make it up as you go along this way. If you do that, there is something still to talk about. The mystery isn't yet solved. The verdict hasn't yet been rendered. The outcome is not known. (I'm sure there are yet more metaphors for describing the uncertainty in the process, but I hope the reader gets the idea.) This is not the skillful writer holding back information till the last possible moment to build suspense. This is the writer himself not knowing where things will end up, other than some vague idea that he will not insist on if he misses that mark. Keeping the internal conversation going is a way to find out how things will turn out.
This allows the discussion to cover familiar themes, making only a mild departure from prior conversations, while still absorbing me because it has a freshness that is captivating. I could not produce a blog post by first making a detailed outline of what I want to cover and then writing paragraph by paragraph, sentence by sentence, adhering to that outline. An individual sentence or two might be better that way, since my full attention could be brought to how to shape what it is that I'm going to say. But my commitment to the activity would wane well before I'd finish. That approach with outlines would end up killing my interest in writing.
There's one other thing that making it up as you go along does for you. If you produce something tolerable that way, it builds some confidence in you to try it again. Being able to improvise in the moment, I suspect, whether as a jazz musician in a band, a comedic actor doing a sketch with fellow actors, a painter trying a new technique on a canvas, or a slow blogger like me writing on a different theme, is an act of confidence. If you try it, a spark will come. Logically, that's not necessarily true. There are those excruciating times where you fall flat on your face. As an empirical regularity, however, for me with the writing it seems true most of the time, especially when it is open-ended when the writing will stop, meaning there is no day job or other regular obligation that takes precedence.
I do throw out pieces that I've started but that don't seem to have enough oomph in them to make it worthwhile to finish. But most of the time I do finish. And when I come back later to reread a piece, I frequently like what I'm reading. Maybe that's narcissism, though I tend to be quite critical of my own performance when I find it below par. It's hard to argue for objectivity on this score. But I'd wager that anyone who practiced making it up as they went along, and did it for quite some time, would start to like the results of what they produced. And the reason, in case it isn't clear, is that when you have a good conversation it comes to a good conclusion.
But you also better watch out. I'm cooking tonight!
Sunday, May 03, 2015
On Social Issues Is There Ever A "Right Answer"....
....or is there only "different points of view?"
In his column last Friday Paul Krugman writes:
The 2016 campaign should be almost entirely about issues. The parties are far apart on everything from the environment to fiscal policy to health care, and history tells us that what politicians say during a campaign is a good guide to how they will govern.
It seems to me that this paragraph is uncontroversial. The parties are far apart. Would anyone question that? One might imagine going down the list of issues, noting the current positions, and then asking: can the parties reconcile on this one? What would it take to achieve such reconciliation? If there were a right answer on a particular issue, and if one of the two positions held were that right answer, one might expect that over time there'd be convergence to it, as evidence accumulated to support that position. But there are reasons to believe this won't happen and, as Krugman writes, people seem free to deny the evidence or to deny their own prior articulation of what they believed. Further, because the focus here is social issues, there really can't be controlled experiments to test hypotheses and disprove the false ones.
But there are other reasons why there might not be convergence. The most obvious of these is that neither position is the answer, perhaps because each position is articulated in a straightforward way but the right answer is complicated. Or it might be that the right answer is not that complex but it is other than the two positions as articulated. In this case the evidence will never point to either of the positions convincingly.
I tend to think there is still a different reason that explains the lack of convergence most of the time. It is that we don't know what it is we want because we can't imagine it in the absence of seeing it. So we are not rational in the way the previous reasons imply. And this lack of rationality keeps us from learning about a right answer, because in the absence of rationality there isn't one, as a right answer depends fundamentally on what it is that we want.
An alternative that recognizes this dilemma would be to view our beliefs as evolving based on what we learn from experience and that everything we hold true at present is contingent on current beliefs. Alas, most people seem to find this approach quite unsettling. They want things to be more certain.
I wrote about this issue some years ago as I tried to write a book on the precepts that should underlie undergraduate education, which I called Guessing Games, because developing good intuition is at the heart of critical thinking. I ultimately gave up on the book writing because I became aware that I was lecturing in many of the essays and readers don't like to be lectured at. I then didn't know how to reconcile the points I wanted to make without lecturing. I still don't know how to do that. Indeed, this post might very well seem like lecturing, though I'm less concerned about doing it in individual blog posts.
The passage below is from the chapter Just The Facts and Guessing. It sets up my concluding section.
* * * * *
There are many who are not well versed in the scientific method, who nonetheless invoke the mantra that is the title of this chapter – just the facts. They too are aiming for objectivity, though sometimes I fear they have an additional agenda, to close off further argument. Anecdotes are evidence. They may not be the best sort of evidence, especially when more systematic evidence is available, in which case relying on anecdotes exclusively is silly. But throwing them out is bias. When the systematic evidence points one way and the anecdote another, there is learning in carefully reconciling the two. Likewise, the expressed opinion of a friend, colleague, or opponent is evidence too. The vast majority of people are rational and thoughtful. When they express an opinion that appears contrary, they are apt to have access to information that you don’t have or to have related experiences that are unknown to you. Ignoring the opinion then is inconsistent with weighing all the evidence. Of course, we are awash in polemical argument in the political arena, where often the goal is to seek political advantage rather than to illuminate the truth. So there is a tendency to discount if not entirely ignore opinions of the other side. To the extent that politics is like sports and we voters are like fans, perhaps that’s ok. Outside of sports and politics, however, it’s a problem.
The best articulation of the principle I’ve seen is by Steven Sample in his book The Contrarian’s Guide to Leadership. The first chapter is on Thinking Gray, which means several things all at once. First, don’t make a decision before you have to, and don’t tip your hand as to how the decision will eventually come out, so as to encourage others to provide you with evidence that you will weigh fairly. Second, actively encourage argument and debate about the decision so different points of view can be well articulated. Third, while the first two are really external behaviors, this one is truly internal to yourself. It’s not that you have a quickly formed opinion that you are not sharing because of the first two reasons. It’s that you maintain neutrality on the issues until judgment is needed. You do this so you can make the best and therefore unbiased judgment when it’s time for that. As Sample says, this is contrary to the way most of us behave, because we’ve been taught to make snap judgments.
F. Scott Fitzgerald captured something similar to thinking gray when he observed that the test of a first-rate mind is the ability to hold two opposing thoughts at the same time while still retaining the ability to function.
Thinking gray is contrary to what I was taught in formal economic theory/statistics, Bayesian Decision Theory. This theory admits an element of subjectivity, captured in the decision maker’s beliefs, represented by the prior distribution over the unknown parameter. The theory then explains how beliefs get updated based on experience, generating a posterior distribution. That part is pure statistics. The economics comes in when experience is driven by choice, call it a consumption choice, and when different consumption choices have varying degrees of informativeness. For example, in consuming a drug, a low dose will have little effect simply because it is low, while a higher dose may have substantial effect if the drug actually works. So taking a high dose is more informative than taking a low dose. The economic theory prediction is that early on, part of what drives choice is experimental consumption, to encourage learning. Ultimately the choice settles down to what is optimal given beliefs. (In some cases beliefs reach the truth with certainty, but there can be instances where beliefs are stable with residual uncertainty.) This approach can rationalize the binge drinking of teenagers.
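The informativeness point in the drug-dose example can be sketched numerically with a simple discrete Bayesian update. This is an illustrative toy, not anything from the chapter: the dose-response probabilities (an effect occurs with probability theta after a high dose, but only 0.1 + 0.1*theta after a low dose) and the uniform grid prior are assumptions chosen for the example.

```python
# Toy Bayesian updating over an unknown drug effectiveness theta.
# Assumed model (illustrative only): after a high dose an observable
# effect occurs with probability theta; after a low dose, with
# probability 0.1 + 0.1*theta, so the low-dose outcome says little
# about theta. Observing one "effect" should therefore shrink
# uncertainty more under the high dose.

def posterior(prior, likelihood):
    """Bayes' rule on a discrete grid: posterior proportional to prior * likelihood."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def spread(grid, dist):
    """Posterior standard deviation, a crude measure of residual uncertainty."""
    mean = sum(t * p for t, p in zip(grid, dist))
    var = sum(p * (t - mean) ** 2 for t, p in zip(grid, dist))
    return var ** 0.5

grid = [i / 100 for i in range(1, 100)]   # candidate values of theta
flat = [1 / len(grid)] * len(grid)        # uninformative prior

# Update beliefs after observing an effect from one high dose vs. one low dose.
post_high = posterior(flat, [t for t in grid])            # P(effect | theta) = theta
post_low = posterior(flat, [0.1 + 0.1 * t for t in grid]) # P(effect | theta) = 0.1 + 0.1*theta

print(f"prior sd:               {spread(grid, flat):.3f}")
print(f"low-dose posterior sd:  {spread(grid, post_low):.3f}")   # barely tighter than prior
print(f"high-dose posterior sd: {spread(grid, post_high):.3f}")  # tightest of the three
```

The high dose shrinks the posterior the most, which is the sense in which it is the more informative experiment; a decision maker who values learning would tilt early choices toward it, which is the experimental-consumption prediction.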
Really, the two approaches are distinct. Sample is contemplating a large decision that once made remains fixed for quite a while. The theory of experimental consumption focuses on repeated decisions of a smaller nature. The information gathering that Sample has in mind is also different from the statistical approach in Bayesian Theory. One metaphor that might help in understanding the Sample view is to imagine having to understand a three-dimensional object from getting to view a finite number of two-dimensional snapshots of the object, each taken from a different perspective. Another snapshot from essentially the same perspective doesn’t really help. One from a new perspective helps a lot. Sample doesn’t argue that we get to choose the perspectives from which we get to take the snapshots. He just argues that we have a better understanding with more perspectives.
Much as I like Sample, however, he is an engineer by training, and he leaves you with the impression that after all the information is in, the situation and the high intelligence he brings to it more or less dictate the solution he comes up with. Mostly, I don’t think it works that way. Prior disposition and point of view matter for these decisions. Consider the episode from The West Wing called The Supremes, with Glenn Close as Judge Evelyn Baker Lang (very left of center) and William Fichtner as Judge Christopher Mulready (just as far right of center). Mulready exemplifies the F. Scott Fitzgerald conception of a first-rate mind; he is able to articulate the Liberal view better than the staffers at the White House while he comes at his opinions from an opposing vantage. We care about the politics of our Supreme Court Justices because their politics matters in the way they decide cases. In the context of judicial opinion, that is an unremarkable assertion. In broader contexts prior disposition plays the role politics plays in the judicial case, hence there is an inherent subjectivity to the decisions. Sample conveys the idea of an optimal (and unique) solution to his decisions as the aftermath of thinking gray. Optimal is the engineer’s credo. Though as an economist I was trained to think that way as well, my experience as an administrator suggests there are multiple possible approaches, none a priori optimal, with preference over a particular alternative determined by prior disposition. So I’m inherently subjective in my approach, and my interest is in understanding the interplay of that subjectivity with the facts.
* * * * *

On the Op-Ed page from Friday there was an essay by N.D.B. Connolly entitled Black Culture Is Not the Problem, which argues that the problems we've seen in West Baltimore stem from the extant power structure and the predatory business practices that create a breeding ground for disaffection and ultimately destructive violence. In the same paper there was a column by David Brooks, The Nature of Poverty, which I read as a blame-the-victim piece, even if the title conveys the idea that the victims can't help themselves. This is the way opinion pieces are written - to argue for one position on the matter. The Times doesn't always present two pieces with competing views in the same day, but clearly they believe that competing views need to be presented, which is why they have Conservative and Liberal columnists and why they run guest columns from people with whom the Editorial Board clearly disagrees.
Indeed, go back to the 1970s and consider 60 Minutes, a highly respected and widely viewed program at the time. Its Point/Counterpoint segment celebrated this form of debate. We don't seem to have innovated on how news shows present argument since then.
What we don't have, in other words, are examples for newspaper readers and TV viewers of a first-rate mind in practice thinking gray on the matter. One wonders why not. Would the audience be turned off because the complexity would dumbfound them? Or is it that such first-rate minds are too scarce? Or might it be that media outlets don't view it as their job to teach the audience how to analyze and synthesize news and opinion? On that view, analysis and synthesis are left for the audience to perform; the media's job is simply to present the raw ingredients so the audience can get at them.
I do want to single out Thomas Edsall here, the exception that proves the rule. Not only does he seek out even-handed treatment of whatever topic he discusses and consult a variety of experts, he also weaves his own compelling narrative that helps educate the reader about how to think through the issues. I wish Edsall were the norm. But he is not. The norm is that if you care about an issue you take one side of it or the other, but not both. If you don't take sides, it means you don't care.
I'm afraid the same is true at universities across the country, with the Salaita Case an exemplar from my campus. I wish our rhetoric could embrace a thinking gray approach. For the most part, however, my experience is that it doesn't.