There is a provocative Op-ed in today's New York Times. The piece was written by Andrew Hacker, a professor emeritus from Queens College. A selection from the piece is below as is a link to the full article.
One criticism I can offer of the piece is that it demonstrates how difficult (perhaps impossible) it is to delineate the boundary between those who learn math and those who don't. Poets, one would think, sit nowhere near this boundary. But do note that Whitman wrote When I Heard the Learn'd Astronomer. Though the poem is a critique of an exclusively mathematical and analytic approach to astronomy, wouldn't it be true that if Whitman had known no math whatsoever he wouldn't have attended the lecture in the first place? Perhaps a more telling criticism concerns Hacker's comment about Medical School requirements. Doctors, one might surmise, need a working knowledge of statistics to be able to assess new developments in the profession and thereby recommend treatments and therapies. Where does that working knowledge of statistics come from? Framing the issue this way, the calculus requirement doesn't appear misplaced to me.
There is a different way in which I agree with Hacker. Many students' aim is to "get through" the math, as distinct from learning it so that it becomes part of their own way of thinking about the world. In essence, getting through the math is an admission that they can't learn it. I've written about the issue in an essay called Guessing In Math. There is very little value to be had in any area of study when the student's aim is merely to get through the course.
Nonetheless, I'm not enamored of Hacker's solution to the issue. Having some recall of my own development with math, I believe the focus should rightly be on grades 3 to 5. Some algebra and geometry ideas need to be introduced there, in a way that is accessible to these very young minds. Jerome Bruner's discovery approach would seem essential. I'm guessing that most kids aren't taught that way early on. That's the real shame of it all.
Sunday, July 29, 2012
Friday, July 20, 2012
Why Professor Edmundson Is Wrong
For a faculty member, landing an Op-Ed piece in the NY Times is a feather in one's cap, and one I've wanted on occasion. I've tried a couple of times by submitting an unsolicited piece. Perhaps it's no surprise that I failed to get a hit. So I applaud Professor Edmundson for getting his essay The Trouble With Online Learning into today's paper. (Though perhaps his piece was solicited.) Regardless, here I want to take issue with the arguments he made in that piece. I do think that online learning can be critiqued, but the way Professor Edmundson went about it is not helpful, and some of the things he asserts are ill considered.
Let me begin with where I agree with Professor Edmundson. For that I think it useful to make an analogy between teaching and acting. (See page 3 of this piece, which I wrote in fall 2000 as the then chair of the oversight committee for the Center for Writing Studies. It was part of a failed effort to get other instructors on the committee, and still other instructors who taught courses in the style of Writing Across the Curriculum, to write similar pieces about themselves and to have them express their views regarding the importance of writing in learning.) Around the time of that piece I was a regular viewer of Inside The Actors Studio. Mainly the show provided hour-long interviews with well-known film actors, who would talk about their craft and their formative experiences with acting. Universally, these guests would report a strong preference for the theater over film. They'd rather act on stage because they felt a close affinity to the audience that way and could modulate their delivery based on the audience reaction. Invariably you'd hear them say that while they'd do the same play night after night, each performance was unique owing to the interaction with the audience. Professor Edmundson feels the same way about the importance of interaction with his students while teaching, and he believes that such interactions are an essential characteristic of excellence in instruction. I agree. Indeed, some of my least favorite experiences with learning technology have been giving Webinars or live class sessions online, especially when there is little textual feedback from the audience through the chat window. It's like talking to a wall, with no way of knowing whether the message is getting through.
With that observation let me begin my critique. When creating a blog post or a short video to be viewed in an asynchronous manner, I also don't know whether the audience gets it, unless I subsequently get comments from them. But I can review the work to see if it satisfies my own standards, modify it if the first pass falls short on my personal metric, and eventually come up with something I'm okay with and perhaps even fond of. Thus, I treat my own satisfaction with the work as an alternative to the audience response I expect in teaching. In good part this is why I'm not a big fan of repurposing classroom capture for students not taking the class. Moreover, though seven years ago I was excited by this technology, I'm much cooler toward it now, even for those students in the class. I prefer the use of online micro-lectures integrated with a variety of other content, so students have varied activities to do online. Professor Edmundson misses this distinction between developing prepared content over time and simply recording sessions primarily intended for a live audience. Further, he makes no attempt to take the student perspective and compare online learning with reading the textbook or other print materials, which, like the live classroom, provide fodder for student learning. He forces the comparison with the face-to-face classroom experience, because that's where the teacher presence is centered. He therefore doesn't ask whether students can learn a subject in a deep way entirely on their own, and when that is likely to happen. I'll return to that question later in this piece.
Professor Edmundson talks about his experience with an online course taught by a highly respected instructor. That course featured recorded lectures delivered to a live audience. While the lectures were thoughtful, they were not situated in Professor Edmundson's experience. Ultimately he found them unsatisfying. Based on this one course, he draws a conclusion about an entire medium. This is a very poor way to reason.
There are plenty of bad movies, some made by very good directors and with excellent actors. The existence of these bad movies is not sufficient to condemn film, broadly considered. And let's note as well that reviewers can strongly disagree about the merits of a particular movie. For most of the nation, film is a much greater source of entertainment and education than the theater is, the actors' preference for theater notwithstanding. In some cases that may represent an absolute preference for the medium by the audience. (Last night, my younger son felt compelled to go to the opening of The Dark Knight Rises, a case in point.) For example, computerized special effects are hard to deliver on stage, and some moviegoers may be wowed by movies with lots of special effects. But often the preference for movies is simply a reflection of availability. For those of us teaching at a major university, it is sometimes easy to forget that, for reasons of both price and location, the instruction that goes on there is not readily available to a large fraction of the population. Availability matters. In measuring the overall importance of a medium, it's quite possible for availability to trump quality. Better to argue for a proper balance between the two than to focus solely on the quality dimension.
Let's move on. The biggest flaw in Professor Edmundson's piece is an exclusive focus on the ceiling in education. His argument amounts to making the point that with online learning the ceiling is too low. Implicit in making this argument is that one of two possible assumptions holds. The first is that much learning in the traditional mode happens at or near the ceiling. The second is that the learning experiences away from the ceiling don't matter. He may truly believe one or the other of these is true. Illinois is not Virginia, and teaching Economics is not the same as teaching English. That much I'll readily grant. His experience is undoubtedly different from mine. But I suspect we reside in the same universe, not separate ones.
My experience suggests it is equally important to talk about the floor with traditional approaches and that more often than we'd care to admit the instruction and the student performance falls below what we'd say was an acceptable floor. There is a tendency to ignore the issue because it is unpleasant and exposes the weak underbelly of Higher Ed. Yet if recent national events have taught us anything, they've made clear the need to shine a light even on what is unpleasant. Trying to do my part, I've written about this issue on multiple occasions, most recently this past spring.
In my view the best learning in a course setting happens when the student follows two paths that are distinct but interrelated. The first is the path constructed by the instructor of the course. The second is a path entirely of the student's own making. When the student has done substantial investigation on his own, the good instructor can provide much greater context and generality to what the student is learning as well as to supply the student with further directions for self-study. In this ideal the two paths are mutually supportive. When this happens the student will almost certainly like the course, like the instructor, and learn a good deal.
Shallow to no learning happens when, instead, the student has only one path to follow, the one the instructor provides, and when the student takes a rote learning approach in following that singular path. Alas, rote remains the preferred approach for many students. For such a student the primary goal appears to be to get through the course unscathed. The unintended consequence is that the student emerges from the course unaltered in his thinking. More importantly, the student has gotten no practice at making his own path. Entirely instrumental about his education, he quite possibly becomes cynical about school as a consequence. Early results from the National Survey of Student Engagement showed that often the student indicates satisfaction with the course nonetheless. This appears to be a consequence of the expectation that he will get a good grade and the feeling that he was well prepared for the exams in the course (the way to the good grade). George Kuh explained this finding by arguing that the instruction matched rather than combated the student preference to remain unchallenged intellectually and yet receive good grades. Kuh referred to this as the Disengagement Pact.
Discussing the functional dual-path approach and the dysfunctional, instructor-generated uni-path approach leads to the question of whether there can be good learning when there is a path of the student's making even though the instructor lurks far in the background. The answer appears to be yes, depending on the subject matter, the maturity of the student as a learner, and the materials that are made available in advance to facilitate good outcomes. The students seem to intuit this themselves. I recall the opening plenary of the ELI conference in 2007, where Julie Evans spoke about what she had learned from high school students regarding their attitudes about the use of technology in instruction. As a general proposition, the students felt it was missing in their math instruction, where it could have been put to good use. I agree with them; the technology really helps with mathematical content. Similarly, it is entirely believable to me that it could also be good in learning computer science and other STEM disciplines.
Arguing that online learning, and particularly the MOOC approach to it, can be effective for some students and some subject matter is quite different from claiming that the approach can work for all students and all types of courses. Professor Edmundson offers up a blanket condemnation. He was wrong to do that. A blanket embrace is just as inappropriate. Our time would be better spent were we to try to delimit the boundary of where online can be effective. It would also be well for us to return to the issue of how to break the vicious cycle at the heart of the Disengagement Pact. Each of these stands ahead of us as a worthy area of investigation and further debate.
Tuesday, July 17, 2012
Heat Waves
Looks like it's time for corny jokes. Only the hot weather and the resulting brownout are no joke. I don't know how the crop is doing around here, but I do know our lawn is done for the season. We had a pretty good soaking rain over the weekend. The brown grass did not return to green. I'm told we need perhaps six of those in a short time period for the lawn to recover. The Dust Bowl happened during the Great Depression. I wonder if this is history reminding us that it tends to repeat or if Global Warming doesn't know about the economic woes. One or two more summers like this and the weather and drought could replace the economy as the number one issue.
If we're going to keep on having heat waves, why can't we have the tropical kind? They're certainly more colorful.
Saturday, July 14, 2012
Satellite TV, DVRs, and User Confusion
On a whim, or because she trusted the salesperson at Sam's, my wife switched our TV provider from Dish to DirecTV. Unbeknownst to her at the time, DirecTV and Viacom were having their blowup. She's a big fan of Jon Stewart and Stephen Colbert. Though she can watch their shows on her laptop, it's just not the same experience. I did tell her that Dish has had a similar problem with AMC. She's not a fan of Breaking Bad, but I was. There's just nothing like seeing road rage in an academic.
Anyway, here I write about a different issue. We had multiple receivers with Dish and now we have multiple receivers with DirecTV. One difference, though: with Dish each receiver had a DVR, and what was recorded on that receiver could only be watched on the TV hooked to it. With DirecTV we have one receiver with a DVR, and via the Internet connections in the house, the other receivers can access the recorded videos on it. So, like their commercials say, you can watch the same recording from any location in the house.
However, this doesn't seem to work with live TV. The receiver with the DVR automatically makes a temporary recording of what is being viewed. So you can pause it or rewind. That's kind of nice and once you use that for a few years it becomes a habit that you want to satisfy, at least on occasion. It turns out that the receivers without the DVRs don't do this for live TV. So for them the pause and rewind buttons don't work. Of course, she didn't understand this when she signed up for the service.
I did tell my wife that you can watch YouTube on the DirecTV boxes we have. That includes videos from my ProfArvan channel, with voice-over Excel screen capture doing intermediate microeconomics. She was not amused by this observation.
Within the first month you can stop the DirecTV service if you are not satisfied. Looks like we're going back to Dish.
Thursday, July 12, 2012
Gaming The System Versus Designing It
The main point in this piece is that many people have become quite good at gaming the system but few understand what a good system design looks like. Here I'm talking about social systems - whether driving on the highway, student and teacher behavior in school, citizen and legislator behavior in state or local government, or any other such system. People have learned to play the game to their own personal advantage, and they do so without thought of whether their behavior is beneficial, benign, or deleterious to others. Indeed, a further point of this piece is that quite often we don't know how to evaluate the outcome socially, so instead we build a plausible (but possibly quite incorrect) narrative to give a thumbs up or thumbs down about the social outcome. This second point I'm borrowing from Daniel Kahneman's Thinking, Fast and Slow, where he argues that when we can't answer a difficult question because of the complexity involved, we replace it with a simpler question, answer that, and then assume it is the answer to the original question as well, even when it is not.
Innovation often aids the gaming behavior. Consider radar detectors and driving, perhaps the least controversial example one might come up with. Presumably a radar detector is purchased for one reason only: to lessen the likelihood of getting a ticket for speeding. Armed with a detector, the motorist will drive faster, as long as the detector indicates there are no police in the vicinity. If it is not the absolute speed of the vehicles that matters for safety (above a certain minimum speed, say 50 mph) but rather the variability in speeds that most impacts safety, then it seems clear that the detectors increase variability. Those who have them will drive faster than the rest. The impact on safety is unambiguous: the detectors lower safety. However, in a standard cost-benefit analysis of the sort economists do, one must acknowledge the benefit to the drivers with the detectors. They get to their destinations sooner, so there is a time value to them. What is the result in aggregate? I really don't know. What does seem clear, however, is that there is an equity issue with gaming. Other motorists bear some of the safety risk but get none of the time value benefit. So, conceptually, the analysis is straightforward, but on the actual arithmetic, adding up the benefits and subtracting out the costs, who knows? On a personal note, I'll make the following additional observation. If the possessor of the radar detector is a well-paid professional (hence someone with high time value), middle-aged, and without serious mental or physical health issues, then I believe their own self-preservation instincts mitigate the safety risks. On the other hand, if the possessor of the radar detector is the teenaged child of such a well-paid professional, all bets are off.
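To make the structure of that aggregate calculation concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration; the point is only the shape of the arithmetic - the time-value benefit accrues to detector owners alone, while the safety cost from greater speed variability is borne by everyone on the road.

```python
# Hypothetical tally of the radar detector cost-benefit question.
# All figures are invented for illustration, not estimates.

def aggregate_net_benefit(n_detectors, hours_saved_per_driver,
                          value_of_time, extra_crashes, cost_per_crash):
    """Time savings accrue only to detector owners; the safety cost
    from added speed variability falls on all motorists."""
    time_benefit = n_detectors * hours_saved_per_driver * value_of_time
    safety_cost = extra_crashes * cost_per_crash
    return time_benefit - safety_cost

# Invented figures: 10,000 detector owners each saving 20 hours a year
# valued at $30/hour, against 15 additional crashes at $500,000 each.
net = aggregate_net_benefit(10_000, 20, 30.0, 15, 500_000.0)
print(net)  # 6,000,000 - 7,500,000 = -1,500,000.0, a net social loss here
```

With different invented figures the sign flips easily, which is just another way of saying "who knows?" - the conceptual frame is simple, the empirical magnitudes are not.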
Let's move on to a harder example: standardized test prep courses a la Stanley Kaplan. When I took the SAT, in 1971, I believe the College Board's position was that test prep was of no consequence with regard to the results. My brother bombed the PSAT, took a test prep course thereafter, and his scores went up by over 300 points on the SAT. That's only a single observation, but based on it and what I have gleaned since, there is definitely a real consequence from giving the student confidence about the strategy to use when taking the test, whether to guess or to not answer a question when the student is unsure, and then becoming proficient in implementing that strategy. There can also be a real benefit from taking practice exams, doing diagnostics on those, and then cramming in areas of weakness where some concentrated study might help. Since each student should have the opportunity to put his best foot forward, some might not even consider private test prep as gaming the system. Do note that it's only commercial test prep done outside of school that is being considered here. Lower income students who can't afford commercial test prep therefore don't get its benefit. So, again, there is an equity issue with regard to the behavior. Leaving that aside, where is the social harm from the practice? I'm not sure everyone would agree, but this makes sense to me. If you treat performance on the test as a signal, an indicator of a hidden attribute of the individual which we might call "intellectual ability," then it might be that the test prep makes the signal less informative. Students with good scores might have high ability, but perhaps their ability is closer to average and the score is more a consequence of the private coaching the students have received.
If that's true then schools in their admission practices either end up making more mistakes regarding whom to admit as a consequence of the test prep or the schools must incur substantial costs to consider other indicators of the students' ability, so as not to make those mistakes. That's where the social harm is.
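The "less informative signal" argument can be put in Bayes' rule terms with a toy calculation. The probabilities below are invented for illustration; the sketch only shows how coaching that lifts average-ability students' scores weakens what a good score says about underlying ability.

```python
# Toy Bayesian sketch of the signaling argument. Ability is hidden
# (high vs. average); a "good score" is the observed signal.
# All probabilities are invented for illustration.

def p_high_given_good(p_high, p_good_if_high, p_good_if_avg):
    """Posterior probability of high ability given a good score."""
    num = p_high * p_good_if_high
    den = num + (1 - p_high) * p_good_if_avg
    return num / den

# Without prep: average-ability students rarely score well.
print(p_high_given_good(0.2, 0.9, 0.1))   # ~0.69

# With prep: coaching lifts average-ability students' scores,
# so a good score now says much less about underlying ability.
print(p_high_given_good(0.2, 0.9, 0.4))   # ~0.36
```

A drop in the posterior from roughly 0.69 to 0.36 is exactly the sense in which admissions offices would make more mistakes, or have to pay for other indicators, when the signal is degraded.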
Before pushing on, I'd like to refine the above based on my perspective as an instructor. The expression intellectual ability may not convey the right meaning. What one really wants are certain habits of mind, and that the student reads, on a regular basis, stimulating and challenging material of his own choosing. Below is an excerpt from a post I wrote soon after starting this blog in 2005. It indicates that frequently we fall far short of this requirement. Further, I should observe that this requirement cannot be gamed the way a standardized test can. It requires substantial deliberate practice.
* * * * *
The innovations discussed above happened some time ago. By focusing on the past where we are familiar with how the innovations have been subsequently utilized we can consider the impact of these innovations over the fullness of time. I don't know the extent of the radar detection business since the 1970s (during the second OPEC price shock many states lowered their speed limits and that gave a boost to radar detection but when the price of gasoline came back down the speed limits went back up), but clearly test prep has been a growth industry, at least till the recent downturn in the economy. Let's now focus our attention on past innovation in financial engineering. In particular, I want to look at junk bonds and the subsequent behavior that engendered as well as the securitization of residential mortgages and the consequences from that, from the lens of whether they enabled gaming the system, and if so how, or if alternatively they promoted economic efficiency and then describe the mechanism by which that occurred.
There are a several reasons for doing so. First and foremost, these innovations in financial engineering are what many people associate with the economic consequences of the Reagan Revolution. Ironically, both came into being during the Carter Presidency, though they clearly were popularized under Reagan. Michael Milken, who worked at Drexel Burnham Lambert, invented the junk bond. Since about 10 years later he was convicted of securities fraud, in the minds of many the junk bond concept is highly suspect. I hope that readers can suspend judgment in reading my arguments below. In contrast, Lewis Ranieri, the inventor of securitization, had a largely intact reputation, at least until the subprime crisis unfolded. A second reason is that in preparing for my class this fall, which will use the text Economics, Organization, and Management by Milgrom and Roberts, in chapter six that introduces the concept "moral hazard" there is a rather extensive discussion of the economics behind the Savings and Loan Crisis of the 1980s. I thought their straightforward analysis was a good way to consider both the gaming the system and efficiency questions. The third reason is to consider the issue of resolution of underwater mortgages and the impact securitization has had on such resolution, especially in the case where an entire community is beset by such mortgages and hence where there isn't a market for repurchase of the homes at discounted prices. I doubt the issue was considered at all in the 1970s, when Ranieri invented these instruments. Yet it is something to think through now, since its resolution clearly matters a lot moving forward. Joe Nocera had an interesting column about this yesterday. He takes the point of view of the homeowners, which is natural. One should be sympathetic to them. But what of the system overall. That system includes those who hold the mortgage securities. What is a good outcome for the system taking into account all interests? 
The last reason is that by thinking through the gaming the system versus efficiency arguments, one can get a better feel for how much regulation in this sector is appropriate.
Before getting to this, let's take a brief look at the macroeconomics by looking at the following two graphs. The reason for doing this is to ask whether it is possible to "see" the efficiency consequences of a change in the economic environment by looking at GDP growth. One might conjecture that if an important practice makes industry more efficient, that in turn would make the economy grow faster. Alternatively, one might expect that improved efficiency would reduce the volatility in growth rates. Then too, by looking at growth rates, one entirely abstracts from the distributional consequences of the change in environment. So much has been written about income (and wealth) inequality as of late that I don't want to take it on here. This is not to say those issues are unimportant. It is only to say that there is enough on our plate to look at these other issues here.
To generate this first graph I took historical data from the Bureau of Economic Analysis available as an Excel workbook, with a series that measures real GDP in constant 2005 dollars. For every pair of consecutive years, Y0 and Y1, I computed GDP(Y1)/GDP(Y0) - 1, expressed that as a percentage and then called that the growth rate in Y1. Though Edward Tufte probably wouldn't like what I did next, I had Excel connect consecutive plotted points with straight line segments. I found this easier to read than looking directly at the scatter without the line segments included. Let me make a few observations, simply from reading the graph. There are multiple periods where the growth rate is negative. These are recessions. (There were 2 in the 1950s, none in the 1960s. Note that this chart uses annual data. Typically when we speak about recession we use quarterly data and it requires two consecutive quarters of negative growth.) Usually when the recession ends, there is a brief period of very high (over 5% real) growth. Much of that is catch up for the lost time during the recession. Then growth calms down to a more modest rate, until the next trough begins. Also, note that external "shocks" can trigger a recession rather than simply the business cycle doing its thing. In the 1970s, there were two OPEC Oil Price shocks. The first in 1973-74. The second in 1979. This was the period known as stagflation. Paul Volker pursued a tight money policy to wrest the inflation out of the economy. The created another recession in the early 1980s. The next recession, mild as indicated by this graph, happened during the 1992 election season. You'll recall the the expression coined at that time, "It's the economy, stupid." I trust that people know the rest of the history well enough to explain the remaining portions of the graph.
GDP growth can be broken up into two components - population growth and productivity growth. To abstract from the former, one also looks at per capita GDP growth. This next table was generated from different data provide by the Census. It is from the spreadsheet on Per Capita Income All Races. The same sort of process was used to produce the growth rates. It doesn't go back quite so far in time, but it covers the years that are relevant for this discussion. The periods of negative growth in this table are what most of us think of as a recession, though that is not the official definition. Using nothing more than an eyeball test, the cycle under Reagan looks quite similar to the cycle under Clinton. The prior cycle, roughly from the end of Nixon to the end of Carter, has the same amplitude as the other two, but had shorter duration. The cycle under Bush II had less amplitude, even before the financial crisis. And the recession that started just before Obama took office had a lower trough and has not yet concluded.
Let me make one more point before getting to the financial engineering. These data only show the actual history. They say nothing of the road not taken. One might argue that without the move to deregulation, which Carter started but which was accentuated under Reagan, the economy would have performed much more sluggishly, especially in light of the threat from international competition, particularly from the Japanese. That's a conjecture. It might be right, but we really can't say. A different conjecture is that the economy would have performed pretty much the same, as the PC revolution would have happened anyway and was a big driver of the economy under Reagan. Likewise the rise of the commercial Internet gave a big boost to the economy under Clinton, irrespective of the policies his administration pursued. This alternative conjecture suggests that much of the economic growth would have happened regardless of which party controlled the White House. I don't know if either of these arguments is correct. All that the data allow is a comparison of different historical periods. The data also don't allow you to conclude whether the policy consequences are fairly immediate or, in contrast, only result after some very long lags. It may be that the slower growth under Bush II is attributable to the deregulation under Reagan. It's possible. The data don't let you conclude that.
When an entire community is beset by underwater mortgages, there is no market for resale of the homes, and the efficient solution probably entails the existing owners staying in their homes. If the mortgages were held locally, this solution could be obtained by the lender and the homeowner negotiating a new mortgage that makes sense in the current environment. There might be some delay getting to that new mortgage as a result of the haggling between the lender and the homeowner over the terms of the loan. But given those terms, each party has an incentive to move to the new mortgage as soon as possible, for fear that the old mortgage will become nonperforming, in which case the lender will assume a property it has no ability to resell. That outcome would be unfortunate and benefits nobody. Thus, if the mortgages were locally held, one might envision that it would take a while to get the first few mortgages renegotiated, but thereafter many of the mortgages would be renegotiated quickly as the new terms became more standardized.
The situation is quite different with securitized mortgages. There is first the question of the extent of reinsurance (credit default swaps). If a loan renegotiation triggers a payment on a credit default swap, then those obligated to make such payments have an incentive to block the renegotiation if they can. There is second the issue of whether the valuation technique in the securitization accurately reflects true loan value, inclusive of the likelihood of renegotiation, or if it is more an artistic rendering tied principally to the face value of the original loans. If there is this artistic valuation aspect, the holders of the securities may have an incentive to block the renegotiation, since it would force the securities' asset values downward. There is third the issue of whether the terms of renegotiation of loans in one community would be independent of the terms negotiated elsewhere in the country. If, rather than independence, early settlement in one community created a benchmark for settlement elsewhere, then the security holders would have an incentive to block negotiation in that initial community unless the terms were quite favorable to them. Taken together, these reasons suggest that securitization is a significant force blocking a sensible solution and hence a significant factor in prolonging the economic slump.
Now let's unfold this via backwards induction. Milken, in the NY Times piece about him that is linked above, argues that the subprime crisis was entirely attributable to a decline in underwriting standards (making 100% loans and not verifying the income of the new homeowner). But he does not ask whether underwriting standards can be controlled by the market or if they will deteriorate naturally over the course of the business cycle, and further whether securitization facilitates their decline by masking the riskiness of the loans. Even absent this masking from combining mortgages, at the local level there is an incentive for appraisals to come in high and for the lenders to want the selling price high, especially for loans with points, because the originators of the loan profit more this way. Full securitization of the mortgage undoes the incentive for the local lender to play the role of impartial monitor of the loan and thereby to not make loans that are likely to go bad. This is a problem that can be anticipated.
Yet a system of partial securitization, where some of the loan is locally held and the remainder is resold, means there is a lower volume overall in mortgage securities. Since these instruments seem so profitable in good times, the market wants 100% securitization then. This myopia can then be seen as a reason to regulate and also provides a sense of the sort of regulations that are necessary.
The gamers of the system can move faster than the designers of it. System design is always a compromise between competing ends. Gaming can be single-minded in its purpose. That much should be understood about any system. These thoughts should stay with us, and they explain why a total embrace of laissez-faire is extremely dangerous. It empowers gamers and allows them to do much damage. Our debates on the matter should be about comparing different forms of imperfection rather than about a search for the elusive and the impossible.
Innovation often aids the gaming behavior. Consider radar detectors and driving, perhaps the least controversial example one might come up with. Presumably a radar detector is purchased for one reason only, to lessen the likelihood of getting a ticket for speeding. Armed with a detector, the motorist will drive faster, as long as the detector indicates there are no police in the vicinity. If it is not the absolute speed of the vehicles (above a certain minimum, say 50 mph) but rather the variability in speeds that most impacts safety, then it seems clear that the detectors increase variability. Those who have them will drive faster than the rest. The impact on safety is unambiguous: the detectors lower safety. However, in the standard cost-benefit analysis that economists do, one must acknowledge the benefit to the drivers with the detectors. They get to their destinations sooner, so there is a time value to them. What is the result in aggregate? I really don't know. What does seem clear, however, is that there is an equity issue with gaming. Other motorists bear some of the safety risk but get none of the time value benefit. So, conceptually, the analysis is straightforward, but on the actual arithmetic, adding up the benefits and subtracting out the costs, who knows? On a personal note, I'll make the following additional observation. If the possessor of the radar detector is a well paid professional (hence someone with high time value), middle aged, and without serious mental or physical health issues, then I believe their own self-preservation instincts mitigate the safety risks. On the other hand, if the possessor of the radar detector is the teenaged child of such a well paid professional, all bets are off.
Let's move on to a harder example, standardized test prep courses a la Stanley Kaplan. When I took the SAT, in 1971, I believe the College Board's position was that test prep was of no consequence with regard to the results. My brother bombed the PSAT, took a test prep course thereafter, and his scores went up by over 300 points on the SAT. That's only a single observation, but based on it and what I have gleaned since, there is definitely a real consequence from giving the student confidence about the strategy to use when taking the test - whether to guess or to leave a question blank when unsure - and then becoming proficient in implementing that strategy. There can also be a real benefit from taking practice exams, doing diagnostics on those, and then cramming in areas of weakness where some concentrated study might help. Since each student should have the opportunity to put his best foot forward, some might not even consider private test prep as gaming the system. Do note that it's only commercial test prep done outside of school that is being considered here. Lower income students who can't afford commercial test prep therefore don't get its benefit. So, again, there is an equity issue with regard to the behavior. Leaving that aside, where is the social harm from the practice? I'm not sure everyone would agree, but this makes sense to me. If you treat the performance on the test as a signal, an indicator of a hidden attribute of the individual which we might call "intellectual ability," then it might be that the test prep makes the signal less informative. Students with good scores might have high ability, but perhaps their ability is closer to average and the score is more a consequence of the private coaching the students have received.
If that's true, then schools in their admission practices either end up making more mistakes regarding whom to admit as a consequence of the test prep, or they must incur substantial costs to consider other indicators of the students' ability so as not to make those mistakes. That's where the social harm is.
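The less-informative-signal argument can be made concrete with a toy Bayesian calculation. The numbers below are purely my invention for illustration; the point is only the direction of the effect:

```python
# Toy illustration (invented numbers): how coaching can make a test
# score less informative as a signal of underlying ability.
# Suppose half of applicants have high ability. A high-ability student
# scores well with probability 0.8. An average student scores well with
# probability 0.2 without coaching, but 0.6 with coaching.

def p_high_given_good_score(p_good_if_high, p_good_if_avg, p_high=0.5):
    """Bayes' rule: P(high ability | good score)."""
    p_good = p_good_if_high * p_high + p_good_if_avg * (1 - p_high)
    return p_good_if_high * p_high / p_good

uncoached = p_high_given_good_score(0.8, 0.2)  # 0.8
coached = p_high_given_good_score(0.8, 0.6)    # about 0.57
```

A good score moves the admissions officer's belief about the student from 50% to 80% when nobody is coached, but only to about 57% when average students can be coached up. That gap is the sense in which the signal has become less informative.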
Before pushing on, I'd like to refine the above based on my perspective as an instructor. The expression intellectual ability may not convey the right meaning. What one really wants are certain habits of mind, along with a student who reads stimulating and challenging material of his own choosing on a regular basis. Below is an excerpt from a post I wrote soon after starting this blog in 2005. It indicates that frequently we fall far short of this requirement. Further, I should observe that this requirement cannot be gamed the way a standardized test can. It requires substantial deliberate practice.
In particular I want to consider information literacy and its importance in the curriculum. As a teacher, I have to say that "old fashioned" literacy is more important to me. I'm of the mind that many of my students don't get the meaning from a New York Times story. I've tested that proposition on occasion with articles I've picked and assigned to the class, either from the Business section or the Magazine. I don't talk about this issue much if at all (except with a particular colleague who teaches Natural Resource Economics and agrees with me fully on this proposition). And I haven't seen it discussed, but it seems to me to be at the heart of the matter. Students need a well trained "voice in their head" which argues propositions, including what they read. They need to disagree with things when they don't add up, but they need to be able to "get it" without undue difficulty when the meaning is straightforward. It is a reasonable expectation (in the normative sense) that students have these abilities when they enter college. But, I fear, all too many of the students falter here. Because these kids are bright, I'm going to say the culprit is that they don't read enough, and so this habit of arguing with the voice in their head is not well cultivated. This is a real problem. I don't have a great solution for it, other than that the kids need to develop the habit of reading and to think of reading as internal argument.
* * * * *
The innovations discussed above happened some time ago. By focusing on the past, where we are familiar with how the innovations have subsequently been utilized, we can consider their impact over the fullness of time. I don't know the extent of the radar detection business since the 1970s (during the second OPEC price shock many states lowered their speed limits, which gave a boost to radar detection, but when the price of gasoline came back down the speed limits went back up), but clearly test prep has been a growth industry, at least until the recent downturn in the economy. Let's now focus our attention on past innovation in financial engineering. In particular, I want to look at junk bonds and the behavior they engendered, as well as the securitization of residential mortgages and its consequences, through the lens of whether they enabled gaming the system, and if so how, or whether they instead promoted economic efficiency, and then describe the mechanism by which that occurred.
There are several reasons for doing so. First and foremost, these innovations in financial engineering are what many people associate with the economic consequences of the Reagan Revolution. Ironically, both came into being during the Carter Presidency, though they clearly were popularized under Reagan. Michael Milken, who worked at Drexel Burnham Lambert, invented the junk bond. Because he was convicted of securities fraud about ten years later, the junk bond concept is highly suspect in the minds of many. I hope that readers can suspend judgment in reading my arguments below. In contrast, Lewis Ranieri, the inventor of securitization, had a largely intact reputation, at least until the subprime crisis unfolded. A second reason is that in preparing for my class this fall, which will use the text Economics, Organization, and Management by Milgrom and Roberts, I found that chapter six, which introduces the concept of "moral hazard," contains a rather extensive discussion of the economics behind the Savings and Loan Crisis of the 1980s. I thought their straightforward analysis was a good way to consider both the gaming-the-system and efficiency questions. The third reason is to consider the issue of resolving underwater mortgages and the impact securitization has had on such resolution, especially in the case where an entire community is beset by such mortgages and hence where there isn't a market for repurchase of the homes at discounted prices. I doubt the issue was considered at all in the 1970s, when Ranieri invented these instruments. Yet it is something to think through now, since its resolution clearly matters a lot moving forward. Joe Nocera had an interesting column about this yesterday. He takes the point of view of the homeowners, which is natural. One should be sympathetic to them. But what of the system overall? That system includes those who hold the mortgage securities. What is a good outcome for the system taking into account all interests?
The last reason is that by thinking through the gaming the system versus efficiency arguments, one can get a better feel for how much regulation in this sector is appropriate.
Before getting to this, let's take a brief look at the macroeconomics by looking at the following two graphs. The reason for doing this is to ask whether it is possible to "see" the efficiency consequences of a change in the economic environment by looking at GDP growth. One might conjecture that if an important practice makes industry more efficient, that in turn would make the economy grow faster. Alternatively, one might expect that improved efficiency would reduce the volatility in growth rates. Then too, by looking at growth rates, one entirely abstracts from the distributional consequences of the change in environment. So much has been written about income (and wealth) inequality as of late that I don't want to take it on here. This is not to say those issues are unimportant. It is only to say that there is enough on our plate to look at these other issues here.
To generate this first graph I took historical data from the Bureau of Economic Analysis, available as an Excel workbook, with a series that measures real GDP in constant 2005 dollars. For every pair of consecutive years, Y0 and Y1, I computed GDP(Y1)/GDP(Y0) - 1, expressed that as a percentage, and called that the growth rate in Y1. Though Edward Tufte probably wouldn't like what I did next, I had Excel connect consecutive plotted points with straight line segments. I found this easier to read than looking directly at the scatter without the line segments included. Let me make a few observations, simply from reading the graph. There are multiple periods where the growth rate is negative. These are recessions. (There were two in the 1950s, none in the 1960s. Note that this chart uses annual data. Typically when we speak about recession we use quarterly data, and it requires two consecutive quarters of negative growth.) Usually when the recession ends, there is a brief period of very high (over 5% real) growth. Much of that is catch up for the lost time during the recession. Then growth calms down to a more modest rate, until the next trough begins. Also, note that external "shocks" can trigger a recession rather than simply the business cycle doing its thing. In the 1970s, there were two OPEC oil price shocks, the first in 1973-74, the second in 1979. This was the period known as stagflation. Paul Volcker pursued a tight money policy to wrest the inflation out of the economy. That created another recession in the early 1980s. The next recession, mild as indicated by this graph, came just before the 1992 election season. You'll recall the expression coined at that time, "It's the economy, stupid." I trust that people know the rest of the history well enough to explain the remaining portions of the graph.
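The growth-rate calculation is easy to reproduce in a few lines of code. Here is a sketch in Python; the GDP figures are made-up placeholders standing in for the BEA series, not actual data:

```python
# Growth rate attributed to year Y1: GDP(Y1)/GDP(Y0) - 1, as a percent.
# The figures below are placeholders, not actual BEA numbers.
gdp = {2005: 12000.0, 2006: 12360.0, 2007: 12607.2, 2008: 12481.1}

years = sorted(gdp)
growth = {y1: (gdp[y1] / gdp[y0] - 1) * 100
          for y0, y1 in zip(years, years[1:])}
# growth[2006] is about 3.0; growth[2008] is negative, a contraction.
```

The same arithmetic applies unchanged to the per capita series discussed next; only the input data differ.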
GDP growth can be broken up into two components - population growth and productivity growth. To abstract from the former, one also looks at per capita GDP growth. This next table was generated from different data provided by the Census. It is from the spreadsheet on Per Capita Income All Races. The same sort of process was used to produce the growth rates. It doesn't go back quite so far in time, but it covers the years that are relevant for this discussion. The periods of negative growth in this table are what most of us think of as a recession, though that is not the official definition. Using nothing more than an eyeball test, the cycle under Reagan looks quite similar to the cycle under Clinton. The prior cycle, roughly from the end of Nixon to the end of Carter, has the same amplitude as the other two, but had shorter duration. The cycle under Bush II had less amplitude, even before the financial crisis. And the recession that started just before Obama took office had a lower trough and has not yet concluded.
Let's get back to financial engineering and begin with the Milgrom and Roberts analysis. There are three players - depositors, the S&L itself, and the government, which provides deposit insurance. The analysis shows the following intuitive result. When taking on an investment project that has both upside and downside risk, if the S&L faces limited downside risk (with the government bearing the bulk of it), then the S&L has a preference to invest in projects with a high upside regardless of the expected return on the project. The moral hazard that Milgrom and Roberts discuss occurs in an example where the safer project also has the higher expected return, but the S&L opts for the riskier project, thereby creating an expected social loss, which is borne by the taxpayers.
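A two-project numerical example in the spirit of the Milgrom and Roberts discussion shows the mechanism. The payoffs here are my own invention, chosen only to make the incentive visible:

```python
# Two projects, equal investment, funded by insured deposits of 100.
# Safe project: pays 110 for sure. Risky project: pays 180 or 20,
# each with probability 1/2. Expected values: safe 110 > risky 100,
# so the safe project is socially preferred. But the S&L's downside
# is capped: it keeps max(payoff - deposits, 0), and the insurer
# (ultimately the taxpayer) covers any shortfall.

def sl_expected_profit(payoffs, probs, deposits=100):
    """Expected profit to the S&L under limited downside risk."""
    return sum(p * max(x - deposits, 0) for x, p in zip(payoffs, probs))

safe = sl_expected_profit([110], [1.0])            # 10
risky = sl_expected_profit([180, 20], [0.5, 0.5])  # 40
```

The S&L prefers the risky project (40 versus 10) even though its expected social value is 10 lower; that 10 is the expected loss shifted onto the insurer.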
Let's next note that for any investor, financing a project with junk bonds in the presence of limited liability from the bankruptcy laws is very much like the situation for an S&L with deposit insurance. There will be a preference for riskier projects, to take advantage of the big upside, since the investor is largely protected from the downside. It's other creditors who will bear that risk.
Now the new wrinkle, an important one. It may very well be that from the social perspective insufficient risk per project is taken. Consider this piece about cancer research over the last forty years, which makes exactly this point. Individual researchers, who have their own reputations to care about, take on safe projects so that they have a very good chance to succeed, thereby enhancing their own reputation. But the field as a whole advances little as a consequence. Let's bring this thought back to the investment context. The efficiency argument about the hostile takeovers that the junk bonds enabled is that entrenched senior management, which received substantial perks from the status quo, worked to preserve that rather than to invent the next great thing and engage in creative destruction of the old. Further, the traditional motive for innovation, from competition in the product market, was muted in the early 1980s because of the oligopolistic nature of industry and the tacit collusion between the players. (Think of the Big Three auto makers.) Holmstrom and Kaplan use this argument to explain the merger activity and other changes in corporate governance that were widespread in the 1980s.
Let's stick with this framework as is for a bit before going outside it and critiquing it. The incentive for hostile takeover or making other risky investments is driven by the potential upside, not the expected return. When the expected return is high, the behavior can be socially advantageous, especially if the risks are independent going from one industry to the next. When the expected return is low, the behavior will be socially deleterious and then amounts to gaming the system. The same behavior can be one or the other. There is no telltale sign up front which it is. It may be that early in the business cycle there is a comparative abundance of risky but high expected return projects. Later there may be comparatively few of those but more high risk, low expected return projects. Thus the gaming of the system is apt to speed up before the next trough. It's what feeds the bubbles and makes the trough deeper. Knowing this, a regulator would want to be pretty hands off coming out of a slump, but then start to apply the brakes as the economy improves and become heavy handed when the gaming behavior seems to be on the uptick. If it is not possible to apply a time sensitive approach to regulation (the law is the law, the enforcement is the enforcement), some choice has to be made between more rapid growth out of a trough accompanied by more rapid plunging into the next trough, versus perhaps lower growth overall but a less extreme business cycle.
Now some critique. I offer up three distinct criticisms. Quite possibly more could be generated.
- Reform the company or gut it? Reform of a company does not occur with a flick of a switch. It is hard work to change the corporate culture. It may take substantial new investment. It likely requires an extended duration to put the new business practices into place and make them effective. Perhaps in some cases the efficient solution is to gut the company - its current assets on hand exceed the expected value to be obtained from reform. The issue here is whether the hostile takeover approach gets this determination right or if there is bias in it. On this I think it is telling to read this George Gilder quote from Milken's Wikipedia entry, offered up to defend Milken from his critics. "Milken was a key source of the organizational changes that have impelled economic growth over the last twenty years. Most striking was the productivity surge in capital, as Milken … and others took the vast sums trapped in old-line businesses and put them back into the markets." To me this reads - if a company has a lot of cash on hand, it is a good target for acquisition. That money should be put into circulation in new business. Observe that many quite successful companies today, particularly the big technology firms, have a practice of keeping a lot of cash on hand. If that really was the main criterion for takeover (and takeover threat) it readily could be biased toward too much gutting activity and insufficient reform of businesses that should have been sustained.
- Reactions at other companies? It is well understood that many other companies which ended up not being acquired adopted defensive practices to avoid being taken over. Thus poison pills and golden parachutes became part of the lexicon. I'd like to focus on a different adjustment - the movement away from defined benefit pension plans to 401K plans. I've written about this elsewhere in an essay entitled Rethinking The Social Contract. There I argued that we should return to the defined benefit pension approach, but make the plans balanced in an actuarial sense. People have confused the generosity of benefits with the issue of who should bear the risk in the returns to savings. Retirement savings should be insured. Defined benefit plans do that while 401K plans do not. The move to 401K plans surely has been pernicious for society as a whole. It has lowered the personal saving rate. It has substantially weakened the bond between employer and employee. And it has created a large population on the verge of retirement yet unprepared for it, because employees have been myopic with regard to their contributions. (They don't contribute enough early in their careers and can't make it up by contributing more later.)
- The wrong tool in a flat world? What does the threat of takeover do in industries where there is vibrant competition between the players? A good argument is that the threat is pernicious in that it forces incumbent firms to focus overly on near term profitability, especially when longer term projects require substantial lumpy investment and hence demand that firms build up piles of cash, as discussed above. So these firms eschew the longer term projects and invariably shorten their own half-life as successful companies as a result. Perhaps there was a one time benefit in the Reagan years from the spate of takeovers that took place then. But in the presence of tough external competition, underperforming firms will fail of their own accord. Yet these possibilities engendered by leveraged buyouts still exist. Why? Here it also should be noted that balance sheets are a very limited way to measure the likelihood of future success in an industry. There has been a historical tension between MBA leadership in firms and engineer leadership. The financial engineering tools have tilted the balance in the wrong direction.
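The point in the second item above, that contributions skipped early in a career can't be made up by contributing more later, is compound interest at work. A quick sketch, with figures invented for illustration (a 5% annual return over a 40-year career, not numbers from the essay):

```python
def future_value(contribution, start_year, end_year, horizon=40, r=0.05):
    """Value at retirement of equal end-of-year contributions made
    from start_year through end_year inclusive (years numbered 1..horizon),
    each compounding at rate r until the horizon."""
    return sum(contribution * (1 + r) ** (horizon - t)
               for t in range(start_year, end_year + 1))

early = future_value(5_000, 1, 20)    # half the annual amount, saved early
late  = future_value(10_000, 21, 40)  # twice the annual amount, saved late

print(round(early), round(late))
```

Even though the late saver contributes twice as much in total, the early contributions end up worth more at retirement, because each early dollar compounds for an extra twenty years.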
When an entire community is beset by underwater mortgages, there is no market for resale of the homes and the efficient solution probably entails the existing owners staying in their homes. If the mortgages were held locally, this solution could be obtained by a negotiation between the lender and the homeowner over a new mortgage that makes sense in the current environment. There might be some delay getting to that new mortgage as the result of the haggling between the lender and the homeowner over the terms of the loan. But given those terms, each party has incentive to move to the new mortgage asap, for fear that the old mortgage will become under performing and then the lender will assume the property with no ability to resell it. That outcome would be unfortunate and would benefit nobody. Thus, if the mortgages were locally held, one might envision that it would take a while to get the first few mortgages renegotiated but thereafter many of the mortgages would be renegotiated quickly as the new terms became more standardized.
The situation is quite different with securitized mortgages. There is first the question of the extent of reinsurance (credit default swaps). If a loan renegotiation triggers a payment on a credit default swap, then those obligated to make such payments have incentive to block the renegotiation if they can. There is second the issue of whether the valuation technique in the securitization accurately reflects true loan value, inclusive of the likelihood of renegotiation, or if it is more an artistic rendering tied principally to the face value of the original loans. If there is this artistic valuation aspect, the holders of the securities may have incentive to block the renegotiation, since that would force downward the securities' asset values. There is third the issue of whether the terms of renegotiation of loans in one community would be independent of the terms so negotiated elsewhere in the country. If rather than independence early settlement in one community created a benchmark for settlement elsewhere, then the security holders would have incentive to block negotiation in that initial community unless the terms were quite favorable to them. Taken together, these reasons suggest that securitization is a significant force in blocking a sensible solution and hence is a significant factor in prolonging the economic slump.
Now let's unfold this via a backwards induction. Milken, in the NY Times piece about him that is linked above, argues that the subprime crisis was entirely attributable to a decline in underwriting standards (making 100% loans and not verifying the income of the new homeowner). But he does not ask whether underwriting standards can be controlled by the market or if they will deteriorate naturally over the course of the business cycle, and further whether securitization facilitates their decline by masking the riskiness of the loans. Even absent this masking from combining mortgages, at the local level there is an incentive for appraisals to come in high and for the lenders to want to make the selling price high, especially for loans with points, because the originators of the loan profit more this way. Full securitization of the mortgage undoes the incentive for the local lender to play the role of impartial monitor of the loan and thereby to not make loans that are likely to go bad. This is a problem that can be anticipated.
Yet a system of partial securitization, where some of the loan is locally held and the remainder is resold, means there is a lower volume overall in mortgage securities. Since these instruments seem so profitable in good times, the market wants 100% securitization then. This myopia can then be seen as a reason to regulate and also provides a sense of the sort of regulations that are necessary.
The gamers of the system can move faster than the designers of it. System design is always a compromise between competing ends. Gaming can be pure towards a sole purpose. That much should be understood about any system. These thoughts should be with us and it is why a total embrace of Laissez-Faire is extremely dangerous. It empowers gamers and allows them to do much damage. Our debates on the matter should be about comparing different forms of imperfection rather than about a search for the elusive and the impossible.
Monday, July 09, 2012
Why Don't Economists Run for High Political Office?
I wonder whether there would be more coherence in discussing fiscal policy if some members of Congress or the occupant of the White House were economists. Then, too, one can ask whether that being the case would place a higher standard on the press to explain what is going on with the economics.
With respect to the example given by the McLaughlins in the clip below, what was their income in the decade before the Bush Tax Cuts were put into place? Was it that same $82,000? If their income has declined precipitously since Bill Clinton was President, then perhaps one can understand why going back to the pre-Bush tax rates would be so onerous. If, alternatively, their income has been flat, why would going back to the Clinton tax rates cause such a hardship? Do they now have private spending needs that they didn't have twelve years ago? Or is something else amiss here?
There is the further issue that in talking about fiscal policy sensibly, one has to consider the entire picture - expenditure and revenue, near term and long term. Economists, whether of the left or the right, would insist on that much. Alas, we hardly seem to get this in the public discussion.
The article makes it seem as if this latest policy recommendation is merely posturing for the fall election. Suppose that's true. Why should the posturing matter at all to voters? Given the gridlock we've seen with this Congress, there shouldn't be any hope that the lame duck session will do something sensible about the "fiscal cliff" mentioned in the clip below. The thought would then be that with a different Congress perhaps something sensible might be done. If that's the thinking, economists in high office would insist it be discussed now.
There is a strong case to be made for new near term federal spending, simply to offset the reduction in state spending since the stimulus package has worn off. Where is that in the policy recommendation?
Saturday, July 07, 2012
Lessons Learned
We offer up a note of thanks.
Here's hats off to fraudulent banks.
Their profits from the trading desk
Are more obscene than bad burlesque.
Wanting big returns more and more
They fixed the rate known as Libor.
"In God we trust," they danced the jig.
Now we know the system is rigged.
This has created a schism
Regarding capitalism.
The markets know best some will say.
Their advice? Get out of the way.
Others see a vast infection,
Which suffers from late detection,
And from not applying a cure,
A part of which must be de jure.
Punish wrongdoers they aver.
That's how to malfeasance deter.
Reasonable folks disagree.
That much is plain to you and me.
Doing too much or too little
Credit markets now are brittle.
Government efforts to no avail
The true end of too big to fail.
The banks had the ability
To show us instability
About the system as a whole;
That has been their positive role.
Ignorance is bliss, it is said.
With knowledge there is only dread.
Yet here let's give the banks a hand.
It will work out though not as planned.
A corrupt system has a plus
For it shows the true problem's us.
Tuesday, July 03, 2012
Sweet Potatoes
I'm prone to experiment - not the scientific method type of experiment, just trying things out that are not standard, to see how they go. Last week a couple of them turned out pretty well, for a change. Several different friends from out of town were coming for the eTextbook UnConference. Each wanted to see me socially while they were here. So I invited them out to the house as a group. I said I'd grill out and have plenty to drink. We could all relax that way. That became the plan.
I chose that instead of going to a restaurant for a couple of reasons. One is simply that going to a restaurant is what we do when each of us is out of town, on neutral turf. In this case, however, we were on my turf, so I felt obliged to signify that in some way. The other was that one of the group, Steve Acker, was coming with his wife, Suzanne, and their dog, Jasmine. They were en route to their house in Colorado, using the conference as the first resting point during the trip. If we went out to dinner, either Suzanne and Jasmine wouldn't come, not a particularly friendly outcome, or Suzanne would come but Jasmine wouldn't, in which case Suzanne would be the only non learning technologist in the group and Jasmine would be alone in a hotel room. Neither of those were happy prospects. By having them all to the house my wife and kids would also participate - making it much simpler and more comfortable for Suzanne and Steve.
We have a dog too, Ginger, a golden doodle, now eight years old. Ginger is pretty set in her ways. She's calmed down a lot since she was a puppy, but when people come to the house she temporarily forgets her age and starts acting like a puppy again. She barks up a storm and might jump onto someone - not with the intention of hurting them, only to test the waters. Eventually the person starts to look familiar and Ginger calms down. That's been her routine for quite a while. But we haven't had another dog in the house for several years. So we didn't know how Ginger would react to Jasmine. Plan B was to put her in the basement and let one of the kids be down there with her. (When we have a thunderstorm, that's her hiding place.) We didn't need Plan B. Jasmine and Ginger got along okay. They even played a bit together outside. And Jasmine did one of Ginger's favorite tricks when it's hot out: climbing into the pond to cool off.
The other experiment started with the decision to grill the night before and serve the food cold the next day. That way I wouldn't be all grotty when the guests arrived. With things prepared in advance I'd feel a bit more in control. I grilled several different meats - pork tenderloin, brats, and turkey breast. (I thought the turkey breast would be for dinner the night of the grilling, but the rest of the family had some of the pork tenderloin, which was fine because there was quite a lot of that. I didn't serve the turkey to the guests. It wasn't fresh and it proved to be very salty.) I also did salmon. Who knew what the guests would like, and whatever they didn't eat we'd have over the next several days. I did a few russet potatoes and some sweet potatoes in foil and let them cook slowly on the grill - for more than an hour. And near the end of the grilling I did some asparagus and red and yellow bell peppers.
Day of, my wife had committed to making a regular salad. To round things out I made a pasta salad with peas that I planned to serve cold. There's a restaurant in town called The Great Impasta and they have a pasta salad with spinach pesto that I've always liked (I'm less fond of pesto made with basil), so I was trying to do something in that vein. I used both (whole wheat) rotini and bowtie pasta for contrast in color and shape. I had a big bag of frozen peas. After cooking the pasta for about six minutes I added the peas in and cooked another six minutes. I couldn't find spinach pesto in the store, so I bought tomato pesto instead. Sometimes necessity forces improvisation. It all came out pretty well.
It occurred to me that this was going to be a lot of food for dinner, so I decided to serve the grilled vegetables and sweet potatoes as appetizers and a few of the brats as appetizers as well. I made a large platter with the vegetables. The bell peppers and asparagus I put out as is. The sweet potatoes I cut into slices thicker than when you slice an egg but not so thick as a slice of bread. I left the skin on. Color-wise, I thought it was quite an attractive platter. I made a separate plate with the brats cut up, with a toothpick in each piece. That's a great appetizer with beer, though probably not great for you.
My wife was mildly horrified when she saw the platter. It's true that the asparagus was incredibly thin - the guests originally thought it was string beans. And the tips were bruised when I bought them. But I figured they'd eat the same no matter what. I knew these folks pretty well, except Suzanne, who proved to be a sweetheart, and I know what it's like at the end of a day after being in a conference the whole time. The real purpose of the food is so the alcohol doesn't feel lonely. This time around I was on the mark. The platter was a big hit and since it really was finger food, eating it helped everyone relax.
It was a delightful evening. I like it when my experiments work out. And afterward my wife complimented me for doing such a good job as host. That's incredibly rare.
* * * * *
I don't know if it's hanging around with learning technologists that encourages this, but having a good experience like this I want to know if I can replicate it. That's the obvious next step so I think I'll do something in this vein for the locals in the not too distant future. I don't know precisely when. It would be good for the heat to break first. But then from my point of view it could really be anytime.
If replication also gets a thumbs up, the next question is whether it can scale. The night of that party as I was trying to go to sleep I thought about a chain restaurant called The Sweet Potato that served an eclectic menu of the sort described above but featured grilled vegetables served cold. I told this to a colleague today and she was wildly enthusiastic about it. She said there would be great demand for something like this in town - a healthy alternative to the fast food places. Whether that can be delivered at a reasonable price, I have no idea. When I shopped at Schucks for the vegetables I used in that dinner, they were not cheap. (I think the bell peppers were 2 for $3.) So this could be a pipedream. I've done enough with learning technology in the past to know that even with wonderful small scale experiments, that's no guarantee something good will come when done at scale. Nevertheless, the idea is intriguing.
A few columns back, Joe Nocera wrote that Burger King was teetering because the private equity folks who "run it" have been using it as a cash cow. If it goes belly up in the near future, perhaps a new enterprise called The Sweet Potato can buy up the properties in a fire sale and convert them along the lines in the previous paragraph. I can imagine the IPO already.