Tuesday evening I got back from the Redesign Alliance Conference in Orlando. Since then we had a little family jaunt to Chicago – perhaps a sign of the times that the family was clearly more engaged in seeing 300 at the IMAX theater on Navy Pier than in touring the Field Museum – and both the return home from Orlando and the Chicago trip gave me a lot of time for reflection about the conference. This post contains both direct impressions and those afterthoughts.
I had an enormous sense of déjà vu throughout and I believe some of that has relevance for what I want to comment about here regarding future directions, so I’m going to bounce back and forth between the present, the past, and what I’d like to see coming. In the Orlando airport (free wireless! – almost an offset for the brutality of going through the security check-in) I read an email message from Frank Mayadas, who heads up the Alfred P. Sloan Foundation’s Anytime Anyplace Learning Program, in which he talked about the “retirement” of some of the pioneers: Burks Oakley (Burks got me started in online learning), Gary Miller (Executive Director of Penn State’s World Campus), and Roxanne Hiltz and Murray Turoff (authors of Network Nation), all big in the work of Sloan-C. That only accentuated the déjà vu, and it made me want to comment about the role that external organizations play and should play in the future, in addition to saying more about how we should do course redesign.
It’s now ten years since the conception of the SCALE Efficiency Projects, which I oversaw. As there is a reasonably good description of the history that led to those projects in that article, here I only want to add some consideration of the ethical dimension as a driver. Nowadays we recognize that for course redesign to work it must be embraced by the Central Campus Administration, and in this regard I was impressed by the remarks of Randy Smith of Ohio State in the session on Using Assessment to Achieve Other Goals. To a certain extent we had that in SCALE, with Susan Gonzo in the Randy Smith role, George Badger as the CIO before our campus had a CIO, and John Ory, director of our Center for Teaching, all on our advisory board along with Burks. Nevertheless, the authority to implement was entirely delegated to me, and so I retain the view that these projects emerged from an entrepreneurial, almost “Wild West” environment otherwise atypical of our campus, one where there was a premium on quick implementation and less concern about how this would affect future directions. (The effort in Spanish is the notable exception that proved the rule.)
The key ethical issue was that we’d already taken the Sloan Foundation’s money and had spent a good deal of it promoting and diffusing the approach we then called ALN, short for Asynchronous Learning Networks, a label coined by Frank. Diffusion was part of our mission, to be sure, but another part was that we had promised in the grant proposal to use ALN to lower the cost of instruction in an overt, demonstrable way. With the exception of my class and the class taught by my friend and colleague in Economics Larry DeBrock (he and I may not have been as expert as some of the other SCALE grantees in teaching with the technology, but unlike the other grantees we did understand what it meant to lower the cost of instruction), SCALE wasn’t doing that, and until Sloan started to put pressure on us to deliver in this dimension, nobody seemed particularly troubled about our failure to do so. Reducing the cost of instruction was not a campus goal at the time (this was a period of fiscal opulence), and further, many took offense at even a mention of this Sloan goal, seeing it as diverting us from our “true mission” – improving the quality of instruction.
On the one hand, I was uniquely suited to deliver something both doable and sensible in the cost dimension, given my ALN teaching experience and my training as an economist; on the other hand, I was a novice administrator and it seemed a bit unfair to have to internalize that ethical issue on my own. I didn’t really give much thought to the alternative – let Frank know we couldn’t deliver on cost and then renegotiate the activities to take place under the remainder of the grant. After all, I was delivering on the objectives in my own course. But had I been the PI at the start, I would have insisted on more buy-in from the Campus on the cost piece, or I would not have put the cost dimension into the proposal at all, though I do recognize that putting it in made the proposal seem more appealing. (It also should be remembered that we’re the home of PLATO. PLATO cast a huge shadow on campus at the time. We’d been doing a lot of CBT pre-Internet, and in many departments, notably Physics, some of the aims of Course Redesign had already been substantially achieved and fully internalized. I was in the awkward position of not being able to use these past successes because there wasn’t a credible way to make them appear a consequence of our current efforts.)
My insight, not particularly profound but not something we had said going in, was to tie the cost issue to high enrollment courses and ignore it elsewhere. And with that there was the realization that a combination of CyberProf or Mallard quizzing and written work in FirstClass or WebBoard was something that would be useful in many courses across different disciplines. Indeed, as I got to know Diane Musumeci who used her pioneering approach in Italian to start up our Spanish Project, we remarked both about how our technology use seemed so similar and how it seemed there were many students taking both our classes – indicating to us that they liked what we were doing.
While Sloan was clearly the major external force pushing us with ALN and on the cost issue, Burks had arranged for the University to have a membership in NLII and there (I believe I first went to NLII in January 1997) I met Bill Graves and from that connection he became one of our featured speakers at the first Faculty Summer Institute. Further, I met some folks from Virginia Tech, notably John Moore and Len Hatfield, and became aware of their Faculty Development Institute, a program that I was jealous about at the time (still am). We ultimately had a bunch of folks from Virginia Tech come out for a workshop to share lessons learned with SCALE faculty and I attended one of their Learning Workshops in Roanoke; we had quite a collegial relationship that pre-dated the cross campus relationships I now have through the CIC Learning Technology Group. Further, NLII/Educom had a Library of thought pieces that provided fodder for me as I was conceiving the Efficiency Projects. So it was a very useful conference for me and that first meeting and future NLII meetings were nurturing.
Here is a bit more history before I return to the present circumstance. I met Carolyn Jarmon in a private meeting with a publisher somewhere in Pittsburgh. George Badger and I went to this together, so it was likely late 1997 or early 1998, and there may have been some folks from one other campus. I believe Carolyn was an Educom Fellow at the time. I have no recollection whatsoever of the subject of our conversation, but I do recall seeing her again at a Sloan Conference (then held at the World Trade Center in New York), so by the time the Pew Grant Program in Course Redesign came into being, I knew her reasonably well from other interactions. While I had seen Carol Twigg speak at a couple of NLII conferences, I don’t believe I actually met her until Burks introduced us at the Sloan Conference in fall 1998, at which point she got acquainted with the work on the SCALE Efficiency Projects. Following that we had quite a dialog via phone and email about her planning tools, and I believe I offered some editorial suggestions along the lines of whether what they were doing made sense economically.
Indeed, for a brief stint I was the poster child for Course Redesign, with part of our JALN paper quoted in the plenary session given by Carolyn at the 1999 NLII conference, where the Pew program was introduced. By this time the SCALE grant had been renewed, and the Efficiency Projects continued to be important in the renewal, but there was little visibility for these projects on campus because: (1) our Ed Tech Board was not involved at all, (2) we had an Interim Provost, which was a signpost for the beginning of new leadership in all the key Campus administrative positions, and (3) efficiency was still a dirty word when discussing learning on campus. So nationally there was recognition for my work, but on campus essentially none. Talk about cognitive dissonance!
* * * * *
Seeing Carolyn right before the cocktail party at the conference, I asked her a perhaps unfair question: What’s different now with course redesign as compared to 1999, when the Pew project started? It didn’t take long for her to answer. Then, nobody believed it could be done. Now there are many believers. The attendance at the conference was a testament to that. It was quite heavily attended in spite of the fact that they were not giving out $200K a pop for redesign, as they did in their grant program in 1999. It seems mostly that people are coming now to learn how to do this. And they have a clear reason to, both because of fiscal exigency and because, especially at the community colleges but also at some of the four-year institutions, success rates need improving and those performance levels are getting much heavier scrutiny. In the sense that this is about getting others to learn what some have already done, the Course Redesign Program is about diffusion of innovation, not about innovation itself.
But I do think the world has changed substantially since 1999 in ways that are relevant for redesign and I want to discuss how some of those changes might be brought into the redesign process.
Perhaps more importantly for this piece, I too have changed. Since the Pew Program was conceived, I became the director of a hard-money (under)funded unit called the Center for Educational Technologies, and later witnessed it merge with the big campus IT organization, which increased my own visibility on campus but made learning technology look as if it were more about IT than about pedagogy, with my boss the CIO a good guy but himself not engaged with the learning issues. Under that arrangement I saw us implement an enterprise Learning Management System and watched how that system was eagerly taken up by the high enrollment courses but much less so by smaller upper level courses, and also less so by innovative faculty, who want to embrace emerging technologies. I saw us have lukewarm forays with other campus support units, notably the Library and the Center for Teaching Excellence. And I saw how the campus viewed the presentation technology we associate with “smart classrooms” as a goodie to be leveraged on a per-seat basis, so that all our large general assignment classrooms have it while most of our smaller general assignment classrooms do not.
The big lesson learned from this experience is that at large public R1 institutions such as Illinois, teaching is definitely a second fiddle enterprise at best. It therefore behooves those on campus who advocate for change in teaching and learning to speak with one voice, at the risk of being ignored entirely if they do not. But speaking with one voice is hard, very hard. It is much easier for each of us providers to pursue our separate agendas than to coalesce on a few key notions. So we have one group that cares about learning and a different group that cares about using technology to promote learning. We have yet another group that cares about large Gen Ed courses and, of course, the departments, who care about the major. Then we have interdisciplinary efforts to create minors around a common theme, and on and on. This Tower of Babel approach makes it easier for the rest of campus not to listen at all (and hence for the Provost not to allocate cash to these efforts). One wonders in this regard whether Course Redesign can act as a unifier or whether it will end up being just one more dialect.
I also changed in my understanding of learning and in my teaching experience. I was poorly read about learning in 1996 – I had read Democracy and Education but not much else – and hence I had all the earnestness of a novice learner in my own pursuit of teaching innovation. Since then I have pursued a self-directed and quite eclectic set of readings on learning and teaching, and now I find myself attracted to people who can offer me a new insight and an area unknown to me to explore. And in my own teaching I can implement novel ideas in a more nuanced way.
Since summer 2001 my appointment has been 100% time as an administrator. When I teach now it’s in an overload capacity where the Econ department gets a freebie, and hence I’ve opted to take my compensation in kind – as a way to explore my current ideas about teaching. I last taught my Intermediate Microeconomics course in the style of large class redesign in spring 2001. My most recent teaching has been in very small classes, most recently with Campus Honors students.
It’s on this basis that I want to comment about the pedagogy I heard about at the conference. There were two core ideas – quizzing, a way to motivate students by tracking their performance and allocating points for correct responses, and embedded assessment, situated feedback for students, particularly when they are struggling with some aspect of the course.
The quizzing use was featured in the presentation by Gordon Hodge in one of the Disciplinary Showcase Sessions, which, in turn, was highlighted in the opening plenary that Carol gave. Professor Hodge talked about teaching the large intro psychology course with publisher test banks uploaded into the campus course management system and then administered to students in the form of twenty-question timed quizzes, three per week; students could take them as often as they liked, with their highest score determining the points they were allocated. The approach helped the students put in the requisite time on task and thereby increase their familiarity with and understanding of the subject.
The embedded assessment idea, also highlighted in Carol’s keynote, was featured in a presentation by Candace Thille, director of the Open Learning Initiative at Carnegie Mellon, who talked about and gave a demo of how students get assessment and then feedback in the OLI software, either at junctures between presentation content or when they get stuck on other assessments. The approach helps the students keep going and not get discouraged.
There are, however, issues with each of these. On the quizzing, about a year before the SCALE Efficiency Projects, I gave an analysis of the issues (this is an old document; none of the links work) based on the economic model of the Principal and Agent. That quizzing does such a good job of addressing the student agency problem is a triumph of extrinsic assessment, an antidote to the disengagement pact that George Kuh talks about. And I must say that before the conference I heard from Donna Charlevoix-Romine, talking about the non-major students who take the intro to meteorology course she teaches, and from many others during the conference, that students in these high enrollment courses won’t put in any effort whatsoever unless they get course credit for it. So there clearly is a reason for the quizzing.
But many educators, among them the psychologist Jerome Bruner and Ken Bain, author of What the Best College Teachers Do, emphasize appealing to students via intrinsic motivation – to wit, curiosity, a desire to understand the puzzles that real life circumstances pose from the perspective of disciplinary expertise. At the conference I didn’t hear anything at all about intrinsic motivation. That was disappointing. To me it’s a natural and important question for Course Redesign to ask how one might bring in elements of intrinsic motivation and yet maintain the student commitment and the scalable approach that the quizzing offers.
In my way of thinking, intrinsic motivation enters into at least two aspects of instruction. It appears in what we have the students read and what topics we talk about in class. When I taught those Honors students I had them read The World Is Flat, Freakonomics, and Moneyball, in a course where the students were highly unlikely to take any more economics. We did cover the core models via modules I designed in Excel. But we didn’t rely on a textbook for readings on applying the models. In my view, that reliance kills motivation because the textbook examples invariably seem artificial. My colleague Larry DeBrock told me recently that the thing his Executive MBA students liked best about his class was that he would email them PDFs of articles from the Wall Street Journal, the New York Times, or other sources, with one or two lines of annotation from him. This is a similar idea but with shorter, quite topical pieces. Take a look at the sidebar of this blog at the top item, called Pieces I Enjoyed Reading (sometimes that chokes; it gives the most recent entries from my del.icio.us tag Good_Reads). This is an attempt to introduce something along these lines in a very light-handed way.
Can this work in a large class setting and promote the outside reading we’d like to see by our students? I don’t know. Perhaps it would be better to take such articles or multimedia content from online repositories like NPR, The NewsHour, and elsewhere and then design interactive quiz content around that, with straight subject matter presentation (followed by more quizzing) aimed at illustrating this online content, which has greater production value than we’re likely able to deliver on our own and which should pique the student curiosity. I don’t know if this will work either, but it seems worth trying. Those repositories didn’t exist in 1999. Leveraging them would be something new.
Intrinsic motivation also enters via “clever assignments,” experiential learning, and classroom experiments. The first assignment I gave to those Honors kids was for each of them to identify Principles of Economics textbooks that are in the top 10 by market share, with each student receiving 10 points of credit per book if they were the sole provider of the title and no credit at all if the title was also offered up by another classmate. The assignment worked like a charm the first time I did this, when I had 15 students. The outcome was that they identified all the books in the top 10 and then some; one student earned 10 points, but otherwise all the titles that were submitted came in duplicates, and then they had to puzzle over why they put in effort but (except for that one student) got no credit for their travails. This assignment was my introduction to the core idea that economics is about incentives. It was a great introduction. I had them hooked for the rest of the course. Is there a way to do something similar in a high enrollment course? Again, I don't know, but it seems worth investigating.
The approach that Professor Hodge discussed has a certain grimness to it – it’s grim in how it motivates students, it’s grim in the type of information he has the students learn, and it’s grim in the amount of effort supplied in addressing the learning issues. One would hope that we could do something more uplifting. A focus on intrinsic motivation is uplifting. Further, as David Wright brought up in a question during a session on Student Readiness for Course Redesign, which I attended because my friend Steve Acker was a member of the panel, we really should be about the education of our students; training is not our primary goal. When I was about 10 and in fifth or sixth grade, two or three other students and I got to work apart from the rest of the class with a programmed book for learning grammar – first a presentation of the rule, then a question on it, and then a response: ring, rang, rung; bring, brought, brought. A course based purely on quizzing conveys this notion of learning, a notion associated with training, even if the course focuses on more adult topics and a subject suited for study at the college level. Education, in contrast, includes a notion of self-directed inquiry reshaping the learner’s world view. Where is the self-direction in the quizzing?
One reason for the grimness, one I’m quite sympathetic with, is that the design and implementation of the content is enormously time consuming. Doing something more along the lines I’ve touched upon might be possible in principle, but somebody has to do the work, and the people already engaged with redesign are working harder than they should as it is. What we saw then in my Intermediate Microeconomics implementation, and see now in Professor Hodge’s course, is what can reasonably be expected to be implemented given the resource constraint. That answer would have satisfied me in 1999. I find it less convincing now. Isn’t there some way to do better? Here an economist like me would naturally look to the market to solve the problem: to the extent that the authoring effort involved in making interesting online content represents a fixed cost, an economist would believe that by distributing that cost over more and more student users, one can implement a better solution.
My friend Sharon Pitt was quick to point out that the Redesign Alliance is quite different from ELI in its view of the role publishers might play (and in a host of other *cultural* matters that pertain to how information technology can enhance learning, as well as the role that learning technologists should play in the equation). But apart from the scaling issue there is the question of whether the market demands the type of content I’m calling for, and since I’m not yet sure it does, my view on this is a bit different from that of Steve Acker and John Harwood, my CIC colleagues who have been the strongest advocates of working with the publishers to come up with a new model for distributing their content online.
My view is influenced heavily by the recent Economics Principles text by Krugman and Wells, which appeared with some fanfare, including an online homework partnership done in conjunction with Paul Romer’s company Aplia. The Krugman name should be familiar even to the non-economist. He is an Op-Ed columnist for the New York Times and the author of many interesting columns with an economic basis, such as this one on California and fighting global warming (now all users with a .edu email can access this content), and indeed a collection of his columns on economics issues would make for great reading in a Principles class. The problem with the Krugman-Wells textbook is that in spite of the very high minded approach articulated in the Preface, the discussion of real world examples in the book becomes subservient to the topic coverage; consequently, in reading the book one flits from one example to another, examples connected not to each other but only in that they support the underlying topic being considered.
Professor Hodge talked about topic coverage too in his presentation, and there he said that one of the main goals was to cover every topic in the book so the students would be well prepared for the next course in psychology. In my view, extensive topic coverage is at odds with intrinsic motivation, which clamors for depth on the matter that is of interest – a full treatment of that and then some. Further, to promote intrinsic interest one should make the theoretical ideas that we instructors present subservient to the real world issues that provide the motivation. Textbooks are not written this way because they are chosen by instructors who are looking to cover topics. This is a vicious cycle that needs to be cut before the market can provide a solution we can use. In the meantime, either I’m too idealistic in my views and in fact we’ll never get there – redesign with intrinsic motivation as an essential element being like the search for the Holy Grail – or we need pilot projects that may not themselves be sustainable in giving a good return on the author’s time commitment, but that prove the concept can work from the point of view of the student/reader and thereby point to where the market should be headed.
How can we move our thinking to consider that? I don’t know, but I wish this were a question that the Redesign Alliance would take up. Carol and Carolyn can’t do the work themselves, but they could jawbone on this point and encourage some of us to contribute in this dimension.
Let me turn to the other pedagogic idea, embedded assessment. Here I believe the key issue is that the projects showcased at the conference, such as OLI, are 100% computer based instruction with automated assessment only. Embedded assessment has not been implemented in a course redesign model where some of the work students do is to construct online objects themselves, and where the embedded assessment takes the form of coaching that the instructor gives to the students in the process of completing this online work. In other words, at present the ideas Candace Thille was promoting are unlike the ideas I’ve heard in Writing Across the Curriculum workshops on how to respond to student work, although in my view those should be two sides of the same coin.
Is it possible to have a scalable approach to student writing (or student created multimedia) that makes sense in the context of a course redesign? If it is, isn’t this the way to introduce some student self-direction to the learning? And isn’t it what we expect to happen in well taught courses that have much lower enrollments? So if we are to envision some continuity in approach between the courses that are targets for redesign and these smaller upper level courses, don’t the redesign courses have to embrace at least a bit of this? And if the answer is no, we really don’t need that continuity, then don’t we get stuck with course redesign as a niche, admittedly a niche that entails a large number of enrollments, but a niche nonetheless, one that either will get pushed down to the high schools eventually or which the kids who can get the AP courses will bypass?
These are conundrums. I wish I had the answers. But I don’t. All I’m convinced about now is that there should be some effort put into solving these puzzles.
I had an enormous sense of déjà vu throughout and I believe some of that has relevance for what I want to comment about here regarding future directions, so I’m going to bounce back and forth between the present, the past, and what I’d like to see coming. In the
It’s now ten years since the conception of the SCALE Efficiency Projects, which I oversaw. As there is a reasonably good description of the history that led to those projects in that article, here I only want to add some consideration of the ethical dimension as a driver. Nowadays, we recognize that for course redesign to work there must be an embrace of it by the Central Campus Administration, and in this regard I was impressed by the remarks of Randy Smith of Ohio State in the session on Using Assessment to Achieve Other Goals. To a certain extent we had that in SCALE, with Susan Gonzo in the Randy Smith Role, George Badger as the CIO before our campus had a CIO, and John Ory director of our Center for Teaching all on our advisory board along with Burks. Nevertheless, the authority to implement was entirely delegated to me and so I retain the view that these projects emerged from an entrepreneurial, almost “Wild West” environment otherwise atypical of our campus, where there was a premium for quick implementation and less of a concern about how this would impact future directions. (The effort in Spanish is the notable exception that proved the rule.)
The key ethical issue was that we’d already taken the Sloan Foundation’s money and had spent quite a deal of it promoting and diffusing the approach we then called ALN, an acronym short for Asynchronous Learning Networks, a label coined by Frank. Diffusion was part of our mission, to be sure, but another part was that we had promised in the grant proposal to use ALN to lower the cost of instruction in an overtly obvious way. With the exception of my class and the class taught by my friend and colleague in Economics Larry DeBrock (he and I may not have been as expert as some of the other SCALE grantees in terms of teaching with the technology, but unlike the other grantees we did understand what it meant to lower the cost of instruction) SCALE wasn’t doing that and until Sloan started to put pressure on us to deliver in this dimension, nobody seemed to be particularly troubled about our failure to do so. Reducing the cost of instruction was not a campus goal at the time (this was a period of fiscal opulence) and further many took offense from even mention of this Sloan goal diverting us from our “true mission” – improving the quality of instruction.
On the one hand, I was uniquely suited to deliver something both do-able and sensible in the cost dimension, given my ALN teaching experience and being trained as an economist, while on the other hand I was a novice administrator and it seemed a bit unfair to have to internalize that ethical issue on my own. I didn’t really give much thought to the alternative – let Frank know we couldn’t deliver on cost and then renegotiate the activities to take place under the remainder of the grant. After all, I was delivering on the objectives in my own course. But had I been the PI at the start I would have insisted in more buy-in from the Campus on the cost piece or I would not have put in the cost dimension into the proposal, though I do recognize that putting it in made the proposal seem more appealing. (It also should be remembered that we’re the home of Plato. Plato cast a huge shadow on campus at the time. We’d been doing a lot of CBT pre-Internet and in many departments, notably Physics, some of the aims of Course Redesign had already been substantially achieved and already fully internalized and I was in the awkward position of not being able to use these past successes because there wasn’t a credible way to make them appear a consequence of our current efforts.)
My insight, not particularly profound but not something we had said going in, was to tie the cost issue to high enrollment courses and ignore it elsewhere. And with that there was the realization that a combination of CyberProf or Mallard quizzing and written work in FirstClass or WebBoard was something that would be useful in many courses across different disciplines. Indeed, as I got to know Diane Musumeci who used her pioneering approach in Italian to start up our Spanish Project, we remarked both about how our technology use seemed so similar and how it seemed there were many students taking both our classes – indicating to us that they liked what we were doing.
While Sloan was clearly the major external force pushing us with ALN and on the cost issue, Burks had arranged for the University to have a membership in NLII and there (I believe I first went to NLII in January 1997) I met Bill Graves and from that connection he became one of our featured speakers at the first Faculty Summer Institute. Further, I met some folks from Virginia Tech, notably John Moore and Len Hatfield, and became aware of their Faculty Development Institute, a program that I was jealous about at the time (still am). We ultimately had a bunch of folks from Virginia Tech come out for a workshop to share lessons learned with SCALE faculty and I attended one of their Learning Workshops in Roanoke; we had quite a collegial relationship that pre-dated the cross campus relationships I now have through the CIC Learning Technology Group. Further, NLII/Educom had a Library of thought pieces that provided fodder for me as I was conceiving the Efficiency Projects. So it was a very useful conference for me and that first meeting and future NLII meetings were nurturing.
Here is a bit more history before I return to the present circumstance. I met Carolyn Jarmon in a private meeting with a publisher somewhere in
Indeed, for a brief stint I was the poster child for Course Redesign, with part of our JALN paper quoted in a plenary session given by Carolyn at the 1999 NLII conference, where the Pew program was introduced. By this time the SCALE grant had been renewed, and the Efficiency Projects continued to be important in the renewal, but there was little visibility for these projects on campus because: (1) our Ed Tech Board was not involved at all, (2) we had an Interim Provost, a signpost for the beginning of new leadership in all the key Campus administrative positions, and (3) efficiency was still a dirty word when discussing learning on campus. So nationally there was recognition for my work but on campus essentially none. Talk about cognitive dissonance!
* * * * *
Seeing Carolyn right before the cocktail party at the conference, I asked her a perhaps unfair question: What’s different now with course redesign as compared to 1999 when the Pew project started? It didn’t take long for her to answer. Then nobody believed it could be done. Now there are many believers. The attendance at the conference was a testament to that. It was quite heavily attended in spite of the fact that they were not giving out $200K a pop for redesign as they did in their grant program in 1999. It seems mostly that people are coming now to learn how to do this. And they have a clear reason to, both because of fiscal exigency and because, especially at the community colleges but also at some of the four-year schools, success rates need to improve and those performance levels are getting much heavier scrutiny. In the sense that this is about getting others to learn what some have already done, the Course Redesign Program is about diffusion of innovation, not about innovation itself.
But I do think the world has changed substantially since 1999 in ways that are relevant for redesign and I want to discuss how some of those changes might be brought into the redesign process.
Perhaps more importantly for this piece, I too have changed. Since the Pew Program was conceived I became the director of a hard money (under) funded unit called the Center for Educational Technologies and later witnessed it merge with the big campus IT organization, increasing my own visibility on campus but making learning technology look like it is more about IT than about pedagogy, with my boss the CIO, a good guy but not himself engaged with the learning issues. Under that arrangement I saw us implement an enterprise Learning Management System and watched how that system was eagerly taken up by the high enrollment courses but much less so by smaller upper level courses, and also less so by innovative faculty, who want to embrace emerging technologies. I saw us have lukewarm forays with other campus support units, notably the Library and the Center for Teaching Excellence. And I saw how the campus viewed the presentation technology we associate with “smart classrooms” as a goodie to be leveraged on a per seat basis, so that all our large general assignment classrooms have it while most of our smaller general assignment classrooms do not.
The big lesson learned from this experience is that at large public R1 institutions such as
I also changed in my understanding about learning and in my teaching experience. I was poorly read about learning in 1996 – I had read Democracy and Education but not much else – and hence I had all the earnestness of a novice learner in my own pursuit of teaching innovation. Since then I have pursued a self-directed and quite eclectic set of readings on learning and teaching, and now I find myself attracted to people who can offer me a new insight and an area unknown to me to explore. And in my own teaching I can implement novel ideas in a more nuanced way.
Starting in summer 2001 my appointment has been 100% time as an administrator. When I teach now it’s in an overload capacity where the Econ department gets a freebie, and hence I’ve opted to take my compensation in kind – as a way to explore my current ideas about teaching. I last taught my Intermediate Microeconomics course in the style of large class redesign in spring 2001. My most recent teaching has been in very small classes, most recently with Campus Honors students.
It’s on this basis that I want to comment about the pedagogy I heard about at the conference. There were two core ideas – quizzing, a way to motivate students by tracking their performance and allocating points for correct responses, and embedded assessment, situated feedback for students particularly when they are struggling with some aspect of the course.
The quizzing use was featured in Gordon Hodge’s presentation in one of the Disciplinary Showcase Sessions, which in turn was highlighted in the opening plenary that Carol gave. Professor Hodge talked about teaching the large intro psychology course with publisher test banks uploaded into the campus course management system and administered to students as twenty-question timed quizzes, three per week. Students could take each quiz as often as they liked, with their highest score determining the points they were allocated. The approach helped the students put in the requisite time on task and thereby increase their familiarity with and understanding of the subject.
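The best-score rule described above is simple to state but worth seeing concretely. Here is a minimal sketch, in hypothetical Python (the function name and data layout are my own invention, not anything from Professor Hodge’s actual course management system), of how only the highest attempt on each quiz counts toward a student’s points:

```python
def quiz_points(attempts_by_quiz):
    """attempts_by_quiz maps each quiz id to the list of scores
    (here 0-20, for twenty-question quizzes) a student earned
    across repeated attempts. Only the best attempt per quiz counts."""
    return sum(max(scores) for scores in attempts_by_quiz.values())

# A student who retakes quizzes keeps only the best attempt on each one.
student = {"week1_q1": [12, 17, 20], "week1_q2": [15], "week1_q3": [9, 14]}
print(quiz_points(student))  # 20 + 15 + 14 = 49
```

The design choice behind the rule is visible in the sketch: retaking a quiz can never lower a student’s score, so the incentive is always to try again, which is exactly the time-on-task effect the approach is after.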
The embedded assessment idea, also highlighted in Carol’s keynote, was featured in a presentation by Candace Thille, director of the Open Learning Initiative at Carnegie Mellon, who talked about and gave a demo of how students get assessment and then feedback in the OLI software, either at junctures between presentation content or when they get stuck on other assessments. The approach helps the students keep going and not get discouraged.
There are, however, issues with each of these. On the quizzing, about a year before the SCALE Efficiency Projects, I gave an analysis of the issues (this is an old document; none of the links work) based on the economic model of the Principal and Agent. That quizzing does such a good job of addressing the student agency problem is a triumph of extrinsic assessment, an antidote to the disengagement pact that George Kuh talks about. And I must say that I heard from Donna Charelvoix-Romine before the conference, talking about the non-major students who take the intro meteorology course she teaches, and from many others during the conference, that students in these high enrollment courses won’t put in any effort whatsoever unless they get course credit for it. So there clearly is a reason for the quizzing.
But many educators, among them the psychologist Jerome Bruner and Ken Bain, author of What the Best College Teachers Do, emphasize an appeal to students via intrinsic motivation – to wit, curiosity, a desire to understand the puzzles that real life circumstances pose from the perspective of disciplinary expertise. At the conference I didn’t hear anything at all about intrinsic motivation. That was disappointing. To me it’s natural and important for Course Redesign to ask how one might bring in elements of intrinsic motivation and yet maintain the student commitment and the scalable approach that the quizzing offers.
In my way of thinking, intrinsic motivation enters into at least two aspects of instruction. It appears in what we have the students read and what topics we talk about in class. When I taught those Honors students I had them read The World Is Flat, Freakonomics, and Moneyball, in a course where the students were highly unlikely to take any more economics. We did cover the core models via modules I designed in Excel. But we didn’t rely on a textbook for readings on applying the models. In my view, that kills motivation because the textbook examples invariably seem artificial. My colleague Larry DeBrock told me recently that the thing his Executive MBA students liked best about his class is that he would email them pdfs of articles from the Wall Street Journal or the New York Times or other sources with one or two lines of annotation from him. This is a similar idea but with shorter pieces, ones that are quite topical. Take a look at the sidebar of this blog, in the top item called Pieces I Enjoyed Reading (sometimes that chokes; it gives the most recent entries from my del.icio.us tag Good_Reads). This is an attempt to introduce something along these lines in a very light handed way.
Can this work in a large class setting and promote the outside reading we’d like to see by our students? I don’t know. Perhaps it would be better to take such articles or multimedia content from online repositories like NPR, The NewsHour, and elsewhere and then design interactive quiz content around that, with straight subject matter presentation (followed by more quizzing) aimed at illustrating this online content, which has greater production value than we’re likely able to deliver on our own and which should pique the student curiosity. I don’t know if this will work either, but it seems worth trying. Those repositories didn’t exist in 1999. Leveraging them would be something new.
Intrinsic motivation also enters via “clever assignments,” experiential learning, and classroom experiments. The first assignment I gave to those honors kids was for each of them to identify Principles of Economics textbooks that are in the top 10 by market share, with each student receiving 10 points of credit per book if they were the sole provider of the title and no credit at all if the title was offered up by another classmate as well. The assignment worked like a charm the first time I did this, when I had 15 students. The outcome was that they identified all the books in the top 10 and then some; one student earned 10 points, but otherwise all the titles that were submitted came in duplicates, and then they had to puzzle over why they put in effort but (except for that one student) got no credit for their travails. This assignment was my introduction to the core idea that economics is about incentives. It was a great introduction. I had them hooked for the rest of the course. Is there a way to do something similar in a high enrollment course? Again, I don't know, but it seems worth investigating.
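The sole-provider scoring rule in that assignment is itself a little incentive mechanism, and it can be sketched in a few lines. This is a hypothetical illustration in Python (the function and the sample names are mine, not anything actually used in the class): a title earns its submitter 10 points only if no classmate submitted the same title.

```python
from collections import Counter

def score_submissions(submissions):
    """submissions maps each student to the set of textbook titles
    they turned in. A title is worth 10 points to its submitter
    only when that student is the sole provider of the title."""
    counts = Counter(title for titles in submissions.values()
                     for title in titles)
    return {student: sum(10 for t in titles if counts[t] == 1)
            for student, titles in submissions.items()}

# With popular titles duplicated across classmates, most effort earns nothing.
subs = {"ann": {"Title A", "Title B"},
        "bob": {"Title A"},
        "cam": {"Title C"}}
print(score_submissions(subs))  # ann and cam each earn 10; bob earns 0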
The approach that Professor Hodge discussed has a certain grimness to it – it’s grim in how it motivates students, it’s grim in the type of information he has the students learn, and it’s grim in the amount of effort supplied in addressing the learning issues. One would hope that we could do something more uplifting. A focus on intrinsic motivation is uplifting. Further, as David Wright brought up in a question during a session on Student Readiness for Course Redesign, which I attended because my friend Steve Acker was a member of the panel, we really should be about the education of our students; training is not our primary goal. When I was about 10 and in fifth or sixth grade, two or three other students and I got to work apart from the rest of the class with a programmed book for learning grammar – first presentation of the rule, then a question on that, and then response – ring, rang, rung; …………bring, brought, brought. A course based purely on quizzing conveys this notion of learning, a notion associated with training, even if the course focuses on more adult topics and in a subject suited for study at the college level. Education, in contrast, includes a notion of self-directed inquiry reshaping the learner’s world view. Where is the self-direction in the quizzing?
One reason for the grimness, one I’m quite sympathetic with, is that the design and implementation of the content is enormously time consuming. Doing something more along the lines I’ve touched upon might be possible as a matter of principle, but somebody has to do the work, and the people already engaged with redesign are working harder than they should be as it is. What we saw then in my Intermediate Microeconomics implementation, and see now in Professor Hodge’s course, is what can reasonably be expected given the resource constraint. That answer would have satisfied me in 1999. I find it less convincing now. Isn’t there some way to do better? Here an economist like me would naturally look to the market to solve the problem, and to the extent that the authoring effort involved in making interesting online content represents a fixed cost, an economist would believe that by distributing that cost over more and more student users, one can implement a better solution.
My friend Sharon Pitt was quick to point out that the Redesign Alliance is quite different from ELI in its view of the role publishers might play (and in a host of other *cultural* matters that pertain to how information technology can enhance learning as well as the role that learning technologists should play in the equation). But apart from the scaling issue there is the question of whether the market demands the type of content I’m calling for and since I’m not yet sure it does my view on this is a bit different from that of Steve Acker and John Harwood, my CIC colleagues who have been most strong about working with the publishers to come up with a new model for distributing their content online.
My view is influenced heavily by the recent Economic Principles text by Krugman and Wells, which appeared with some fanfare, including an online homework partnership done in conjunction with Paul Romer’s company Aplia. The Krugman name should be familiar even to the non-economist. He is an Op-Ed columnist for the New York Times and the author of many interesting columns with an economic basis, such as this one on California and fighting Global Warming (now all users with a .edu email can access this content), and indeed a collection of his columns on economics issues would make for great reading in a Principles class. The problem with the Krugman-Wells textbook is that in spite of the very high minded approach articulated in the Preface, the discussion of real world examples in the book becomes subservient to the topic coverage, and consequently in reading the book one flits from one example to another that are connected not in themselves but only in that they support the underlying topic being considered.
Professor Hodge talked about topic coverage too in his presentation, and there he said that one of the main goals was to cover every topic in the book so the students would be well prepared for the next course in psychology. In my view, extensive topic coverage is at odds with intrinsic motivation, which clamors for depth on the matter that is of interest – a full treatment of that and then some. Further, to promote intrinsic interest one should make the theoretical ideas that we instructors present subservient to the real world issues that provide the motivation. Textbooks are not written this way because they are chosen by instructors who are looking to cover topics. This is a vicious cycle that needs to be cut before the market can provide a solution we can use. In the meantime, either I’m too idealistic in my views and in fact we’ll never get there – redesign with intrinsic motivation as an essential element is like the search for the Holy Grail – or we need pilot projects that may not themselves be sustainable in giving a good return on the author’s time commitment, but that prove the concept can work from the point of view of the student/reader and point to where the market should be headed.
How can we move our thinking to consider that? I don’t know, but I wish this were a question taken up by the Redesign Alliance. Carol and Carolyn can’t do the work themselves, but they could jawbone on this point and encourage some of us to contribute in this dimension.
Let me turn to the other pedagogic idea, embedded assessment. And here I believe the key issue is that the projects showcased at the conference, such as OLI, are 100% computer-based instruction with automated assessment only. The idea has not been implemented in the course redesign model where some of the work students do is to construct online objects themselves, and where the embedded assessment takes the form of coaching that the instructor gives to the students in the process of completing this online work. In other words, at present the ideas Candace Thille was promoting are unlike the ideas I’ve heard in Writing Across the Curriculum workshops on how to respond to student work, although in my view those should be two sides of the same coin.
Is it possible to have a scalable approach to student writing (or student created multimedia) that makes sense in the context of a course redesign? If it is, isn’t this the way to introduce some student self-direction to the learning? And isn’t it what we expect to happen in well taught courses that have much lower enrollments? So if we are to envision some continuity in approach between the courses that are targets for redesign and these smaller upper level courses, don’t the redesign courses have to embrace at least a bit of this? And if the answer is no, we really don’t need that continuity, then don’t we get stuck with course redesign as a niche, admittedly a niche that entails a large number of enrollments, but a niche nonetheless, one that either will get pushed down to the high schools eventually or which the kids who can get the AP courses will bypass?
These are conundrums. I wish I had the answers. But I don’t. All I’m convinced about now is that there should be some effort put into solving these puzzles.