As this is another very long post, I've made a Google Doc version here.
This is an odd post. While it will mostly be about developments in ed tech during my personal history with it (roughly from 1995 to 2010), where those developments had at most mild influence on the profession as a whole, and quite possibly no influence whatsoever, it is also meant as an extension of a critique of Learning Innovation and the Future of Higher Education by Joshua Kim and Edward J. Maloney. I know Josh through the Educause Learning Technology Leadership Institute. He attended it when it was held in Burlington, Vermont, in summer 2009. I was a faculty member at that institute, the last of three times I served in that role. Apparently what I said made some impression on Josh at the time. On a few occasions, he subsequently wrote kind things about me in his blog and then in his Inside Higher Ed column. I feel some obligation to return the favor, although I've been retired for upward of 10 years and am more than a little outside the current conversation.
The pandemic manifested soon after the book was released. In the interim, Josh sent a message to the Leading Change Institute list (formerly the Frye Institute list) in an effort to promote the book. Josh had previously sent me a personal message about it. In response, I ordered the book and read the introduction (first chapter), and based on that I sent Josh extensive comments. I think they are still appropriate. But I said I wouldn't read the rest of the book for a while. I felt I wasn't part of the intended audience, because of what I wrote at the end of the previous paragraph. I have recently picked up the book again and am about halfway through it now. My reaction follows. Before getting to that, however, I want to note there was no further discussion of the book on the LCI list. Josh may have gotten private emails from individual LCI members; I have no way of knowing that. My sense is that list members who were working full time became so overwhelmed with the prospect of the pandemic that they had no mental bandwidth left for considering the book. We may now have reached the other end of the tunnel. I suspect that most will be thinking along these lines: what from before should be retained, and which adjustments made during the pandemic will be part of the new normal? My guess is that this framing would give short shrift to what came before. Reading this book is a way to give that its due.
Let me begin with what I liked most in what I read, which challenged my prior assumptions about why innovation with learning happens in higher education. The authors argue that it is the maturity of the science of learning, with broad agreement on the fundamentals of what is needed to advance learning on campus, that makes the mission of Centers for Teaching and Learning clear: to catalyze and support innovation in teaching and learning on campus. That maturity also brings the mission in line with campus administration as well as with the various academic departments. This is an argument not just about what should be done now. It is also an argument that this effort can be sustained, because there is agreement on the fundamentals. In contrast, I believed that it was novelty in the technology that inspired innovator and early adopter faculty to embrace learning innovation, which was considerable on my campus, particularly in the mid to late 1990s. But once the technology itself became more ho-hum, additional innovation would be more of a struggle and perhaps peter out entirely. Further, I believed and continue to believe that - if it ain't broke don't fix it - so innovation is an explicit or at least tacit indicator that some things weren't working as well as they should have been with teaching and learning prior to the innovation. But campus administrators were loath to admit that, especially if it seemed like what was broken was specific to their campus. Doing so would generate bad press that they did not need. If, as the book argues, campus administration will now truly embrace learning innovation, perhaps they've solved this issue of bad press by casting things forward rather than toward the past. That really should be considered a major accomplishment.
I did want to note that maybe it is easier to do this at Dartmouth and Georgetown (the home campuses of the book's authors) than it is at Illinois, where I spent my career. Private universities may be under less scrutiny this way and the smaller scale at which those campuses operate may enable a more coherent centralized approach. We are very decentralized at Illinois, a virtue and a curse simultaneously. I would expect that the main hypothesis of the book does not yet apply to Illinois, though that's more a guess than anything else and is biased by my prior experience, which is dated. I know far less about the current situation, so could be quite wrong in my assessment.
Next, and I mean this paragraph to be a bit tongue in cheek, reading this book made me feel very old. Most if not all the references I will give below are from an earlier time than is contemplated in the book. I think many of those references still have relevance, and might be read for that rather than merely as historical curiosities. Further, when I did campus ed tech, many of my peers in the CIC Learning Technology Group (the CIC is now called the Big Ten Academic Alliance) were previously regular faculty members who then embraced ed tech administration, while other peers did not have this faculty background. As for me, I started by running a small soft money organization, SCALE, that later became part of a still small organization called the Center for Educational Technologies. Though an administrator, I felt entrepreneurial and innovative in this role. After CET merged with the larger IT organization, that feeling gradually eroded till it was pretty much gone. I could still be a strategic thinker about ed tech matters. But regarding getting things done, where before I could be nimble, after it felt like walking in glue. My perspective is informed by this background. Those who are junior to me, who have quite different formative experiences, are better able to see the possibilities that lie in front of us.
Let me turn to two topics that appear in the book, the Learning Management System (LMS) and cost reduction in instruction that might manifest from innovation and effective use of educational technology. I'll lead off with this post from long ago by Leigh Blackall.
Die LMS Die! You too PLE!
The post exemplifies that many ed tech professionals at the time were against the LMS, because it seemed antagonistic to real learning. I will explore that some below, but first it is worth asking why the LMS prevailed in spite of these criticisms. Here is a partial listing of such explanations.
- Legal Restrictions - In the U.S., the two biggies are copyright and FERPA (student privacy). If you read Blackall's post, there is a distinction made between closed systems, of which the LMS is one, versus open systems, which learning technologists of Blackall's ilk preferred. On big campuses and/or campuses with deep pockets (Illinois is one of those), there is a great fear of liability and thus efforts are taken to reduce exposure. In this arena, the LMS is a safety play. Going with an open system means taking on some liability risk (nobody knows how much). Further, some instructors are ignorant of these risks, while others who are innovative are more willing to take them on than the campus is. (And with FERPA, the penalties for noncompliance fall on the institution, not the offending faculty member.) Thus, institutions have a preference for closed systems, while innovative faculty members might opt out and do some alternative on their own.
- Back-End Technical Considerations - The learning system, whatever it may be, needs to be integrated with the student information system, which records who is registered for what classes. At the beginning of the semester, registration information needs to flow from the SIS to the learning system. At the end of the semester, there might be grade information that flows in the other direction. These flows need to be highly reliable. From an IT perspective, it is better to have one enterprise learning system for this than to have a host of alternatives, each of which would require the same type of integration with the SIS. This too favors the LMS.
- Political Economy Considerations - Funding for IT units at universities largely follows the business cycle. During a recession the CIO, like all other campus administrators, will be asked to engage in what is euphemistically called "belt tightening." In this situation, the CIO will prefer to be supporting mainly applications that have a very large audience, so service cuts there would be very unpopular. In turn, this creates a preference for large applications even when budgets are more friendly. It also means that services that are targeted for termination, then become something of a political football and users lobby hard for such services to be continued for a while rather than be terminated (what they deem as) prematurely. This too favors one large LMS.
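The SIS integration described in the back-end bullet above can be made concrete with a minimal sketch. Everything here is invented for illustration (the function and field names are mine, not any real campus API); the point is only that roster data flows one way at the start of term and grades flow back at the end.

```python
# Hypothetical sketch of SIS <-> learning-system data flows.
# All names (sync_roster, push_grades, the record fields) are illustrative.

def sync_roster(sis_enrollments, lms_roster):
    """Start-of-term flow: the SIS is authoritative for who is enrolled."""
    # Adds: every SIS enrollment appears in the LMS roster.
    for record in sis_enrollments:
        lms_roster.setdefault(record["course"], set()).add(record["student"])
    # Drops: students no longer in the SIS are removed from the LMS.
    for course, students in lms_roster.items():
        current = {r["student"] for r in sis_enrollments if r["course"] == course}
        students.intersection_update(current)
    return lms_roster

def push_grades(lms_gradebook):
    """End-of-term flow in the other direction: LMS grades back to the SIS."""
    return [{"course": c, "student": s, "grade": g}
            for c, grades in lms_gradebook.items()
            for s, g in grades.items()]
```

Multiplying this integration work across a host of niche learning tools, each needing its own reliable version of these two flows, is exactly the burden the IT shop wants to avoid.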
And, of course, many students and instructors who currently use the LMS will point to the convenience benefit it affords. In case it's not evident, however, the convenience benefit and the above are factors that taken together support lock-in to the LMS but are orthogonal to teaching and learning considerations. Given that, we should inquire about what teaching and learning considerations embrace of the LMS might block. It's a large question to ask and I will take it on in several steps.
Perhaps surprisingly, the first step will be to take a look to the past. In the SCALE Library, there was a paper by William H. Geoghegan called Whatever Happened to Instructional Technology that might still be worth reading. I recall that it introduced me to the ideas of Everett Rogers on diffusion curves and the various players who make diffusion of an innovation happen - the innovators, the early adopters, the majority, etc. Yet the paper itself was about why instructional technology didn't seem to be diffusing faster than it actually was. It was one of the first papers I read about ed tech, and it took a historical view. There is a lesson in that.
At Illinois when I started as a SCALE administrator, in spring/summer 1996, roughly a year after I had started to play with online learning, quite a different thing then from a functional viewpoint, there was a very large past that cast a big shadow on what we were trying to do, though I didn't realize that immediately. Plato had been a huge success at Illinois and gave the university an international reputation for computer assisted instruction. (I believe that Arthur C. Clarke made the HAL computer in 2001 come from Illinois as a consequence of the real-world Plato system.) Plato was internalized in the instructional practices of many STEM departments at Illinois. It therefore may not be surprising that with the advent of the Web some of the early programs for instruction that came out of my campus, notably CyberProf and Mallard, were in some sense Plato derivatives. I became the first instructor at Illinois outside the College of Engineering to use Mallard in my teaching, where before I had no experience with Plato. The Internet opened up this smart homework with auto-grading functionality to audiences that hadn't experienced it before. It caught on and became quite popular on campus and on other campuses too.
Illinois was not the only place where such tools developed. At Michigan State a system called CAPA also came out of the Physics Department. (The CyberProf developer was a professor in Physics.) CAPA too spread to other campuses and became one of the first systems, to my knowledge, where objects designed for the CAPA system could be reviewed and then shared by instructors across the campuses that supported CAPA. With that the system got a name change to LON-CAPA. As for CyberProf and Mallard, their histories were determined largely by the level of support these applications received, but also by the Tech Transfer policy in play at Illinois during the late 1990s. The campus had bungled the licensing of Mosaic, and mistakenly looked to the new Web tools as potential money makers, so wouldn't let the developers bring them to market. But further, the CyberProf tool had a lot of functionality that the science departments wanted, yet from a performance point of view it was not very good. During sessions near to when homework assignments were due, students would experience a lot of slowness in the system and that produced frustration. Eventually, Physics built an alternative with similar functionality but which was more efficient under the hood. That alternative is called Tycho and is still in use today, but as far as I know it is only the Physics department that uses it. (An interesting aside is that the original developer of Tycho was previously a Plato developer.) Both Biology and Chemistry at Illinois moved to LON-CAPA, which is now supported by ATLAS, the IT group within the College of LAS.
In other words, at Illinois none of the big science departments use the campus-supported LMS. (That is now Canvas, where previously it was Blackboard, and before that WebCT.) And this is because from their point of view the quiz/homework functionality in the campus-supported LMS is rinky-dink.
The Mallard history is a little different because while it did start in the Department of Electrical and Computer Engineering, it eventually got substantial use in non-science LAS departments. For example, it was heavily utilized in foreign language courses, particularly the large introductory classes. So an argument could be made that the campus should have supported Mallard indefinitely. But that didn't happen. The programmer who was keeping Mallard afloat eventually came to work for me in CET, at the request of the CIO, who didn't want to provide funds to what had been a faculty development project. The arrangement with the programmer worked for a while, but eventually she wanted to move on to other work, so she could continue to learn about new developments in educational programming. I had no resources to bring in another programmer to replace her. So she wrote a version of Mallard that was as close to bug free as she could make it and that would remain frozen thereafter. Mallard survived that way for many years, even after we had moved to an enterprise LMS. But it couldn't sustain this way forever. Eventually the instructors who used Mallard had to move to the LMS. They were none too happy about this, but there really was no alternative.
One more point should be made before completing this discussion of the first step. A vision arose somewhere in the mid 2000s about having smart tools like Mallard be able to plug into the LMS as long as each followed the standards as specified by the IMS Consortium. This vision was an attempt to get the best of both possible worlds. The vision should have also helped in migrating from one LMS to another. But, to my knowledge, this vision failed. The incentives to obey the standards were weak and applications developed prior to the standards had little reason to engage in substantial redevelopment to meet the standards.
The tool set within the LMS meets the needs of most instructors, but there are niches of instructors, perhaps discipline based, perhaps with early adopter mentalities, who find these tools limiting and unimaginative. Developments with more interesting functionality might have occurred if the LMS had narrowed the toolset it supported, so that the more interesting tools could work in concert with what the LMS did. For homework tools, in particular, that didn't happen.
Regarding the next step, let me assert that instructors often learn to use ed tech in their own teaching by imitating other instructors who have already implemented that bit of ed tech. Seeing the actual implementation is a wonderful way to transfer the practice. Here I will give two examples of practice being transferred this way. I was directly involved in these. Readers, I'm sure, will be aware of many other examples.
About a year and a half into writing posts for this blog I came across Barbara Ganley's blog at Middlebury College, bgblogging. We had substantial online interaction and became friends that way. Later I invited Barbara to campus to talk about her teaching experience. Barbara taught writing at Middlebury and used student blogging as an integral part of what she did. I was able to look at her course site with a Mother Blog, which aggregated the output from the individual student blogs. I found it fascinating, but because I worked 100% as an administrator then and only occasionally taught a class, and then it was done as an overload, I couldn't immediately put in place what I had learned from Barbara. Then in fall 2009 I taught a CHP class, the only time I taught a non-economics undergraduate class. I used student blogs in that class and have since used student blogs in teaching the one economics class I've taught in retirement. Here is an example from a class that performed reasonably well. Note that the student blogs can be found in the left sidebar. I've used Blogger for this teaching and utilized their blogroll tool for the student blogs. Barbara, I believe, used TypePad when she was at Middlebury. The two blogging platforms had somewhat different functionality. So I didn't replicate the Mother Blog concept in my teaching. But what I did was to take this idea from Barbara and then retrofit it to another platform and to teaching economics rather than writing. This, it seems to me, is an interesting way for novel teaching practice to diffuse.
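A Mother Blog is, at bottom, a feed aggregator. As a hedged sketch (the RSS snippets and names below are invented, and Barbara's actual setup was a platform feature, not hand-rolled code), merging student feeds into one reverse-chronological stream might look like this, assuming post dates come in a lexicographically sortable format:

```python
# Illustrative only: aggregate several students' RSS feeds into one stream.
import xml.etree.ElementTree as ET

def parse_feed(rss_xml):
    """Pull (title, pubDate) pairs out of a simple RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("pubDate"))
            for item in root.iter("item")]

def mother_blog(feeds):
    """Merge every student's posts into one newest-first list.
    Assumes pubDate strings sort correctly (e.g. ISO dates)."""
    posts = []
    for student, rss_xml in feeds.items():
        posts += [(date, student, title) for title, date in parse_feed(rss_xml)]
    return sorted(posts, reverse=True)
```

The instructor then reads one combined stream rather than visiting each student blog in turn, which is much of what made the Mother Blog attractive for a writing class.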
I don't believe that Barbara was concerned with FERPA when she was teaching with blogs at Middlebury. There students blogged under their own names. I did have at least some concern for FERPA, which is why in my econ class students blog under an alias. A friend with whom I'm still in touch on Facebook and who used to work for me back in the 2000s remarked that what I do might not be giving the students sufficient privacy protection. So there is some risk with the approach. Is there a more than offsetting learning benefit? I'm convinced that there is, as many students have told me that the blogging was the best part of the class, while also admitting that before the class they thought of themselves as poor writers.
In this next example, I play the role of innovator and somebody else in higher education embraced my innovation. Scott Sinex became a maven and the main promoter of using Excel as a visualization tool across many disciplines to illustrate some elementary dynamics or comparative statics in the subject matter. I learned of Scott through this email exchange from back in 2008.
From: SCOTT SINEX <sinexsa@pgcc.edu>
Date: Sunday, April 20, 2008 at 6:49 PM
To: Arvan, Lanny <larvan@uiuc.edu>
Subject: RE: Who coined the term "Excelets"?

Thanks, Lanny. This works for me. I'm working on making it a household word!!

Scott

Scott A. Sinex, Ph.D.
Professor and Chair
Department of Physical Sciences and Engineering
Prince George's Community College
Largo, MD 20774-2199
301-341-3023
http://academic.pgcc.edu/~ssinex
e-mail: ssinex@pgcc.edu

>>> "Arvan, Lanny" <larvan@uiuc.edu> 04/20/08 9:58 AM >>>
Scott - Perhaps I did. I know in my own use (summer of 2001) I was futzing with graphs in the Web version of Excel, not happy with that, then learned about the "spinner" and the "spin button" (the latter is the activex control) and stumbled on the fact that with the latter the rendering of the graph changed continuously as the parameter that the button controlled changed. So for my own purposes I thought why not used the full Excel rather than the html version. Then I produced a variety of those type of animations and made the Web page called the Excelets Page. A little bit later I wrote some of these for the Intermediate Microeconomics Textbook by Besanko and Braeutigam. That's what I know. There very well could have been an earlier use of the term that I was ignorant of when I made my Web page.

Lanny

________________________________________
From: SCOTT SINEX [sinexsa@pgcc.edu]
Sent: Sunday, April 20, 2008 8:08 AM
To: Arvan, Lanny
Subject: Who coined the term "Excelets"?

Lanny-

Do you know who first coined the term "Excelets"? I've always assumed it came out of economics. Got any history on it?

THANKS,
Scott
It is conceivable that Scott learned about Excelets from my Webpage about them, which dates back to 2001. If so, that is quite a lag from when the page appeared to when Scott contacted me. I do want to note that I did nothing outside of my own campus to promote Excelets. (I gave a lunchtime brown bag presentation on campus about them, but nothing further, as I was a full-time administrator and had other things to occupy my time.) Nevertheless, Scott found out about them. I have subsequently used them in my own economics teaching, such as here, and made a tutorial to show others how to make them. The tutorial has received enough hits that it seems likely some who watched it went on to make their own Excelets, though the analytics on the tutorial video show the typical result: most viewers don't watch the video all the way through.
I hope these two examples are sufficient to illustrate the general idea. If a novel teaching approach is available out on the open Web, those who find that page might very well implement the teaching idea in their own courses. With a closed system such as an LMS, even one that allows some course pages to be made public, this particular mode of diffusion of teaching practices is either not available at all or is very difficult to achieve. The innovative instructor either can't show her methods to outsiders from within the LMS or the effort in doing so is more than she wants to put in. This is an argument for open systems and gives some of the reasoning behind that Leigh Blackall post linked above. Therefore, I would be curious about instructors who regard themselves as innovative as to whether they use the LMS in their teaching. While I mainly use a blog for my class Website, I do use Moodle for the grade book, a little bit of the quiz tool, and for posting documents I deem should not be made public. The rest of my content (PowerPoint files, Excel files, and some PDF files), however, is openly available in Box.com, an alternative service the university provides. People outside the class can have access to these files by following the links to them on the course site.
Let me turn to the third step, which is about sharing learning objects rather than sharing ideas about teaching. And with that I want to note that Kim and Maloney do talk about OERs in their book, with reference to MIT's Open Courseware Initiative and specifically with regard to teaching innovation at Cal State Channel Islands. But they don't talk about the production of learning objects at Dartmouth or Georgetown, where these objects might then be reused elsewhere in higher education, perhaps in AP classes or regular high school classes, and perhaps in colleges in LDCs that are desperate for content of this sort. At this broad strokes level, this idea might appeal more to Land Grant Universities, which have outreach as a significant component of the university mission. But really, if you view the education sector as a whole and wanted to consider how to achieve cost reduction throughout the sector, learning object production and sharing might be a big part of the story.
Now I'd like to talk about some concomitant developments. The Merlot Project began in the late 1990s. I'm not sure when I became aware of it, but I know that happened through the CIC Learning Technology Group and the instigator was Carl Berger, then of Michigan. Merlot is a referatory, not a repository. The former has information about the learning objects and gives the links to where those learning objects reside, which may be anywhere on the open Web. My sense of Merlot is that it was an interesting concept, but it didn't quite work because the information about it didn't diffuse sufficiently to non-insiders. Also, Merlot had a novel idea about how others would evaluate learning object quality: through the use of peer review. But who would evaluate the reviewers? Whereas in journal publishing scholars in the field develop a sense of taste about what counts as publishable research, making online learning objects was too new to get anything close to consensus on what makes a learning object effective.
In my own thinking, I developed a concept called Dialogic Learning Objects, and was beginning to produce these for principles of microeconomics, in Excel (of course). The dialogic idea was confirmed for me a couple of years later when I became aware of the Open Learning Initiative at Carnegie Mellon, where they referred to the idea as embedded assessment. The core insight is that our ideas of in class instruction and homework had been formed when the homework was done on paper, students had to turn it in during the class session, and then expected the graded homework to be returned at a subsequent class session perhaps a week later. This thinking was retained even as homework moved online. But the reality was that this separation between presentation and assessment no longer was necessary. A bit of presentation could be given online, followed by a question to see whether the student understood the presentation. The student would receive immediate feedback about the answer given to the question. If the answer was correct the student could proceed to the next bit of presentation. But if the answer was wrong, the student could try again with a different answer. If still stuck there might be additional help for the student to understand what was going on. This is a more natural way to learn online, though authoring content with embedded assessment was more difficult than simply writing the presentation content and the homework questions, which in turn was more time consuming than using such content as supplied by the textbook publishers.
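A toy sketch of that embedded-assessment loop, with invented lesson content, might look like the following. The student answers immediately after each bit of presentation, gets feedback plus extra help on a wrong answer, and only moves on when correct:

```python
# Illustrative sketch of embedded assessment: present, check, retry with help.
# The segment structure and function names are my invention.

def run_lesson(segments, answer_fn):
    """segments: list of (presentation, question, correct_answer, hint) tuples.
    answer_fn stands in for the student and returns an answer each time asked.
    Returns a log of what the student saw, in order."""
    log = []
    for presentation, question, correct, hint in segments:
        log.append(("present", presentation))
        while answer_fn(question) != correct:
            # Immediate feedback: the student sees the help and tries again.
            log.append(("hint", hint))
        log.append(("correct", question))
    return log
```

The contrast with the paper-era workflow is visible in the structure itself: there is no separate "turn in homework" step, because the check sits inside the presentation.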
Within economics, there was another development that caused excitement at the time. The economist Paul Romer founded the company Aplia, which instead of marketing textbooks marketed smart homework that operated within the Aplia system. Good assessment content is scarce and writing it is laborious. Romer's ideas grew out of his own frustration with the supplementary online content provided by textbook publishers. Instead, have the assessment content with its own market and allow it to be used with a variety of textbooks on the subject matter (which increasingly the students weren't reading). Alas, managing Aplia was too much of a burden for Romer and he sold Aplia to Cengage. To my knowledge, there has not been another experiment of this sort, where the market focuses on the assessment content only and ignores the presentation content.
Use of the LMS, with presentation and quiz content uploaded there, discourages re-use of such content as learning objects, because there is effort involved in moving the content to where it can be re-used and coming up with ways for others to find the content. But much of what is done probably wouldn't warrant re-use, even if it was publicly available. There is a question of whether having an open container instead of an LMS would "embarrass" the instructor into producing better and more re-usable content. I really don't know. I will simply go back to the point that apart from MIT, it doesn't appear that elite private schools have made much of an effort in the OER space. Then again, neither has Illinois.
There is a different way to consider this, from the point of view of the academic department monitoring instructor effort in teaching. I don't recall ever having the department head come to my live classroom to observe a class session. The closed container approach with the LMS likewise precludes any monitoring of this sort for the online part of instruction. In my experience, outside of the course evaluations the sole way the academic department gets information about instructional quality is via a group of students going to the department office and complaining about the class or, at the other extreme, having such a group of students wanting to nominate the instructor for a teaching award. Maybe in smaller departments with fewer students there is more scrutiny about instructional quality. With the norm being very little scrutiny, it may be that both students and instructors get used to that and develop a passive preference for it. That's something to keep in mind in what I say when discussing the "Disengagement Compact."
Let me close this section by noting that in my own teaching I've since combined the smart homework tool with the dialogic learning object approach. If you'd like to experience this you can find the Excel files here. They must be downloaded to be usable. I would try the Tutorial first. You need to enter something for the NetID (at least 3 characters) and then choose some alias from the pulldown menu. I believe the Tutorial is pretty self-explanatory, although I've experienced some students who have completed the Tutorial but don't seem to have mastered the lessons there. After the Tutorial I suggest trying the Math of Risk and Risk Preference. This is not the first real homework, but this particular homework may best illustrate the approach. Again the file should be downloaded and the login part is needed. You should also hide the Ribbon to give yourself as much vertical space as possible. Note that as you answer the questions correctly more of the graph is plotted. So an additional benefit of the approach is that students get to understand the graphical approach. I do have a video that explains the concepts using an algebraic approach. Between doing the calculations for the random variable x and then some more calculations for the random variable y, there is discussion about why x is more risky than y and what that means in comparing the expected utility of x to the expected utility of y. So there is some presentation along with the assessment content, although less than in other homework. Also note that for some questions it is necessary to use the calculations from previous questions. The exercise builds on itself that way. In the LMS, each question in a quiz is independent of the other questions.
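To make the structure of such a cumulative exercise concrete, here is a speculative sketch in Python rather than Excel (the numbers and prompts are invented, not the actual assignment). Each correct answer reveals another plotted point, and grading stops at the first miss because later questions reuse the earlier calculations:

```python
# Illustrative sketch of a cumulative homework with a progressively drawn graph.
# Function names and the example numbers are my invention.

def expected_value(outcomes):
    """E[x] for a discrete random variable given (value, probability) pairs."""
    return sum(v * p for v, p in outcomes)

def grade_homework(questions, responses):
    """questions: list of (prompt, correct_answer); responses: student answers.
    Returns the points revealed on the graph so far. Grading stops at the
    first wrong answer, since later questions build on earlier results."""
    plotted = []
    for (prompt, correct), given in zip(questions, responses):
        if abs(given - correct) > 1e-9:
            break
        plotted.append((prompt, correct))
    return plotted
```

This is the opposite of the LMS quiz design noted above, where each question stands alone and a miss on one has no bearing on the rest.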
Undergraduate Student-Produced Online Content That Is Re-Usable
For my very first class Website, I hired an undergraduate in Engineering to write the HTML while I provided the content for the site, which, judged from the perspective of now, was quite primitive. At the time, I didn't know HTML commands at all and wouldn't have been able to produce that Webpage. Subsequently, I learned enough of FrontPage to manage that Website myself. With the homework content I put into Mallard, I likewise hired former students of mine to enter the content, with me as the content designer, so I didn't have to master Mallard's syntax for course developers, which was more than a little arcane. At the time I had an internal-to-campus grant from SCALE to support my own instruction and this seemed a sensible way to spend the grant money.
What I want to pose here is a question, one for which I never got a real answer. It is this. If students were given the designer role as well, could we then treat the activity as educative enough that it would be compensated with course credit and possibly with waiving some course requirement(s)? I want to briefly review a few different cases I have in mind.
- Students do learning object design and development in lieu of writing a term paper for a course. Students also agree that their learning object work can be readily shared with others, without concern for copyright. Then, much like in open source software development, subsequent iterations of the course allow the projects to be refinements of previously created learning objects, rather than starting from scratch. After a few iterations of this the resulting learning object becomes of sufficiently high quality that it is re-usable. With this and the subsequent approaches in this list, a public archive is maintained so that all versions of the learning object can be accessed. The identity of the author of a particular version can be released or concealed at the author's request.
- A special topics class is conducted where one of the main purposes of the class is to produce learning objects in the subject area. Otherwise the case is similar to the one before. If in such a course one might otherwise expect students to write two or three term papers, then this approach would entail more intensive learning object design than the previous approach. It might then be that each student works on more than one learning object, for example, initiating one while adding polish to another.
- Learning object design is done as an extracurricular activity within a Registered Student Organization. There is a faculty advisor who helps with directing the work, but here the students themselves have more control regarding what learning objects should be made and how those learning objects should perform.
I have no experience with any of these approaches, so this is pure speculation on my part, the type where I'd like to see others lead the way with experiments. The last time I taught, I did give the password, which is the same on each worksheet of my Excel Assignments, to a mathematically inclined student. With that password he could unlock the worksheets and then, presumably, reverse engineer how I constructed them. I think this reverse engineering approach might be a good way to learn how to build similar assignments. But I never heard back from the student, and before too long the pandemic was here. So I don't know whether this can work or not. I maintain the belief that in certain circumstances it is better to have smart functionality on the client side rather than on the server. (For example, if used at the high school level, some schools might not have the resources to support server-side functionality.) The approach with Excel does that. I also want to note that what I have done requires no programming whatsoever. It relies entirely on built-in functionality in Excel. That, I would think, should make it easier for students to learn how to make these objects. Of course, I'd also guess that somebody with a lot of programming knowledge would pooh-pooh this approach and want to develop something else. I'm quite okay with that, if the alternative approach this person comes up with is sustainable.
On Achieving Cost Reduction In Instruction Via Effective Use Of Online Technology And A Different Role For Undergraduate Students
I took over for Burks Oakley in running SCALE about 3 or 4 months after I had joined SCALE to help out, with no specific portfolio of projects in mind. As things turned out, my first activity after joining SCALE was to assist with the SCALE evaluation, interviewing SCALE Instructors along with Cheryl Bullock. Though I didn't realize it till later, what we learned there was quite important for talking about cost reduction.
Burks had promised the Sloan Foundation that ALN (what we then called online learning) would produce cost reduction because an instructor could answer a student's query once in an online bulletin board and therefore not need to answer the same sort of query over and over again during office hours. There was some logic to this, although I wondered to myself whether those other students who came later to the bulletin board and read the student query as well as the instructor response would get the same learning benefit as the original student did. In any event, precisely one instructor whom Cheryl and I interviewed reported that their time in instruction went down as a consequence of the mechanism that Burks outlined. Many more reported the opposite result, that their time devoted to instruction increased, as did the quality of their course offering. The reason, which we later referred to as the "shy student problem," is that these instructors had great difficulty in getting students to speak up in class. The instructors reasoned that it might be easier for the students to open up online. That turned out to be true. But then what the students wrote required responses from the instructor, and since the students didn't write about the same thing, the instructor needed to respond to each post individually. Eventually instructors found strategies where students responded to other students, somewhat lessening their own time commitment, but at that early date with online learning, spring 1996, those strategies had yet to play out.
Let me point out something else. Some of the faculty whom Cheryl and I interviewed were merely using FirstClass, the online conferencing system that SCALE supported at the time. Other faculty we interviewed had received large internal grants from SCALE, in the amount of tens of thousands of dollars, and most of those faculty were hoping for a renewal grant from SCALE. Yet with the exception of my colleague in the Economics Department, Larry DeBrock, and me, where our SCALE projects were explicitly designed to achieve cost reduction, none of the other projects were designed for that purpose and the faculty who ran those projects seemed entirely unconcerned with whether cost reduction would be achieved or not. So there was an ethical conundrum for me. Sloan was not happy with us in not showing cost reduction yet the faculty receiving these internal SCALE grants seemed entirely unconcerned with that. That ethical issue wore on me some.
I also felt underprepared to run SCALE, though in retrospect I was the right person to do it, both because of my temperament and because of my economics orientation. Regarding the latter, I understood that measuring faculty time was a red herring when talking about cost savings. While there is a percentage allocation of faculty time to research, teaching, and service, we don't measure faculty time on an activity basis. Further, this approach meant that the person who would capture the cost savings would be the faculty member teaching the ALN course. We needed an alternative approach where the savings could be captured by the Department, or the College, or the Campus as a whole. That much was evident to me at the outset. And underprepared or not, it's also true that necessity is the mother of invention. There was a need for an alternative approach. I came up with the SCALE Efficiency Projects for the 1997-98 academic year, where most of the internal grant money that year was spent. This proved successful. Sloan was happy with us and we were able to obtain a renewal grant from them as a consequence. And I learned a lesson that I probably shouldn't have learned - there is a creative aspect to ed tech administration. I continued to feel some of that with CET, but much less of it after that.
Here I want to remark about where the profession was at that time regarding online learning and cost reduction. Much of the work came out of NLII (National Learning Infrastructure Initiative), which was the predecessor to ELI (Educause Learning Initiative). However, most of this work was entirely theoretical. The only actual implementation I learned about from NLII was the CUPLE Physics Studio at RPI. It was interesting to learn about, but RPI is at least an order of magnitude smaller than Illinois, and it seemed the approach at RPI wouldn't translate to Illinois. I already knew about the work at Michigan State with CAPA. And I would soon learn about the Math Emporium at Virginia Tech, but not in time for that to appear as a reference in our paper. Further, I learned that the science departments at Illinois had already achieved substantial cost reduction under PLATO, but the savings had already been captured and wouldn't count for the Efficiency Projects. The examples there were all new, since the SCALE grant started.
The timing of the publication of the Efficiency Projects paper was fortuitous. It would be less than a year later that Carol Twigg would form the National Center for Academic Transformation and initiate their program in Course Redesign, which did a lot to popularize the idea that effective use of learning technology can lead to cost reduction while keeping quality of instruction intact or even improving it. This program got many more institutions involved with implementing this idea. Burks introduced me to Carol (I think at some Sloan Consortium meeting, but of that I'm no longer sure). She read the Efficiency Projects paper and showed a great deal of interest in it. She used me as a consultant on documents for the Course Redesign program, which I was happy to do. And I was pleasantly surprised that at the January 1999 NLII meeting, Carolyn Jarmon delivered a plenary session where the Efficiency Projects were featured. In one fell swoop, my name became known to those in attendance.
Yet there was a surreal aspect to all of this. The larger economy was in the middle of the dot-com boom (bubble). Money was flowing very nicely on campus. While there may have been strategic reasons for course redesign in specific courses, why do this across the board when the economy is going gangbusters? The mature answer to that question is that the economy won't be going gangbusters forever and engaging in course redesign is then good preparation for when the next recession hits. Nobody, however, is that prescient. At Illinois, the Efficiency Projects didn't generate the attention that they did at the NLII conference. In the background there was "The Spanish Project," and one of the Efficiency Projects was the redesign of SPAN 210, a mid-level course. The campus had made the foreign language requirement more stringent, and the expectation was that the demand for introductory Spanish would go through the roof as a consequence. The issue was how to meet that demand without putting a huge drain on available resources. The SPAN 210 redesign was a pilot for what would subsequently happen in the introductory course. This is an example of a strategic reason for course redesign. When the recession did come after 9/11, the campus wasn't prepared for it. Some courses that had been lecture-discussion then became straight lecture. Cost reduction was achieved, but at the expense of the quality of instruction. That is the grim reality of what happened.
I want to change perspective now and talk about my intermediate microeconomics class, which was one of the Efficiency Projects. While I introduced a variety of innovations over time, pretty much from the outset the main innovation was to use undergraduate TAs, who held online office hours during the evening, 7-11 PM, Sunday through Thursday. They interacted with students initially through the class conference in FirstClass. Over time that segued more and more to using the chat function, which the TA would then copy and post to the class conference. And after a few years I switched from FirstClass to WebBoard. But the mechanism was largely the same. The students had homework, end-of-chapter problems from the textbook, which they would submit online on a problem-by-problem basis. The students were grouped into teams. I allowed re-submission if the original didn't receive the maximal grade and if the re-submission was done by another team member. And they could re-submit multiple times, as long as it came in before the deadline. While some of the contact with the online TA happened before the initial submission, I believe the bulk of it happened after that.
I had written up solutions to the homework problems that the TAs had. I coached them not to give away the problem answer but rather to help the student think through how to solve the problem. Likewise, the requirement that another student on the team had to send in the re-submission was meant to encourage the team members to discuss the homework among themselves. This mechanism was far from perfect, but it did engage at least those students who took the course seriously.
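The resubmission rules just described can be summarized in a small sketch. This is my own hypothetical rendering in Python, not any system we actually ran (submissions were handled through FirstClass and later WebBoard): a problem may be resubmitted only if the prior attempt fell short of the maximal grade, only by a different team member than the one who last submitted, and only before the deadline.

```python
# A hypothetical sketch of the resubmission policy described above.
# Names, the grade scale, and the data model are my own inventions;
# the actual course used FirstClass/WebBoard, not code like this.
from datetime import datetime

MAX_GRADE = 10  # assumed maximal grade on a problem

def may_resubmit(prior_attempts, submitter, now, deadline):
    """prior_attempts: list of (student, grade) pairs for one problem,
    in submission order. Returns True if this submission is allowed."""
    if not prior_attempts:
        return True                  # the first submission is always allowed
    last_student, last_grade = prior_attempts[-1]
    if last_grade >= MAX_GRADE:
        return False                 # already earned the maximal grade
    if submitter == last_student:
        return False                 # must come from another team member
    return now <= deadline           # and must arrive before the deadline
```

The point of the "different team member" check is exactly the one made above: it forces the team to talk the problem over before anyone sends in a revision.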
As with the smart homework, I took the idea of undergraduate TAs from some engineering classes I learned about. And I took the idea of resubmitting homework problems from the Writing Across the Curriculum folks, who treated revision as a critical part of writing and who regarded response as the essence of good teaching. Doing an economics homework problem is not like writing an essay, but why not try this idea about revision with the homework problems?
This use of undergraduate TAs providing online office hours was unusual in teaching economics, and I suspect in teaching many other social science subjects. One colleague told me that it shouldn't be publicized, because if the Chicago Tribune found out about it, we might get harsh criticism for doing it. I do want to note that during that time the standard practice for intermediate micro was to give the instructor a grader, usually an international student whose English wasn't good enough to be a TA, but there was no discussion section and no graduate student TA. Some other departments, Chemistry for one, were using undergraduate TAs to run discussion sections. That is more problematic.
The efficiency that enabled this unusual use of undergraduate TAs is that the size of the lecture tripled, from about 60 students to about 180 students. No doubt lecture quality went down as a consequence. (I was still doing chalk on blackboard lectures then.) I know I couldn't read what was on the blackboard from the last row of seats. And since the auditorium where I lectured had more seats than that, plus attendance was far from perfect, I encouraged students to move closer to the front. Nevertheless, it was harder to make a visual connection with many of the students this way. Eventually I produced PowerPoints with audio narration to compensate for this deficiency.
I want to make one more observation about this from the TA perspective. I paid them an hourly wage, perhaps a dollar over minimum wage, so they felt they were treated fairly. Some of them surprised me by letting me know that being an online TA like this proved to be an attractive credential when they went for job interviews. It made me wonder whether the TAs were getting a learning benefit that should be investigated further. I didn't do that at the time, but I speculated a lot about it later.
In the Kim and Maloney book there is a lot of respect paid to "active learning." I've always struggled with this some, first on the question of how time enters into the equation, second on the question of whether discussions that promote active learning might depend quite a bit on who participates in those discussions. On the first question, my experience is that many students expect ahead of time for learning to be a snap. But in reality, learning takes as long as it takes. Thus, one of the meta-lessons students need to master is to put in the requisite time for learning to happen and not let the dictates of schedule determine their time input. On the second question, in a discussion between a pair of students where one is reasonably well prepared while the other is poorly prepared, is there a learning benefit created or not? When active learning activities occur in the live classroom, there are implicit assumptions that (a) the time allotted to the activity is ample and (b) the students who argue through the matter are more or less equally prepared. The mechanism I used with online TAs in intermediate microeconomics allowed for the time to be more open-ended, since it was happening during the evening, and for there to be a clear asymmetry between the participants. The online TAs had taken my class previously and were chosen because they had done reasonably well in it. The current students knew that. Participation in the online office hours was opt in, and this asymmetry encouraged the current students to do just that.
I last taught that large intermediate microeconomics class in spring 2001. After that my appointment as administrator became 100%. After that when I did teach as an overload occasionally, it was in a small class that was conducted more as a seminar than as a lecture. So I had no further experience with utilizing undergraduate TAs. But the potential benefits from that stayed with me and in August 2005 I wrote a series of 7 posts for this blog on Inward Looking Service Learning, which allowed me to speculate about scaling up the idea to the entire campus and then deploying these students in other ways than I had deployed them in intermediate microeconomics. I want to note here that in talking about innovation with teaching and learning it is quite common to reference some technology that might be useful for the purpose, but it is highly unusual to consider changes in the social arrangements in how we go about things. If someone reads those posts on Inward Looking Service Learning now and finds them not entirely convincing, might it still be true that other modifications in the social arrangements on campus could have a big beneficial impact on learning, particularly at the undergraduate level? It is something to ponder.
Is Learning Really Happening in College?
Kim and Maloney cite Arum and Roksa's Academically Adrift as a counterpoint to the argument about the movement toward learning. They want to acknowledge this counterpoint, as it is the intellectually honest thing to do, but they don't want to drill down into analyzing it. I will do a little of that here.
First, I want to mention one piece prior to Academically Adrift that influenced me quite a bit. It's by George Kuh, appeared originally in Change Magazine in 2003, and is called, What We're Learning About Student Engagement From NSSE. NSSE is the National Survey of Student Engagement, which Kuh led at the time. This particular paragraph, which appears near the end of the piece, is worth pondering some.
And this brings us to the unseemly bargain, what I call the "disengagement compact": "I'll leave you alone if you leave me alone." That is, I won't make you work too hard (read a lot, write a lot) so that I won't have to grade as many papers or explain why you are not performing well. The existence of this bargain is suggested by the fact that at a relatively low level of effort, many students get decent grades--B's and sometimes better. There seems to be a breakdown of shared responsibility for learning--on the part of faculty members who allow students to get by with far less than maximal effort, and on the part of students who are not taking full advantage of the resources institutions provide.
It makes you ask: why would an instructor be disengaged and why would a student be disengaged? There are apt to be multiple answers to this. One has to do with time commitment. A faculty member with tenure who has a heavy research agenda may view undergraduate teaching as a necessary evil, nothing more. Putting effort into it won't enhance the professor's reputation as a researcher. Similarly, the undergraduate student who has lots of extracurricular activities, or paid work that is unrelated to school, might simply not have the time to put in to be really engaged in learning. Alternatively, the undergraduate student who wants to party like mad before entering the world of work, assuming that will be a grind, would prefer taking a course that is a "gut." This makes disengagement seem like what economists call shirking. In this case the disengagement compact should be interpreted as mutual shirking that goes undetected, because there is no monitoring of it external to the class.
A different explanation might make sense for teaching faculty who don't have tenure. They want to keep their job, and minimizing student complaints is a safety play that way. Grade inflation is a result. Further, they understand the lesson that behavioral economist Richard Thaler has taught us. Even if the class is graded on a curve, the raw scores students obtain on exams and other assignments matter for their satisfaction. Higher raw scores generate higher satisfaction. So the instructor has an incentive to teach an easy course. On the student side, the student might be more of a grinder than someone nurtured by the intellectual stimulation the course aims to generate. The latter type of student, increasingly scarce in my experience, is engaged via intrinsic motivation. The former remains uncertain that the course material can be penetrated, so stays with the tried-and-true method of memorizing the lecture notes.
The last time I taught, fall 2019, I learned that there can be a different fundamental cause that makes the student appear like a shirker to the instructor. This is poor student mental health, which the instructor won't know about unless the student is forthcoming about it. Student mental health became a national issue in 2019 (and perhaps earlier) but almost always was discussed in terms of the inadequate number of mental health professionals available to the students trying to access them. In my very incomplete reading of pieces about this, what has yet to be considered is whether the way we go about teaching and learning is itself a source of poor student mental health.
The story I have in mind, which I don't think is so hard to believe, is that students come to realize fairly early in life, perhaps in high school or even earlier, that they must market themselves to be able to get to the next step on the educational ladder. This need for marketing oneself makes school and co-curricular activities as well seem like one big game of paper chase. Credentials are it. Actual learning is of secondary importance. Students take it for granted that they want to play this game, but its artificial nature eventually catches up to them and the mental health issues ensue. If they have substantial college loan debt, that only adds to the stress to make the situation worse.
The way Kim and Maloney seem to argue, either they are right or Arum and Roksa are right. It's one or the other. In my way of thinking, there are some segments on campus where the move toward learning is evident and others where the Disengagement Compact holds sway. This would allow a somewhat more complex view of matters, which I think would be helpful.
I know that Illinois participated in the NSSE in the 2000s, but declined to make the results public then. The results did impact the thinking of campus leadership. There was a big push in programs to encourage undergraduate participation in research. But, to my knowledge, there was no direct confrontation with the Disengagement Compact nor, indeed, even an admission that it might be present in some places on campus. That's how it has always been. The better students have great learning opportunities at Illinois, and that's what is marketed to the public. The average students, not so much, but we don't talk about it openly. I'm ignorant of how this goes elsewhere. It would be delightful to learn that it really is across-the-board engaged learning at Dartmouth and Georgetown. But I've been trained to be skeptical about this sort of thing and wonder what evidence might be amassed to show this.
Wrap Up
This post is more an unbundling of my idealistic thinking on teaching and learning, centered on my own teaching experience, than it is a reasoned argument about where teaching and learning should be headed, based on a firm understanding of where we are at present. I lack that firm understanding. I'm not sure I had it even 15 years ago. Generalizing from personal experience can get one in trouble. But here, I'm just talking things through. I have absolutely no influence on current implementations. Perhaps there is value in talking things through, to encourage others to do likewise. And, if not, it was still good therapy for me to write this piece.