pedagogy, the economics of, technical issues, tie-ins with other stuff, the entire grab bag.
Friday, July 29, 2005
More on Hysteresis/What the Future May Bring
- - - - -
Yesterday's post on the Academic Calendar was an example of institutional inertia and how historical effects can have durable impact. The technical term is hysteresis, which apparently was first used in the context of physical systems but which also can be applied to social systems. There are many other aspects of the university where one can ask whether the historical basis for the current structure is still approximately optimal given the current situation. If the answer is no, then new alternative structures may be viable and, if given the opportunity, might trump the current ones.
Here is another area in which to consider hysteresis in higher ed. The course is now the preeminent unit of learning. Clearly learning continues for research faculty after they have attained their degrees. Does that happen in courses? What about for professionals who work outside of higher education? If the focus is on the learning (and not the certification thereof), is the course the right way to organize the learning?
My experience is that research faculty learn in two ways that I think are appropriate to treat as separate. There is a very deep and narrow learning from participating in their own research projects. This is a specific type of learning by doing, where the word investigator is often substituted for the word researcher. The methodology of the discipline is applied in a manner where the investigator learns a la Sherlock Holmes. The other way is by being a member of a seminar or workshop. This is a somewhat broader form of learning where the researcher stays abreast of current developments in the field by reading papers and seeing presentations by others who are doing research and occasionally presenting their own work.
This may be somewhat unfair as a characterization but I would describe the work of professionals outside of academe also as research, in the sense that much of the work is investigation, but of a much more applied type aimed at getting more specific answers. There may then be structures analogous to the seminar to review the work and share it with peers. The commonality then is learning by doing and sharing of results.
While the Boyer Commission recommended bringing faculty-like research into the undergraduate curriculum, bringing the practitioner type of investigation into undergraduate education might be more relevant for the vast majority of students. More to the point for this discussion, the course is in conflict with the investigation. The course ends at a time known in advance. The end time for the investigation is not predictable. It takes as long as necessary to pursue all the leads to solve the puzzle. If one wanted to emphasize investigation in undergraduate education, one would have to do this outside of courses (or, perhaps, have a series of courses that serve as an umbrella to house the investigations).
The current emphasis on inquiry-based learning will either be reduced to trivialities by restricting attention only to those subjects where investigation can be completed in a very short period of time or it will confront this inherent contradiction between the course and the investigation. Harder questions take time to resolve and after considerable thought and work may remain unresolved.
The current divide in undergraduate education is between "general education" on the one hand and "the major" on the other. The focus is on broad (what everyone should know) education early and specific disciplinary education late. This division is increasingly becoming non-functional because we are changing our view about our job as educators from teaching them "stuff" to teaching them "how to learn."
So my "prediction" and I base this on the type of reasoning above rather than on any trends I see emerging is that undergraduate education will cleave and become more like graduate education - - - courses for a while, then investigative research. The Ph.d. dissertation in my experience is more of a solitary quest. The undergraduate alternative likely will be done in project teams. The course part of this will be taught by non-research faculty. The investigative part might be taught by research faculty or perhaps by clinical faculty who have done more of the applied type of research that is appropriate for undergrads. The remainder of the research faculty will be engaged in graduate programs only and the size and the composition of the faculty overall will eventually adjust to this new structure.
The part of the current structure that would cease in this new world is having research faculty teach stuff. This also points to the primary force that would block this vision from becoming reality. If research and graduate education are being subsidized by the undergraduate teaching (my belief is that this is happening in departments which are not obtaining a lot of external funding), then the researchers clearly have an incentive to preserve the old way.
Thursday, July 28, 2005
On Calendars
- - - - -
One of my favorite papers to teach is Paul David’s AER paper on QWERTY. Stephen Jay Gould has reproduced that paper in his book Bully for Brontosaurus and, as it turns out, there is quite a bit of similarity between the dynamics of increasing returns studied in economics and the dynamics of evolutionary biology – at the outset there are many viable paths and it is impossible to predict which path will win. Then chance events make one path more viable and the head start creates a cumulative effect that results in lock-in, so the path becomes a de facto standard. The wonderful thing about the QWERTY example is that the keyboard was initially designed to make people type slower, but the lock-in is so strong that attempts at using “more efficient” key layouts have not worked.
This is good background for considering the situation of Academic Calendars on our various campuses. Many others have noted that the asymmetry with respect to summer sessions has a legacy going back to the nineteenth century (or earlier), when students needed to go back to the family farm for planting and harvest. That need has disappeared. But the asymmetry continues. At my campus our fall and spring semesters are each fifteen weeks of classes and then one week for final exams. We offer two different summer sessions, one for four weeks (very intensive instruction) and one for eight weeks (moderately intensive instruction). Of course some campuses are on the quarter system instead of the semester system. And even here we have some courses that run only half a semester.
While I have seen discussion of the inefficiencies of summer scheduling, I have not seen discussion about the Academic Calendar and the issues of primary concern in this blog – student engagement, institutional commitment, and the cost of instruction. Instead of taking five courses in parallel per semester, envision that those courses are taken sequentially in intensive mode in 3-week blocks. Ask how that would affect student engagement. (I think it would promote it.) Ask what that would do for scheduling classes into classrooms. (I think it would be much easier to schedule because there would be fewer offerings in any one block and they’d be offered in larger chunks of time.) Ask whether the faculty would prefer to teach in a couple of these blocks in intensive mode and then have the rest of the time when they are not teaching at all. (My guess is that they would overwhelmingly prefer the block approach.)
Does changing to the sequential approach make sense? I think so. Are we likely to see it? Well, places like University of Phoenix do the sequential model (although I believe 3 weeks is too intensive, even for them). But what about non-profit higher ed? Will it ever make such a reform? Perhaps it will. I wonder who will take the lead on this.
Wednesday, July 27, 2005
Speech
RealPlayer – Free Version
Audio Version of this Post (5.27 MB)
Don’t continue reading this post until you downloaded the audio version. In order to listen to the audio version you need real player. I provided a link for the free version in case you don’t happen to have real player on your computer. It should play fine once you’ve got a player installed.
Yesterday I trashed Microsoft at the end of the post. Today I wanna actually congratulate them for their voice to text software which is available inside Microsoft word. I was inspired to play with this software after reading racial orders Blog posting which I saw on the Sloan see web site about a week ago. It’s interesting how 1ft. rigors and other I didn’t play a little bit with the speechexpert- software that he talked about but I decided that trying while Microsoft had to offer was probably going to be more useful and long run.
The novel thing that I’m doing here with a voice to tax software is that I’ve simultaneously recording a real media stream using the free version of real producer and also transcribing that voice stream two text using the word speech software. To the extent that we can get something of decent quality from the transcriptions point of view one now has something that will be accessible for all and actually I think something that a lot of other but should be trying.
On and off I played with voice recording for at least five or six years and putting streaming voice with either web pages or PowerPoint or something of that, and. The two issues I’ve had in the past with this our first how long would take to produce the contents and second whether in fact what I was doing was really excess of all for all. This stuff is getting much closer to actually doing the trick. As an aside it also might be useful as a way to actually produce first drafts of fundamentally printed material.
I don’t envision everybody doing this and media only but I think it is sufficiently easy that some people should try it and their experience might actually allow others to try it as well.
Let me explain what you’re hearing a part from my voice. Every time word here is a pause from the tape takes what is grayed out of my screen and converts it to printed text. At the same time it gets a little beeped to note that it has accomplished that task. So this is perhaps not as listenable as if I was just doing the voice recorder. You need to be the judge to find out whether it not you can envision listing to stuff like this on a regular basis.
You will also note that I am talking with a date with the punctuation commands. And, actually I didn’t do it correctly so it did the punctuation rather than say the words I need to get along better at that. My point is that if I’m going to be this as a first draft to any further Abell helped me a lot to have the punctuation already there. Consequently I am speaking with pauses rather than continuously. That means it may not be as listenable as it would otherwise the ultimate goal is for me to speak continuously and put the text to actually be created continuously.
We’re still somewhat far from their ultimate goal but where a lot closer than we were a few years ago.
What you see as far as text on this page is exactly as it was converted by word. I have not made any edits at all I have simply copied the word document into the Blog editor. With that copying one can then publish directly to the web. It may not be perfect but I thought you should see what the exact output of the word speech program was.
Yesterday I spent about ten or fifteen minutes training word two of my voice. Today I spent about another fifteen minutes with more training. Presumably the accuracy increases as the amount of training goes up. They also say that you need to be in a quiet place to do the recording. My office is on the street so there are street noise and this a mother noise as well. I think it’s not bad. I’m going to try to use this a little bit more and see if I can get proficient at
Tuesday, July 26, 2005
All “Features” Great and Small
All this makes sense for the greater good of the order and I understand that. But I didn’t want to move. It’s a pain. Two days in a row I’m all sweaty from carrying boxes (and this is with the help of movers) and I don’t know where my stuff is. I screwed up an important budget document because of the move and I’m mad at myself for that. I still feel disoriented, so I wouldn’t be surprised if I make more blunders in the next couple of days. Why couldn’t I stay in my old office?
And why, you might ask, am I bringing this up? It’s uncouth to talk about such things --- chin up and all that rot. The answer in this case is to use it as a way to frame the issues. Most all of us have, upon occasion, endured a move. Some of you probably managed it better than I have, but it is an inconvenience for all of us, one we can recognize as such. I want you to keep that thought in mind but now consider another setting, where your campus has upgraded its version of the course management system it supports, or even worse, switched to another platform.
Many of us who support educational technology don’t go through agony when systems are upgraded. In fact, we may be positively disposed to playing with the new version of the software – it’s fun learning the stuff and checking out the new or improved features. But it is really different for our faculty. Their reaction, in the main, will be like mine has been in making the physical move of offices. They won’t like it. They’re apt to be resentful. They’ll want to know whether they can stay in the old version, first off, and then use that to make us feel guilty when they ask what will break in their course site when they have to upgrade.
This may seem melodramatic, but I don’t think I’m overstating the situation. Once a piece of software seems to work and the user has adjusted to it, most users don’t want the software to change. Why should they? The benefits from the change are amorphous to them. The costs are quite obvious. And most faculty are incredibly time squeezed, which contributes to their frustration during the upgrade period.
So why is it that we educational technology support providers push so hard, with the vendors or our own developers, for additional features of the software and improvements to existing ones? Are we doing it for the benefit of our instructors? Or are we doing it to please ourselves and indulge our too idealistic beliefs about using the technology as a vehicle to improve instruction?
Certainly more frequent change in the software that is supported raises the total cost of ownership. There is more work needed for providing documentation, more training and more follow up consultation for instructors. There are more chances that the configuration of the production system gets screwed up or becomes inappropriate or that some particular unsolvable bug is uncovered.
Here is a set of arguments for why I continue to ask for new features on the instructor side of the product.
Our User Base (To Every Thing Churn, Churn, Churn.)
I don’t think it is commonly recognized that there is a lot of churn in the use of course management systems; at least there is a lot of churn on our campus. We have many instructors who are graduate students and adjunct faculty, and those people turn over quite quickly. Also, tenure track faculty rotate between graduate courses, where they likely won’t use the CMS, and undergraduate courses, where they will.
This means we have a substantial population who are new to our system each semester or who have been away from the system for a while and are coming back for a fresh start. While most of these people don’t want to spend a lot of time learning something new, they do want to be assured that the product they are using is current and up to date.
Staying Current with Other Software (Blame it on the other guys.)
Even if we don’t upgrade our CMS, the browsers, plug-ins, and other software our users rely on will upgrade by processes that we don’t control very well. Our stuff has to work with what they have. We can’t guarantee that it will if we get too far behind in what we are supporting. Thus we need to keep up.
Moving Down the Technology Learning Curve (Teaching the Old Dogs New Tricks.)
As instructors get comfortable with using the CMS some of them begin to see possibilities for improving their course and they want to try new things. (We really do need to know how many are in this category and what type of things might encourage this. The ECAR study by Glenda Morgan from a couple of years back suggests that this population is not as large as we might like.) At the point where they are ready to try something, we should be in a position to provide a set of possible offerings. We want new features so we can do just that.
Improving Instruction (I’m the guy your mother warned you about.)
This is why most of us got into the business. We are there to make the learning better. Any improvement in the software that has a chance of doing this we should champion. Here what we really want to do is promote the teaching idea. We don’t expect the software to do it all by itself. We expect the software to serve as a catalyst for the teaching idea and as an instrument to implement the teaching idea.
Let me explain the parenthetic remark in the header, which might otherwise seem obscure. This one is the purest of motives. And clearly it drives us. But perhaps we place ourselves in the role of teacher too much rather than focus on the typical instructor. Keep in mind that the track record on innovation with learning technology by tenure track faculty is spotty at best. So we push for innovation that may be anti-utilitarian. We raise expectations and costs this way and may produce a lot of disappointment, in terms of less uptake than anticipated. Is this the way a twenty-first century manager should show leadership?
Competition with Peers (Satisfying our Insecurity Complexes by Keeping up with the Joneses.)
This is a less pure motive but it may be more primal in explaining our behavior. Over there at Peer U, they’re supporting a different CMS than we have here. It has some really cool features that their tech folks say their faculty absolutely love. We better get those, and soon, or we’ll look like slouches and our faculty will complain that we’re falling behind.
- - - - -
Taken together, this is not a bad list of reasons. It is not entirely uplifting, to be sure, but most of the reasons are defensible. The thing is, if you reflect on them for a while you likely will conclude that while some innovation in the software is a good thing, the pace of change that we advocate for is probably too rapid. We know Microsoft turns over its product offerings quite regularly, perhaps on a biennial basis. And we know why they do it, too. (Non-economists who want to impress their friends can refer to the Coase conjecture about how a durable goods monopolist will end up pricing at marginal cost.)
But we aren’t Microsoft and we should know better. That the vendors push innovation on us might be understandable. But that is not my experience. We, the informed users of these products, are incredibly demanding in what we want to see in them. This is something for us to ponder.
Monday, July 25, 2005
The Money Gods
On the way back from San Francisco last week I started The Testament by John Grisham and finished it last night. It’s not a great book, by any means, but it is different from his formulaic stories about lawyers in Southern towns. As I was getting into the book, I realized that one of the draws of the story was the black and white morality play embedded within. Money, and there always seems to be fabulous amounts of the stuff in Grisham stories, provides the road to perdition. All the characters who pursue it, by making undeserved claims on the gigantic estate that provides the source for the title of the book, are thoroughly awful people, leeches on society, making no positive contribution whatsoever. The only pure characters in the story are the heroine, a missionary serving the Indians in the remote reaches of Brazil who happens to be the illegitimate daughter of the billionaire who has left the entire estate to her, and the recovering alcoholic lawyer whose job it is to find her and get her to sign the will. It’s easy to identify the good guys and the bad guys in this story. Then reading the book becomes a repeated exercise in asking what’s next and in trying to piece together the jigsaw puzzle plot.
It occurred to me over the weekend, in between reading sessions, that we should reflect on the role of money in the minds of our students. Quite seriously, the way the pursuit of money acts on their psyche might very well be the determining factor in how these students perceive the college experience. It might not be there quite as overtly as in the Grisham story; after all, the money we are talking about comes down the road, after the education has been completed. But its presence is surely felt: for example, to get a good job, get a good education; choose the major by noting the starting salaries in the field; and measure the return to college by the difference in lifetime earnings between a grad and a non-grad.
Of course, real life is not as simple as a Grisham novel. Many of our students come from families of modest means. The students’ interest in money reflects an aspiration for improvement, to have a better life than their parents have. Certainly that is not bad in itself.
But as I’ve indicated in previous posts, the alienation that many students feel has at least part of its roots in chasing the money gods. If we are to overcome the alienation, we need to offer something else in its stead. Recently I’ve heard frequent mention of Maslow and “self-actualization,” a somewhat old but certainly not outdated notion. (Jack Wilson mentioned Maslow in his plenary address at the WebCT conference.) I think that an emphasis on self-actualization is a good thing and requires going beyond the courses students take to a look at the full student environment. (And that is why, contrary to what Jack Wilson argued, I don’t think we should encourage students to assemble a portfolio of courses taken from many different institutions to provide evidence that all degree requirements have been fulfilled.)
How does a campus encourage self-actualization in students? One way is to promote student clubs or groups where the work is the essence of what these organizations are about and where the work is perceived as leading edge, requiring creativity, and of some consequence to others, but also where doing the work fills a social need of belonging to something bigger than oneself. We have several examples of that on my campus; one of the best is ACM. Until they built the new Siebel Center, I used to walk by the ACM office a couple of times a day in the Digital Computer Lab where my office is located. The ACM office would always be crowded with students who seemed to be interacting with each other. For those who self-select into this environment it seems a marvelous opportunity. But what about other students who choose not to participate in such clubs?
In the documentary Declining by Degrees, there was some mention of Living and Learning Communities, where the students take many classes in lockstep and get to form a bond with each other and with their teachers as a consequence. This may be an especially good thing for first year students, particularly at large public universities such as Illinois which can seem impersonal and distant. But Living and Learning Communities are impractical as a total solution because campuses such as ours don’t have sufficient housing stock to keep upperclass students on campus and, in addition, as students get further into their major the ability to stay in lockstep with other students becomes increasingly difficult to maintain.
My view is there needs to be a systematic, shall we say campus-wide, requirement for a “service learning” activity that would in part be aimed at filling the student need for self-actualization. While most service learning programs on a campus such as mine reach outward toward the less fortunate in society, it is my view that the bulk of this work should be directed inward, toward the learning of other students on campus. This is why I’ve advocated for having more senior students serve as mentor/teachers for more junior students, under the direction of faculty members who regard the student mentor/teachers as extensions of themselves in instruction. It is really the only way that I can see for the majority of our students to get serious mentorship themselves from faculty members, who otherwise are too busy to have such relationships with students. And it is probably the only across-the-board approach to providing self-help for campuses facing tight budgets.
Does this mean the student chase of the money gods will end and we’ll all become socialists? We’re not even close to that now and I don’t think it will be likely in the future, even with a vigorous program of service learning on campus. But if we were to go down this path it would mean that we will have an overt strategy for countering the student alienation and for offering the students a broader sense of self and how to achieve their own personal growth. In the meantime, let’s not be taken for money god chasers ourselves and let’s continue to discuss how we might make Maslow more relevant in the pursuit of undergraduate education.
Sunday, July 24, 2005
How do alumni learn?
This sounds good. And, of course, it has a message that we want to hear inside higher ed because it means the demand for our services will increase in the future even if population growth and immigration are limited. Moreover, it seems to jibe with current predictions about the growth in e-Learning enrollments, which are quite bullish. Are we to have the renaissance in e-Learning that was predicted in the late ‘90s but didn’t materialize then? If so, how do we position ourselves for it?
A very long time ago when I was a graduate student, this is when the Econ department at Northwestern was still situated mostly in houses that bordered Sheridan Road, I can recall talking with Frank Brechling, then a Macroeconomics Professor. He argued that if there was a relationship between variables then it would invariably show up in the regressions, no matter how the model was specified. Is there a relationship between alumni and e-Learning? If so, how is it showing up? I note here that there are more than a few similarities between the University of Massachusetts, with Boston the dominant city in the state, and the University of Illinois, with Chicago likewise dominant. If Jack Wilson is right, we should be seeing the symptoms too, shouldn’t we?
Where would those symptoms show up? I don’t know but here are some guesses.
1) Faculty would hear from their former students who are working or looking for work about additional education needs.
2) Likewise, the Alumni Association would hear the request from alums and pass that along to appropriate members of the administration.
3) Since we have much fund raising done within the colleges, the various fund raisers would field similar requests.
4) Gifts to the university are targeted to support online education aimed at alumni students.
5) Early adopter schools that initiate online programs aimed at alumni experience surprisingly high uptake.
6) Among the current wave of online students, there is a substantial number who have self-identified as alumni.
Quite possibly there are other important indicators that I haven’t mentioned.
I bring this up for a few reasons, all of which make me skeptical. I’ve no doubt about the need for retooling in the labor market. But I’m unclear on the role of higher education in that retooling. Doesn’t the bulk of it have to be employer-supplied training that is quite specific to the particular job? I’m also unclear on the tightness of the alumni bond. With distance not a barrier online, why not attend the program with the best reputation for its online offering? Certainly, if there is a bond it is to the campus, not the university. But if the bulk of the online offerings are coming from a different campus of the university, that would seem to be only a very weak bond.
Jack Wilson argued further that each campus in the system would hire additional faculty according to its current profile. The Amherst campus has mostly tenure track faculty, while the Boston campus has many adjuncts. My assumption is that it is easier to expand online programs with adjuncts. This has me scratching my head.
There may be some easy wins here and if so we should see some rapid progress. I’ll be paying attention to UMass for the next couple of years to see how this plays out there, both the growth of online programs in general and things targeted at alumni in particular. If this area does grow disproportionately it stands to have a substantial impact on the character of the university.
Saturday, July 23, 2005
What does interoperability mean?
At my level, the issues are a little less abstract. The list includes the following:
(1) If the tool is not part of Vista, does it reside on its own separate server or on the Vista cluster?
(2) If the tool is not part of Vista, who administers the tool? Do we do that or does some other campus unit do that?
(3) If in (2) above the answer is some other campus unit, do we start to allow other campus units to administer within Vista itself?
(4) How are upgrades managed? Do we upgrade the tool at a different time than we upgrade Vista proper?
(5) Do standards matter if we are using other vendor-supplied tools? If the vendors are willing to make their tools PowerLink compatible, do we care? In other words, will this be a criterion by which other vendor tools will be selected in the future?
(6) What things should we ask WebCT to develop within the Vista environment? Will we care in the future whether things are interoperable but separate tools versus fully integrated into the basic environment?
(7) Even if the tool is fully interoperable, does the integration of the tool with Vista entail substantial effort and cost?
At the same conference WebCT announced its approach to ePortfolios, with the first release coming out next spring. Seemingly a major selling point is the tight integration with Vista. So, obviously, we're not yet there with full interoperability.
When might we get there? That is the really big question.
Friday, July 22, 2005
The back door into the CMS
I don't normally try to speak for the profession as a whole, but at the conference here I heard this point made multiple times from representatives of many different universities. Of course, it wasn't made directly. It was made about "back door" paths into the CMS.
One that got plenty of discussion at the sessions I attended was automatic final grade submission from the CMS into the Student Information System. This is a biggie. The thought is that if faculty don't have to enter final grades one by one on a per student basis, it will encourage use of the CMS grade book. (That is probably a reasonable assumption.) The related thought is that once the faculty use the CMS grade book, they will start to use other features of the CMS - for instruction. I don't know about that, perhaps it's true.
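To make the back door concrete, here is a minimal sketch of what such an automated hand-off might look like. Everything in it is hypothetical - the CSV layout, the SIS endpoint, and the credential are invented placeholders, not the actual WebCT or campus SIS interfaces.

```python
# Hypothetical sketch: push final grades exported from a CMS grade book to an SIS.
# The CSV columns, endpoint URL, and credential below are invented placeholders.
import csv
import json
import urllib.request

SIS_ENDPOINT = "https://sis.example.edu/api/final-grades"  # placeholder, not a real SIS API
API_TOKEN = "replace-me"                                   # placeholder credential

def load_grades(csv_path):
    """Read a grade book export with columns: student_id, section_id, final_grade."""
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

def submit_grades(rows):
    """POST the grades as JSON; a real integration would add auditing and retries."""
    payload = json.dumps({"grades": rows}).encode("utf-8")
    req = urllib.request.Request(
        SIS_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + API_TOKEN},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(submit_grades(load_grades("gradebook_export.csv")))
```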
A related thought I heard expressed is for the Registrar to advise the Provost to mandate CMS use for final submission of grades. In other words, don't allow the faculty to directly submit grades into the SIS but instead have it go through the CMS grade book. I'm really not sure how that adds to the integrity of the grade process, but perhaps it does by having only one path for the submission of grades. When I heard that suggestion, however, all I could think of was that we learning technologists are enlisting the Provost to help us achieve the 100% CMS use that we so covet.
Another back door path is via student ID photos linked to their entries in the grade book. Faculty would like to be able to match the name with the face and this is a ready way for that to happen. This is one I've been hoping we can deliver. I've had requests for this type of functionality and I'm quite sure it would increase usage.
The more we learning technologists focus on these back door paths, however, the less we have our eye on the teaching and learning. At my campus, where budgets will preclude us from supporting other applications in the foreseeable future, it would be good to get some leverage from the CMS on the teaching and learning front. Perhaps we should be paying more attention to the front door path and figure out more ways by which using the software benefits learners.
Thursday, July 21, 2005
On journal articles and weblogs
At the conference here I've chatted with a few people who say they occasionally read my posts and like them. Hearing that is gratifying. Yet I write for myself, definitely. And I like to write informally because the informality helps to keep up the flow and get the ideas out. That others might read the writing and enjoy it is some form of validation, not the type that will give a raise or a promotion, but validation nonetheless.
I believe I had much more intellectual energy when I was younger and could keep more balls in the air. So I think it's possible I could have done a substantial amount of blog like writing and a decent share of the more formal research stuff too. But my willpower is nil. Keeping some balance on that requires discipline. I recall enough of my work habits as an Assistant Professor to know it happened in cycles, intense periods of productivity and then fallow periods, watching TV, playing pool, hanging out.
Blogging is not the same. When I write the blog, I'm focused on ideas in the piece. It's not time to let other ideas percolate. The writing is fun. But it is work too. Getting the argument coherent is work. Answering why it is interesting to me is also work. It is not the same work as writing an econ journal article where to figure out the model might take months and one has to stay with it and dig deeper to get the meaning. The blog writing is shorter. I've written in earlier posts about the need for pre-writing. There is some probing in that. But it is not as in depth and I don't have to document the thinking in the same way.
One economist who does both is J. Bradford DeLong. He is prolific in both domains. His blog gets a wide readership. I wonder how many Brad DeLongs there are in other disciplines.
I now want to consider journals in the educational technology arena, with the Educause periodicals as perhaps the quintessential examples. Writing for them is different from writing for the International Economic Review (an Econ journal where I have a co-authored piece from way back when). The division between the writing is less sharp to me. To publish in Educause the writing has to be more formal, but the nature of the argument is similar.
This morning I had a look at Stephen Downes' Principles for Evaluating Websites. The first principle is --- There are no authorities. I must say I found this disturbing. I wonder if there are degrees of authoritativeness and if publishing in an Educause periodical gives more authority to the writing than if it resides on the author's site. I also wonder if that can be measured in some way. I readily admit I have bad ideas, lots of them. Some find their way into my blog. I don't want to instantaneously give authority to my ideas for this very reason.
So I wonder if my approach, throw out a whole bunch of stuff, one piece a day, and see what sticks, is a good one, or if I should try to elevate some of the ideas that on re-inspection I think are worth promoting and then bring those to the journals.
But that is me now. As an assistant professor, would that be a good research strategy? For getting tenure, I doubt it is. A better strategy is for the next paper to in some ways be a derivative of the previous one and in that way carve out a niche and reputation for a certain type of work. Of course, this deep and narrow approach comes at the cost of less breadth. The blogging allows the author to dance all over the place. That is part of the attraction. One wonders if it is a good way to keep the research mind fertile.
Wednesday, July 20, 2005
Growth and further development of Course Management Systems
I spent a good part of today at meetings where the discussion was about further development of CMS. I want to make only one observation about it here. Schools that are large and were on the bleeding edge a year or two ago seem like they will be slowing down in their plans to implement current versions of the software. Schools that are new to the software want the most current version, of course, but the schools that have been there for a while don't want to make changes unless they are quite significant. I wonder if an annual release cycle continues to make sense.
Tuesday, July 19, 2005
Forces Affecting Change in Higher Ed
Today's Chronicle has a piece on the revised Higher Education Bill and the role of Accreditation. Apparently those representing Higher Ed, such as Terry Hartle of the American Council on Education, and those representing the accrediting bodies, such as Judith Eaton of the Council for Higher Education Accreditation, like the revised bill in that it has toned down the publicness of the accreditation process. The article has a quote from Eaton to the effect that if the process is too open then the universities, such as mine, won't be forthcoming with the accreditors about internal issues for fear they will turn into political footballs. I saw Terry Hartle present when I was at the Frye Institute and he impressed me as a thoughtful and sensible advocate. So I take the point seriously. But my idealistic half is chagrined that we can't make the issues more open. Higher Ed, after all, is supposed to champion debate and differences in point of view. Perhaps we can do that for the situation in the Middle East and affirmative action, etc. But apparently we can't do it with regard to our own business practice because those outside higher ed may react in pernicious ways (cut funding, impose unrealistic further regulation, etc.).
Another outside force is competition. It can exert pressure both directly, students have alternative institutions that are viable for them to attend, and indirectly, by providing alternative models to our approach. My campus seemingly is not suffering from direct competition at present. We are swelling with new students and have been for the past few years. If anything, our goal is to cut back on the size of the entering class, since our state subsidy does not vary with the number of students and our facilities are aimed at a certain sized student population. This, of course, is a nice position to be in because it suggests that tuition can be raised without pushing enrollments below target.
But simultaneously we are suffering from tight budgets, and here looking at the competition for alternative approaches is instructive. Take a look at this page from the University of Phoenix describing its program offerings and its course offerings in the humanities. To get a better sense of what information is being conveyed, contrast that page with this one from my campus, which shows a much larger set of programs indicating a far greater variety of offerings. Phoenix has been accused of practicing a cherry picking strategy - concentrating on programs for which there is high demand by students and for which they think they can produce a degree that has value in the marketplace. Their undergraduate offerings focus on business areas and information technology. It is interesting to see that they have nothing at the undergraduate level in Education, but do offer programs in Education at the master's level.
As we know, Phoenix has a model akin to the model of the large Open Universities where a course is produced and then instructors are trained to teach that course, staying pretty much on script, and where the instructors get a substantial amount of preparation in how to teach. Obviously, that is quite different from what we do. I'm not ready to argue that we should start imitating Phoenix but it certainly serves to contrast with what we do and focusing just on general education, perhaps we need to take a closer look.
There is considerable reason to expect higher ed in dealing with undergraduate education, particularly at the public research university level, to stay in a holding pattern for some time to come. None of the forces for change appear particularly strong at present. I wonder if I'm reading the tea leaves properly.
Monday, July 18, 2005
Teaching as Modeling
I tried to encourage one of the trainers to explain what it is that we do in these training sessions. The one on Vista basics (which is the intro) is two hours. There is much more to show about Vista than can be done in two hours. Things that are absolute musts concern how to request a course site, browser configuration and Java, our policy for uploading rosters, and how they can get additional help. That takes a chunk of time.
The trainers know from a bunch of evaluations they have done previously that most people attending these sessions come to learn about document distribution, navigation within the system, and user management. So the bulk of the time is spent on the file manager and linking with organizer pages, navigation, and an overview of the Vista grade book, all nuts and bolts stuff.
We talked about different views on how to deliver the training, where this trainer differed from the other staff who work for me and who also do training. The issue amounted to how structured the approach would be versus how flexible it might be to accommodate the specific needs of those in attendance. They have found a middle ground by leaving about 15 minutes at the end for Q&A and the possibility of showcasing other tools in Vista in response to questions posed by attendees of the sessions.
Our training is now voluntary, but it used to be mandatory. And it used to have a first piece that had the instructors play in a student role so they could get a sense of the environment from the student perspective. The teaching goal is admirable but both because the training was required and because instructors getting this training are extremely instrumental in their view of the course management system, this part has been excised from the current approach.
I suggested that they bring it back in as follows. At the very end of the session they have the attendees do a paper-based evaluation - for lessons learned and also to get the general impressions of the attendees. Why not do that survey online, instead, using the Vista survey tool? And why not show some other slick features of Vista from the student's perspective, like selective release?
In other words, put the attendees in the role of students for tasks that would be done in the training session anyway. So they take the survey online and after the survey is done some documents are released to them that show how prior attendees had completed the survey. The whole thing would take 5 or 10 minutes. It would give an inkling of what these features are like from the perspective of the students. And for those attendees who put two and two together, it might suggest something they should try in their own teaching.
This, of course, is how I think we all should teach. Many of the lessons are en passant. That doesn't mean they are without forethought, quite the contrary. But they are not an explicit object of the lesson. The problem with making things explicit is the problem that all novices encounter. There is too much to process. But there may be a lot of flavor to what is being presented. So from the instructor's perspective, make overt a few specifics and then let more of the nuance in en passant.
The reaction from the student that you are trying to evoke is, "That was interesting, and useful. In fact, I'm going to go back to review it to see what I didn't pick up on the first pass." The reaction that you're likely to get if everything is made explicit up front is, "This is too hard. There is so much to learn. I'd be better off doing something else."
So there is a risk in being too structured in the teaching. There can be a risk in erring in the other direction too. The only way I know to find the right balance is to try it out and then tweak.
Sunday, July 17, 2005
Non-Denial Denials in Higher Ed
That's why the stakes are so high: this scandal is about the unmasking of an ill-conceived war, not the unmasking of a C.I.A. operative who posed for Vanity Fair.
This is why the documentary Declining by Degrees is so interesting. Higher Ed has had a free pass for as long as I've been on the scene. It has been a secure monopoly, the gateway to the good life. Now it is being challenged in that role, both on the cost and on the quality front. Rather than face that challenge squarely, we're getting a head in the sand approach.
The University of Arizona was featured in that documentary, an exemplar of the large public research university. It is telling that the day after the documentary aired, the Chronicle ran a piece saying that administrators at Arizona were upset with how the piece depicted their university. The producers of that documentary were not out to get the University of Arizona, not at all. They were out to illustrate some of the problems Higher Ed is facing today with undergraduate education and that the large research universities are struggling here. The denial in the Chronicle was both predictable and comical.
It may not be human nature to face serious problems head on. And in this case especially, since these campuses must continue to recruit students and promise parents that they are doing well on the social contract to educate their children, it is hard to imagine a forthright endorsement of the themes in Declining by Degrees coming from within Higher Ed Administration.
Those in Information Technology more broadly and educational technology more specifically live this non-denial denial on a regular basis. This plays out in our budgets and the services we are expected to provide; never the twain shall meet. We need leadership willing to openly express what is really going on. But don't hold your breath.
Although the economy is rebounding nicely now, so we may get a temporary respite, I fear we're headed for worse problems in Higher Ed over the next five to ten years and we're not doing enough to get ready for that now. It is this lack of positioning that is the real problem. But if you don't acknowledge that things are tough, why should you change the way you do things?
Saturday, July 16, 2005
On Warts, Farts, and Gaffes
Partly for that reason, and partly because, as I've written in earlier posts, I'm becoming comfortable living inside my own skin, I'm interested in the uglier parts, the "warts" so to speak. The recent posts on large class instruction provide an example. Some other posts about why I'm ok with software provided by the market (when everyone else seems to be talking about open source) give additional examples.
A couple of evenings ago after a long day I watched parts of Good Will Hunting. That movie has been on TV too much and may have lost some of its power as a consequence. But it is a good one to veg out with and does make some interesting points. One of the key scenes is when the psychiatrist Sean, played by Robin Williams, finally earns the trust of Will, played by Matt Damon. Sean does this by talking about his deceased wife, how she farted while she slept, and that was the ice breaker. It was the mundane and personal, not the profound and universal, that did the trick. Sometimes I think we in educational technology try too hard to show what we know and that inadvertently ends up being off-putting to the instructors we support. Maybe we should spend some time talking about our screw ups, not for self-flagellation, just to bring the conversation down a couple of notches and to make it personal.
Now I'm going to be cranky. That's happening more frequently, but most of the time I don't express it in writing. This past week or so, I've gone to Wikipedia several times to look things up. Earlier in the week I checked out an econ idea, efficiency wages. The entry was not bad. Then I checked out another econ idea, the hold-up problem. The entry was more spartan and the sole reference was to a book by Luis Cabral. I know Luis and I like him; we even have a short paper together with a former student of mine, Vasco Santos. But Cabral is not the right reference for the hold-up problem. Oliver Williamson needs to be cited. I mentally noted that and then forgot about it for a few days.
This morning I looked up the word "gaffe." The entry in Wikipedia got me mad. I learned that word when I had a subscription to The New Republic, perhaps twenty years ago. The critical idea, as I recall, is that some politico makes an error by speaking the truth. The Wikipedia entry didn't include the truth part and seemed to equate gaffe with blooper. That really bothered me. I went to Dictionary.com and looked up gaffe. I got the same bad definition. Aarghh!! So who is the authority writing these pearls of wisdom? I went back to Wikipedia and added a sentence to the definition. (I didn't log in but was able to do this.) The change was duly recorded and attributed to my IP address (which is meaningless since I'm at home and the ISP uses DHCP). Is this a way to build an encyclopedia? Or is this a way to get near and far approximations thereof? What if I hadn't known the word gaffe from an earlier period in my life? Would I then have accepted the definition Wikipedia offered up? There are some people who believe that usage rules. Perhaps. But spreading misinformation and turning that into usage should not. If this is what community software produces, count me out.
Friday, July 15, 2005
On Data, Schlock Social Science, and Big Brother
This past spring at one of the Campus Ed Tech Board meetings here in Urbana, we discussed the monitoring capabilities of WebCT Vista. I was interested and surprised to learn that even faculty whom I thought were active users of that environment were nonetheless entirely ignorant of the Vista reporting capabilities available to instructors. So I showed some of these off with screen shots. (I’ll review those below.) We had a representative from the Provost’s office at this meeting and, in discussing the NCA accreditation visit a few years hence, he said they’ve moved away from across-the-board evaluations and now do something much more targeted, so perhaps Casey Green has been making the “bring data” argument in too melodramatic a way. But he also said that perhaps the accrediting associations in the various disciplines might want broader evidence of student engagement and student learning. That said, I’m fairly confident that most college level or departmental administrators are even less informed than the Ed Tech Board faculty about the data reports that Vista can generate.
Vista does indeed offer various types of usage reports and someone with the appropriate privileges can get those at the class level, aggregated or disaggregated, as well as at the individual student level. To see what these look like, I’ve done a report on my Campus Honors Class from spring 2004. This is a full semester’s worth of data for a class with 15 students. There may be some information in the relative tool usage. (My use of assessments was comparatively high because I did use content surveys.) This same set of data can be viewed disaggregated by student (and here I changed the names for privacy reasons). And one can view individual student data disaggregated by session.
While looking at these reports may already arouse in you thoughts about Big Brother, let’s suspend those for a bit and ask this straightforward question. What is being measured? My sense is that mostly it is measuring the students’ “clicks” inside the course management system. So, for example, in the Discussion boards the student can open a thread as a scroll with one click. There might be 10 or 15 messages in that thread. I’m thinking that is counted as one message read (but I’d like to be enlightened on that point). Certainly, the student could have other applications open and toggle between the CMS and, say, an Instant Messaging session. The durations reported must be either till logout (for the application as a whole) or till the work has been done, in the case of an assessment where completing the work means the assessment has been submitted. In an absolute sense, I would put very little stock in the duration numbers. Those numbers may be useful as a comparative across students.
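To illustrate the bookkeeping, here is a minimal sketch assuming a generic click log rather than Vista’s actual schema: each row is one click event with a student id, a tool name, and a timestamp, and the “duration” is just the span between a student’s first and last click, which is exactly why the absolute numbers deserve so little trust.

```python
# Hypothetical sketch: per-student click counts and "durations" from a generic activity log.
# The log format (student_id, tool, timestamp) is an assumption, not Vista's real schema.
from collections import defaultdict
from datetime import datetime

log = [
    ("s1", "discussion", "2005-02-01 10:00"),
    ("s1", "discussion", "2005-02-01 10:20"),
    ("s1", "assessment", "2005-02-01 10:45"),
    ("s2", "content",    "2005-02-01 11:00"),
    ("s2", "discussion", "2005-02-01 11:05"),
]

clicks = defaultdict(int)
first_last = {}

for student, tool, ts in log:
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    clicks[student] += 1
    lo, hi = first_last.get(student, (t, t))
    first_last[student] = (min(lo, t), max(hi, t))

for student, n in sorted(clicks.items()):
    lo, hi = first_last[student]
    minutes = (hi - lo).total_seconds() / 60  # "duration" is just the click span, nothing more
    print(student, n, "clicks,", round(minutes), "minutes")
```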
As an instructor I would use this type of data in two different ways (but in a regular class, not the honors class). First, for a kid who complains that the virtual dog ate the bytes (of his homework), or for a kid who seems like a complete screw-off but claims to have put a lot of effort into the class, such reports are a ready way to either support the students and give them credit for the work or contradict the students and call their bluff. I’ve used Mallard in the past this way. It does help and it cuts out some shenanigans that any large class instructor I know would rather do without. Second, I would look at the outlier students, both the very good and the very bad, and see if they look different in the way the report measures. Who is online more? Who posts more to the discussion board? etc. This information might help me consider how to modify the course the next time through. (If the better students are online more, I’d urge the other students to do likewise. If it is the poorer students who are spending a lot of time with the quizzes, perhaps some of those need to be rewritten, but coming to that conclusion would require more investigation.)
Beyond this, I’m not sure about the value of the information content. We can run aggregates of these and at one point I thought that might be useful to provide instructors with benchmarks for comparison (but when there are many students over a long time period the report can take quite a while to generate and doing this is server intensive). Consequently, until there is some demand for these reports from elsewhere on campus, we’re not likely to do this on our own.
Do legitimate students have reasons to be concerned that these types of data are gathered in such reports, and should others on campus be concerned on behalf of these students in the same way that there is concern about providers of Web applications who use their applications to monitor individual online behavior? My answer (and that is not the official one) is that as long as the instructors don’t post these reports with the student identities to a public Web site then there is no harm. Aggregate reports, such as the first one I listed, can be displayed without hesitation. Again, this is only my opinion, not the campus endorsement.
Let’s take a leap of imagination and imagine that others wanted these reports. Further let’s suppose that they want to use the reports to consider how the technology affects student learning. Imagine that! This, more than the student privacy issue, is where the real trouble begins.
Whoever has initiated the request, although probably not a social scientist, likely has an implicit idea of a model such as the following.
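Something along the lines of: Learning = f(Student, Instructor, Technology), with each name standing in for whatever observable variables can be found to proxy for it.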
Now the real fun starts. It’s easy enough to list the variables but do we have a clue on how to measure any of them? Student is probably the easiest because the campus is in the business of measuring this already. Between SAT scores and GPA in prior semester, there are two continuous variables that should work reasonably well. I’ve said in earlier posts that I’m no great fan of the SAT, but really, this is the least of the data problems, so let’s just push on.
How in tarnation does one measure Instructor? If it is a categorical variable used in a small study, so that each instructor can be attributed their individual effect, fine. Then one doesn’t have to get into the causality from instructor to learning, but can simply view each instructor as a separate regime. I’ve been told by folks who do education evaluation for a business that in such small studies, if one does control for both Student and Teacher, there is usually insufficient variation left over to worry about any other variable (let alone to worry about whether the effect is linear or interactive). In a larger study there might be such variation, but then we’re back to square one on the definition of Instructor. Should salary be used? What about prior course evaluation ratings? How about teaching experience? And is there a need to account for the computer aptitude of the instructor? Say, let’s be clever and build an index based on several of these variables. But what justifies that?
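For concreteness, here is a minimal sketch of the small-study version, treating Instructor as a categorical (fixed effect) variable; the data and column names are invented for illustration, and the statsmodels formula interface is just one convenient way to run it.

```python
# Hypothetical sketch: each instructor as a separate regime via a categorical term.
# All numbers and column names (score, sat, prior_gpa, online_hours) are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":        [78, 85, 91, 62, 88, 74, 95, 70, 81, 67, 90, 76],
    "sat":          [1150, 1300, 1400, 1050, 1350, 1200, 1450, 1100, 1250, 1080, 1380, 1180],
    "prior_gpa":    [2.9, 3.4, 3.8, 2.5, 3.6, 3.1, 3.9, 2.8, 3.2, 2.6, 3.7, 3.0],
    "online_hours": [4, 9, 12, 3, 10, 6, 14, 5, 8, 4, 11, 7],
    "instructor":   ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
})

# C(instructor) gives each instructor a separate intercept, so no causal story about
# "the instructor effect" is needed; each one is simply treated as its own regime.
model = smf.ols("score ~ sat + prior_gpa + online_hours + C(instructor)", data=df).fit()
print(model.params)
```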
There are similar issues with the variable Technology (categorical or continuous), but let me turn to the last variable, Learning. Basically, no way, Jose. You can talk pretests and post-tests all you want, but in the main we don't do it, and in the few cases when we do there is a lot of concern about sandbagging on the pretest (which doesn't count in the final grade). So then you decide to go for the post-test only. But the real problem is not that it doesn't measure value added. The real problem is the "teaching to the test" phenomenon. We know about students who do well on the final but don't know a lick about the subject. Parsing out real understanding from doing well on the test is, in essence, intractable in a within-course study.
So you throw your hands up and say, OK, Learning is too hard to measure. Let's do something easier, like measuring student engagement. And let's use time spent online, something that is measured readily, as a proxy for engagement. Surely we can do that, can't we? Well, yes we can, but as I said earlier, in some classes it will be the best students who are online the most, while in others it will be the worst students, and when you pool those 
.
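Here is a toy illustration of that pooling problem, with entirely made-up numbers: in one class the stronger students spend more time online, in the other it is the weaker students (say, because they lean on the quizzes), and once the two are pooled the relationship all but vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Class A: the stronger students spend more time online.
score_a = rng.normal(80, 10, 200)
hours_a = 0.2 * score_a + rng.normal(0, 2, 200)

# Class B: the weaker students spend more time online.
score_b = rng.normal(80, 10, 200)
hours_b = 30 - 0.2 * score_b + rng.normal(0, 2, 200)

print(np.corrcoef(score_a, hours_a)[0, 1])  # strongly positive
print(np.corrcoef(score_b, hours_b)[0, 1])  # strongly negative

score = np.concatenate([score_a, score_b])
hours = np.concatenate([hours_a, hours_b])
print(np.corrcoef(score, hours)[0, 1])      # close to zero once pooled
```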
So let’s give up. But it sure is tempting to try.
Thursday, July 14, 2005
More on engagement in large classes
One of the other significant issues with using technology in a high enrollment setting is that it is hard to experiment. The requirement that the class be manageable cuts against the desire to try out new things in the course. It makes sense to do pilots in a smaller class setting first before ramping up to the new way of doing things in the large class. In some cases we offer the same course in both a large class and a small class venue. Then the smaller class can be used as the "lab" for the big class. We also teach many of these courses during the summer, and typically enrollments are lower then. Summer implementation is another way to find a place to test out teaching ideas.
The issue here is that often it is not the same people doing the teaching, and when that is the case there must be coordination among all the principals so that the lessons learned from the teaching experiment actually transfer. One mechanism for diffusing such practices would be online write-ups, and blogs might be a good way to disseminate them. I'm going to try to model that here.
While I haven't taught the big course recently, I have tried in my own teaching some things that might transfer. I've written about some of this elsewhere, so here I want to focus on ideas drawn from what I learned in my own teaching. This was in the campus honors class, where there were only fifteen students, but I think some of it does scale, and much of it should be broadly applicable.
First and foremost, the class needs to have some exercise or assignment or what have you that features metaphor. In my particular case, I did these content surveys, and the first one was on Maps. This was in an Econ course, with no particular interest in geography. The point was to use the idea of maps, something the students are familiar with already, to argue for the idea of a mathematical model, such as the model of supply and demand. I explicitly had them think through whether maps are an exact replica of reality or whether there are distortions (yes, there are). And then I had them answer whether maps are useful in spite of the distortions (of course they are). I also had them contrast maps with directions and consider when each is appropriate and the strength of each. (After I had written up this exercise on Maps, I was told that Paul Krugman had written a rather nice paper making essentially the same argument, but more elegantly than I had. Maps are a good metaphor to begin with when considering economics.)
This overt reference to metaphor in the class needs to happen fairly early in the semester, when the students are still forming their impressions about the course. And it obviously has to be relevant to the class, or it will achieve the opposite of the intended effect, which is to convince the students of the importance of reasoning with metaphor and of coming to understand new ideas via the construction of metaphors to more familiar ideas.
Later in the course the students have to do some research on their own, probably in teams of three or four students, on some open ended topic that should be of interest to the class. My suggestion here is that the teams do different research projects, though depending on class size perhaps there could be some duplication. In my class, the students were to write up their research as content surveys rather than as term papers, but the format is not important for the point I want to make. There are two key ideas. First, in some way other students must be put in a position to react to this research, and finding the right mechanism might depend on the particular course. Second, the students have to understand that they have a contribution to make.
This is the tough one. They are novices. They are reading works by professionals in the field, perhaps not fully grasping the meanings of those works. What novelty do the students have to offer? The answer, it shouldn't be a surprise by now, is metaphor. The students are more expert than the professor on how they first come to think of the idea, what experiences they have had that tie into the idea, and what metaphors they use to represent the idea. In other words, the student research is in part translation. It is a translation about how to get started on the idea. Part of what they do is discuss the idea itself and part of what they do is the translation.
These translations may not be good and they may not be what experts would use, but at least now the students have something to contribute, and they can be evaluated on how good their translation is. And in doing such an evaluation other students can be used as evaluators, because they will be in a good position to comment on whether those translations helped them begin to penetrate the idea.
This also means that the instructor can now be more explicit about her task: to add additional layers to this way of thinking about the idea, to suggest other experiences that the students likely have had that might tie into the ideas, and to bring out other metaphors that might be more helpful in thinking about the ideas.
Timing-wise, this means the student research must be completed well before the end of the term. There is just no way to get the response to this research unless there has been time allocated for it. So the students must start on these projects fairly early in the term.
If that is to happen, and the students are to make progress from the time they start, then they have to already have an idea of what to do. The very beginning of the semester must prepare the students for this research work. In my view that means modeling for them what they are to produce. As I said, in my class the students were to produce content surveys. I modeled that for them by having them take content surveys that I designed, and we discussed their responses to the surveys in class.
Here is one other suggestion, and this one I got from the Discovery class I taught earlier, where the projects were not quite as successful. Give the topics to the students and give them some initial places to look for scholarly references on the subject. They can waste a lot of time trying to come up with a do-able topic. At some later point in their careers choosing a topic may be an important skill to acquire. But in a large class setting (and even in small classes with freshmen or sophomores), the students shouldn't be expected to have that skill. And having them get started is incredibly important. They need to get some draft of their research down so they can make some real progress.
How much of this can be done in a large class setting? I don’t know. But I hope to have at least piqued the interest of ed tech folks that there are some interesting research questions for them to answer in this arena.
Wednesday, July 13, 2005
On engaging students in large classes
Let me give some background to explain what I mean. Because our budgets are tight and we are trying hard to sustain our funding, we've done more looking into our usage than we would have done otherwise. This past spring we had 170 sites for regularly scheduled classes inside our WebCT Vista instance that had at least 100 students (the biggest had about 950 students). Course coordinators can choose to have one aggregate site when there are multiple sections of the course, so these sites don't necessarily mean the face-to-face part of the class was done in large lecture. But certainly that is the case for some of them. And I'm guessing these are all undergraduate courses.
That 170 number represents about one tenth of all the undergraduate course offerings. But the registrations in these 170 courses account for about one third of all the possible undergraduate registrations and about two thirds of the registrations we had on our Vista server last spring. So from the perspective of the campus course management system, large class use really is the predominant use. We have small class use, to be sure, and some of that is with tenured faculty who like to innovate with technology in their teaching. But that is not the norm. The norm for use is large class and the norm for non-use is small class (probably for reasons articulated by Howard Strauss in his Chronicle piece from late June).
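For anyone who wants to run the same bookkeeping on their own server, it boils down to a couple of aggregate shares. A sketch, with an invented export format and placeholder campus totals (the real denominators would come from the registrar):

```python
import pandas as pd

# Hypothetical export: one row per course site on the server, with its enrollment.
sites = pd.read_csv("vista_sites.csv")  # columns: site_id, enrollment

# Campus-wide denominators; these numbers are placeholders, not our actual figures.
undergrad_course_offerings = 1700
undergrad_registrations = 150000

large = sites[sites["enrollment"] >= 100]
print("large sites:", len(large))
print("share of course offerings:", len(large) / undergrad_course_offerings)
print("share of undergrad registrations:", large["enrollment"].sum() / undergrad_registrations)
print("share of registrations on this server:", large["enrollment"].sum() / sites["enrollment"].sum())
```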
Given that, it seems reasonable to me to ask how the newer collaboration technologies might help in the large class setting. Certainly, Carol Twigg's work on course redesign needs to be recognized here. Yet to my knowledge none of the projects that Carol sponsored involved the recent wave of communication technologies. So it is interesting to inquire how those technologies might be deployed for large classes. Before getting to that, however, let's note the obvious. If a class has 200 students and one instructor and if all the communication initiated by the students goes through the instructor ---- well, that won't work. Some things must be done to make the large class manageable. That is where innovation and effective teaching practice are needed. But if you think I'm asking for miracles, you're wrong.
I've done a substantial amount of large class teaching myself. In the late '90s, as my SCALE project, I introduced a significant online component to my intermediate microeconomics course and simultaneously tripled class size, from about 60 to about 180 students. By far the most significant innovation (and of course I stole the idea from someone else, in this case Burks Oakley) was to use undergraduate peer tutors who had already taken the course. They held online office hours from 7 - 11 PM, Sun - Thurs. I selected them from the pool of students who had done well in the course, and they expressed their willingness to do this type of work by accepting my invitation and attending an orientation session held near the end of the prior term. They were paid something like $6 per hour, and getting paid to do this work mattered. But I think it also mattered that they got some coaching from me and that they were using online communication tools in an obviously constructive way.
Of course, on the novice to expert continuum these students, bright as they were, nevertheless were closer to the novice end than to the expert end. So some care needed to be exercised to put these students into a situation where they could succeed and be helpful. By and large I was able to achieve that. In the process I took a good deal of the instructional burden off of myself. So in that sense the system I came up with worked well.
But that course, like many high enrollment courses, had closed-ended content in the main. There was a textbook, and (part of) the homework was working problems from the ends of the chapters. Textbook problem solving can have benefit in the study of economics, but especially for students who don't otherwise do this type of problem solving on a regular basis, the work may seem opaque and not especially enlightening. More open ended content should be more engaging for the students, and then the econ might fit in better with what else they are learning. Indeed, there has been a push of late to do more inquiry based approaches even in large classes. But this is not straightforward. The question of how to proceed down this path raises a host of issues.
One might guess that to keep things manageable there still will have to be some closed-ended activities, quite possibly with computer grading of some of the student work. This should allow other parts of the students' work to be open ended. How is that done so as to make a coherent whole? Do our undergraduate peer mentors assist on the open ended work, and if so, what is their role? How does the open ended work get assessed? Can blogs or wikis be used to make for ensemble course projects? Can the instructor feel good about what the students are learning yet not feel overwhelmed by the venture? Exactly what constraints does teaching a large class place on the instructor? And are there ways for the large class to be empowering and to enable certain good pedagogy that can't happen in the smaller class setting? (For example, undergraduate peer tutors are not affordable in the small class setting.)
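On that last point about affordability, a back-of-envelope calculation makes it plain. The wage and office-hour schedule are the ones from my own course above; the fifteen-week term and the smaller class sizes are illustrative.

```python
# Cost of staffing online peer-tutor office hours, spread across the class.
hours_per_week = 4 * 5   # 7 - 11 PM, Sun - Thurs
weeks = 15               # assumed length of the term
wage = 6.00              # roughly what the tutors were paid per hour

term_cost = hours_per_week * weeks * wage  # $1800 for the term

for class_size in (180, 60, 30):
    print(f"{class_size} students: ${term_cost / class_size:.2f} per student")
```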
Will instructional designers and other IT experts who concentrate in the teaching and learning area help us answer these questions? I hope so, but I have my doubts. I know that here, those and related questions are the most important ones to answer. To keep our funding we need to be broadly useful, and that means being relevant to large class instructors. I wish the profession as a whole would pay more attention to these concerns.
Tuesday, July 12, 2005
What do students actually know on entrance to college?
In the for profit sector, it is easy to understand the incentive to lower standards to admit more students because head count and revenue correlate so strongly. What is the situation at the not-for-profits?
One metric, of course, is median S.A.T. scores. I'm no great fan of those tests, but given that campuses publish the scores, why not give the full distributions for enrolled students and not just the medians? I believe the median at Illinois that I've seen reported is 1240. But let's say (and I am making this up) that 25% are 100 points or more below that median. It sure would be nice to know if those students seem fundamentally different, in smarts or GPA or what not.
What about students' ability to make a public presentation in class (or even to ask a coherent and interesting question from their seats)? Do we know anything whatsoever about how students speak? A few years ago I learned that they don't do interviews for most of the students who apply here. As a matter of scale, I can understand that. But if we want our students to be articulate upon graduation, do we have any clue where they are on that when they enter?
And then there is the issue of disciplinary preparation. The rule of thumb is that a student coming from a Chicago suburb with a reasonably high average income will have had a pretty good high school education. But a student coming from a rural school with a small graduating class may not have had sufficient enrichment or challenge in high school. I believe that is also true for inner city schools, though there it is not the small graduating class that is the cause. We certainly don't want to blame bright but underprepared kids. But do we know about their situation beforehand?
This info is a few years old, but I know that in Physics they have students self-identify their lack of preparation for the Engineering-Physics course via poor performance on the first hourly exam. Only that triggers an invitation into a remediation effort, and it is voluntary on the part of the student to go for that assistance.
I'm not aware of anything similar in writing. We just don't have the resources. The kids with AP credit opt out of the required writing course, just as they opt out of the Foreign Language course. Do we have a clue about how these kids write? I doubt it. (We probably don't know about their foreign language skills either, but except in a study abroad program or an advanced course in the language, those skills won't be called into question.)
A lot of what is being preached by instructional technologists is group activity aimed at promoting collaboration and a sense of community. I think that is great for those students who are ready for it. But I have not seen anyone ask what it takes to be ready to participate in such activities. What about the speaking? What about the writing? What about the general sense of civility? And on our side of the equation, if a student is lacking in one or more of these areas, what do we say?
Here at Illinois we're swelling with entering students. I believe the first year class will have 7600. Not that long ago, 7000 was a high number. Maybe we should use performance in some of these other dimensions as a screen. (But I bet we'll just bump up the median S.A.T. score.)
Monday, July 11, 2005
On Using Email or Bulletin Boards for Teaching Writing
I spent the first fifteen years or so of my professional life interacting with fellow economists, and my taste in writing certainly was influenced by that. Probably the biggest influence came from various co-authoring experiences. The experience that stands out most in my mind is when I wrote two papers with Jan Brueckner. Jan is a senior colleague, now at Cal Irvine, and in the early 80s we used to read each other's papers, a practice that was not all that common in the department. I was the theory guy and he the applied guy, so he would come into my office looking for a way to model some situation and we'd talk about it for a while and then maybe come up with something. Or sometimes I'd be wandering the halls, stuck on something I was working on, and walk past Jan's office to see if he was in. If his door was open, then I'd be the one to initiate the conversation.
I can't really recall, but I believe that is how we got started on the papers about Adjustable Rate Mortgages as Insurance Contracts. We fought like crazy over that first paper. We argued about the model and when that got worked through we argued about the writing up of the paper. Jan was concerned mostly, at least it seemed that way to me, with producing a slick presentation. He wanted one idea to flow into the next. Everything seemed to be about connection; the sentences had to tie together and likewise for the paragraphs. I didn't see much value in that. I was much more concerned about making sure the interpretation of the model was correct. It didn't bother me that there might be a jump shift in ideas going from one paragraph to the next.
My biggest immediate lesson was that I shouldn't continue to co-author with Jan - it was just too much of a strain. (We did write one more, shorter paper together, but that was it.) It was not until ten years or so afterwards that some of the things Jan had been saying began to sink in. By that point I had gone through the Writing Across the Curriculum workshop, and some of the themes about good writing seemed more generic. The reader shouldn't do more work than necessary; it's the writer who should have done all the heavy lifting in the re-writing process so as to make things easy for the reader.
In economics, particularly the academic version where there is a formal model, the author is entitled to expect the reader to work through the model. I suppose that is where I went astray. I assumed that if the reader is working through the model, then the reader can work through the prose. But it doesn't mean that at all. The author still has the obligation to make the work readable. Jan's desire for slickness was right. I just didn't see it then.
For the last 10 years or so, in my administrative role in learning technology, I've interacted with a wide variety of folks who differ in their abilities to write, and my role, frequently, has been to make the writing better while trying hard to keep the ideas intact. In essence this was one of my primary tasks when I first started with SCALE in 1996, via my work on the SCALE evaluation. I set aside discussion of what the evaluators might study in order to make sure that their write-ups of what they did study would make sense to those in the ALN community. (See http://www.business.uiuc.edu/~l-arvan/SCALEevalf97/Intro-JALN-Efficiency.doc. The SCALE evaluation documents themselves are no longer online, but the linked document does make mention of them.) I continue to do this type of work today, certainly on internal documents intended for my boss or his boss, but also on documents for my EdTech unit's Showcase.
The last time I taught, spring 2004, I made a point of interacting with my students the same way I interact with my direct reports. We have a dialog at our one-on-one meetings, and much of that is aimed at uncovering meaning, albeit in a practical rather than philosophical manner. I tried that dialogic approach with my CHP class. They liked it very much. I think there is much to commend it.
On the writing, I again tried to mimic what I do in my work. My reports write something. Then we have an email thread on what they've written. Sometimes I will revise their piece. Sometimes they do that. We talk about it in our one-on-ones. We use all modes to critique and improve the writing.
Finally I'm in a position to address the questions I posed at the beginning of this post. As a faculty member only, I would have said no: if you can't write well, then you don't know the subject you are writing about. But I've had the experience, multiple times actually, where one of my staff gives a clear and cogent explanation aloud, only to find that what they have written is bland and not descriptive. Further, I've found that in that situation the person can't critique their own writing. It reads fine to them. This is disturbing. And it is vexing from the teacher's point of view. What should the teacher do?
If you, the teacher, care about the writing, then I do think the first approach is a written critique of the ideas, and that can be via email or as a response to a post on a bulletin board when the student's writing appears there. And, indeed, in this way the critique can happen as dialog and be quite natural. So on this score, the technology is very good.
There are then two possibilities. The first is that as the writing is revised, it morphs into something that the student and instructor both find more pleasing and reaches a level they are both comfortable with. It would be much easier as an instructor if this were the only case. Then coaching students to write better would be a largely pleasant activity. But there is a second possibility, and it is clearly the less happy one. The student may respond, yet the writing may remain mired in muck. It doesn't read well and has a constipated, overworked feel to it. The student is trying. The problem is not that the student is shirking. But we're not getting there. Now what?
The answer, I’m afraid, is that the instructor has to model good writing for the student and that means doing the rewrite, or at least part of the rewrite, on behalf of the student. Personally, I hate doing this and it seems possible that even after this the student won’t understand how to get from here to there. So I certainly don’t offer this up with an unconditional guarantee. But it seems a necessary next step to keep the student from concluding that the writing is “good enough.”
I find that editing other people's writing in this way is extremely unpleasant. And they are apt to take it as a personal attack, a rerun of Lanny versus Jan. So apart from the poor writing itself, the instructor likely has to deal with the emotional baggage that goes with it. Neither is fun taken individually. Together, who wants to confront this?
But for the student to learn, is there an alternative?
Sunday, July 10, 2005
Teaching Online to Inner City School Kids
The article itself hints at, but doesn't focus on, the fact that the fraction of African-American students here is low, and I believe it has been declining recently. The current rate is 5%. It was once as high as 9% or 10%, not stellar, but better. Our sister campus in Chicago has a much higher rate of African-American students. At the graduate level we are truly an international campus. Historically, we've been much more of a regional campus at the undergraduate level, with the in-state rate over 90%. There are efforts afoot to make the undergraduate population more diverse, but currently the focus there is on recruiting international students (who will pay the full out of state tuition). We should be doing more to attract minority students from within the state.
The News-Gazette article talks about a mentoring program between minority graduate students and minority undergrads. Undoubtedly this is a good idea. But what about something similar where the reach extends down into the high schools and the focus is on getting ready to attend school here, both on the cultural side and the academic side? If the high schools these kids attend don't offer a rigorous enough curriculum (the article talks about one kid interested in science who found his preparation inadequate and hence found it hard to compete in the classroom), why not push online alternatives taught by folks here, in conjunction with a mentoring program (some face to face, say the summer before, and then some online), to make sure these kids are ready? I believe the campus has a program during the summer after high school, before the freshman year starts. Why not push that back a year or two and get the kids when they are in 11th grade or even 10th grade?
I know budgets are tight, but the technology is certainly good enough now that this should be do-able. The hard part is not the technology. The thing to ask is how to get institutional commitment for this sort of thing. I wish I knew the answer to that.
Saturday, July 09, 2005
How Essential is Undergraduate Learning?
Here, of course, Schindler was the focus. The movie depicts him as undergoing a personal transformation. Initially he was out to make lots of money, running a factory staffed with interned Jews who provided slave labor. Somehow the barbarism of the time and the insanity of the situation changed Schindler, at least in Steven Spielberg's version of the story. Saving Jews by having them work in his factory became his raison d'ĂȘtre. Stopping the German war machine was his secondary goal. Making money had been a false idol to worship. In the ultimate scene he breaks down in front of Itzhak Stern, the accountant who ran the business, and others in the factory, full of remorse that he had earlier spent money as if it were water and thus couldn't save even more Jews when his money ran out.
Whether the story is completely true is hard to say: the Schindler Juden are real enough, but Schindler's personal odyssey is largely unknowable, and hence the book upon which the movie was based is considered a work of fiction. It is nonetheless gripping, at least as much for the idea that dire circumstances can make people reevaluate what in life gives it real meaning, and that the human response can be decent and uplifting, as for the other reality: that Schindler was a member of the Nazi party, yet he was a man, not a beast.
This type of film is a cause for reflection. In my case, the reflection might have been about my ancestors; on my mother’s side both of my grandparents were killed at Auschwitz. But instead I started to reconsider my current vocation – promoting good teaching and learning at a research university, using technology as a catalyst and an instrument. I’m continuing on with this reflection, especially in light of the recent documentary, Declining by Degrees. (Incidentally, my posts since this past Tuesday can be taken collectively as my explanation of the dual interrelated problems in College education today: the “Disengagement Compact” and the “Hyperinflation in Tuition.”)
I had not asked myself this before, but is it possible that attempts to save the entire system are foolhardy, even vain? Most of the Arizona students who showed up on camera in Declining by Degrees seemed comfortable enough, perhaps even too comfortable. Sure, there seems to be a terrible waste in how little they are putting into their own education (and in how little some of their professors are challenging them). But perhaps they should be left to their own devices. Perhaps, instead, the focus should be only on those students and teachers who are eager, and on ensuring that how and what we teach does right by them. If the rest are, to borrow a phrase from Pink Floyd, comfortably numb, maybe we should leave them there.
The Sloan Foundation, which has been one of the big advocates for online education, has made access its issue. Declining by Degrees also made access a big issue, but for them access and costs were essentially two sides of the same coin; access is restricted for lower income students who have to pay high tuition. There is nobility in promoting access. But it has never seemed like the right issue here. We have more students at the undergraduate level than we know what to do with.
The news is awash with stories about religion and fundamental belief. Today's NY Times has an article about a Viennese Cardinal's view that Darwinian evolution is not the right story and that there must have been Divine intervention. The Sandra Day O'Connor resignation and the jockeying for her replacement have brought out the beliefs about "Original Intent" held by folks such as Robert Bork, who cannot stomach other Justices like Anthony Kennedy, who has been known to change his mind. And, of course, the London bombings have brought out the faith and resolve of Prime Minister Blair and President Bush.
I am driven by doubt, not faith. All this expression of faith makes the world seem crazy to me, defying reason. So I express my alternative. And with that I have to wonder whether what I've been putting my efforts into is all that important.