I typically get “conferenced out” after a couple of days at an event, so I start spending more time in casual conversation than in going to sessions. On the third or fourth day of this VPI conference I recall having some beers with more junior types: a chain-smoking junior faculty member in English from NC State and, if memory serves, the son of one of the conference organizers, Len Hatfield. We talked for quite a while about how students learn and whether there is a good model for that which we could all embrace. Somewhere during that conversation, the guy from NC State said there was a really great book, The Reflective Practitioner by Donald Schon, that had the type of theory we were looking for.
Reading Schon’s book was good for me. It confirmed a lot of my own thinking about how we learn and how we know things and how we are seemingly smart within the situation and can come up with things in context, but can’t recall and don’t reason well in the more abstract setting. Though I haven’t looked at the book for quite a while, I believe Schon uses terms like “knowing in action” and “learning in action” to indicate this highly nuanced and localized knowledge that professionals possess and that apprentices aspire to. Schon also has a notion of an experiment - not an experiment in the sense of the scientific method and hypothesis testing, but rather in the sense of an exploration into something unknown where the choice of what to try and what might be expected as the outcome is driven by the knowing in action. This is the type of experimentation that I think should be a regular part of our teaching.
There was a piece in Educause Quarterly a while back that embraced the Schon approach and tried to use it as the basis for a framework of experimentation with educational technology, though it uses the term “Plan” where I would much prefer the term “experiment” for reasons that I’ll explain as I go on.
In the current Educause Quarterly there is an interview with attendees of the recent Instructional Technology Leadership Institute, held last summer at Penn State. There is a discussion of what Instructional Design means, and to my way of thinking it was cast in a very “instrumental” light. Consider these remarks from Larry Ragan, one of the instructors at the Institute.
The designer’s role is to craft a learning experience so that you achieve an outcome, and the technologist’s job is to create the environment for that to happen in. The technologist is more defined and delineated in approaching tasks, addressing hardware, support systems, and the technologies needed to get something done. The designer brings in the art. Think of an interior designer. He isn’t the one doing the building; he tells you where he thinks the lighting should go and how things should be arranged. He doesn’t build the furniture. The technologist says, “You want a chair, I’ll build you a chair.” The designer is the one who has to think about placing it.
I have some trepidation about this type of instrumental approach in support of instruction and for similar reasons I have concerns when it is applied elsewhere, as in the continuous improvement model that underlies the accreditation process and in the project management process that is how our respective IT organizations go about doing their work. In the rest of this piece I want to explore whether my fears are rational or paranoid and how I think one might reconcile a Schon view with this instrumental instructional design view.
So let me bring in my own theoretical background, which is of course economics, not teaching and learning. During my first year in graduate school, I learned continuous time decision models under certainty - Calculus of Variations and Optimal Control Theory. Some time later I learned the discrete time alternative, Dynamic Programming. And then I was exposed to but didn’t become expert in the stochastic alternatives. I did spend some time on what is called Markov Decision Processes. (How is that for a lot of jargon in one paragraph?)
Let me summarize what one learns from this type of study. A solution to such a program is a “policy” that says what to do at each time and, in the Markov case, as a function of the “state” of the system at that time. So the policy is a function that takes state and time and maps into actions. At a theoretical level, what an instructional designer should be doing is coming up with a policy, according to the articulated goals of the instructor.
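To make the jargon concrete, here is a toy sketch, entirely my own invention for illustration (the states, actions, rewards, and transition probabilities are made-up numbers, not anything from the instructional design literature): a finite-horizon Markov decision process for a course, solved by backward induction, where the computed policy is exactly a function from (state, time) to action.

```python
# Toy finite-horizon Markov decision process. A "policy" is a map
# (state, time) -> action, computed by backward induction
# (discrete-time dynamic programming). All numbers are illustrative.

states = ["behind", "on_track"]   # how the class is doing
actions = ["review", "advance"]   # what the instructor can do
T = 3                             # number of class periods

# Transition probabilities P[(state, action)] = {next_state: prob}
P = {
    ("behind", "review"):    {"on_track": 0.7, "behind": 0.3},
    ("behind", "advance"):   {"on_track": 0.2, "behind": 0.8},
    ("on_track", "review"):  {"on_track": 0.9, "behind": 0.1},
    ("on_track", "advance"): {"on_track": 0.6, "behind": 0.4},
}
# One-period reward: advancing pays off only when the class is on track.
R = {("behind", "review"): 0, ("behind", "advance"): 0,
     ("on_track", "review"): 1, ("on_track", "advance"): 2}

V = {s: 0.0 for s in states}   # terminal value: nothing after the course ends
policy = {}                    # the object of interest: (state, t) -> action
for t in reversed(range(T)):
    newV = {}
    for s in states:
        best_a, best_q = None, float("-inf")
        for a in actions:
            # expected value of taking action a in state s at time t
            q = R[(s, a)] + sum(p * V[s2] for s2, p in P[(s, a)].items())
            if q > best_q:
                best_a, best_q = a, q
        policy[(s, t)] = best_a
        newV[s] = best_q
    V = newV
```

With these particular numbers the policy says to review whenever the class is behind and to advance whenever it is on track, at every period. The point is not the answer but the shape of the solution: the optimal action depends on the state realized at each time, which is precisely what a fixed, precomputed plan cannot capture.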
However, this notion of a policy is a complex beast and it would be extremely hard to communicate, let alone derive. So a designer might solve a simpler problem by
(1) ignoring the uncertainty altogether and
(2) making the time intervals each sufficiently long so that there are not too many of them in total.
This type of abstraction is natural for the designer to make. Indeed, part of Schon's point, though it is tacit in his book, is that abstract problems of this sort are not natural to pose, so we don't solve them in their full complexity. In other words, the problem the designer solves is a simpler problem. But it means that when the plan is applied, one will almost surely go “off the plan” as time passes, and then it will be necessary either to re-calibrate the plan midstream or to stubbornly proceed with the original plan even though doing so may not seem to be working.
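The “almost surely off the plan” claim is just compounding probabilities. A back-of-the-envelope sketch (the 0.8 per-period figure is an arbitrary assumption, chosen only for illustration):

```python
# If each class period unfolds "as planned" with probability p,
# independently across periods, then the chance the whole course
# stays on plan is p**T, which decays geometrically in T.
p = 0.8  # assumed per-period probability; purely illustrative
for T in (3, 5, 10, 15):
    print(f"{T} periods: P(still on plan) = {p ** T:.3f}")
```

With fifteen class periods, even a plan that survives any single period 80% of the time has under a 4% chance of surviving the semester intact.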
An alternative is to encourage the instructor to be more robust in the instructor role, robust in the sense of Schon: to know in action what is the appropriate thing to do, not by trying to encode it ahead of time but by letting the appropriate action be determined by the instructor on the spot. I think good teachers do this all the time, and to the students it makes the instructors seem responsive, a desirable quality in my view.
But if the instructor does this type of knowing in action while teaching, then a detailed plan probably gets in the way. Some higher level scaffolding may be fine, but putting a lot of flesh on it is counterproductive. So a significant part of my concern is how the instructor handles surprise as the course is taught. In lecture classes, there will be little surprise in what the instructor presents. But there is potentially a lot of surprise in how the students will react (and then maybe some surprise in how the instructor reacts to the students' reactions, etc.). In seminar-based courses, presumably there will be surprise throughout the discussion, and then the instructor has to decide whether to bring the class "back on track" or to go with the flow of the conversation.
My sense is that the instrumental approach, in practice if not in theory, shortchanges consideration of surprise. Thus it is best at the training of known skills (like learning the rules of the road for a driver’s license) and less good where the learning is more open ended, as with doing investigative research. The latter is what seems critical for learning to learn and for higher order thinking. That is where my bias lies.
The other issue I have, specifically with instructional design and not, for example, with project management, is the agency notion and how it should be managed. The instrumental approach implies that the expertise stays with the designer; the instructor can remain ignorant of the design process and, as long as a plan is produced, proceed apace. The instructor doesn’t learn to teach in this approach (and in particular doesn’t gain exposure to knowing in action about what to do when confronting a particular teaching issue in class).
This is why a mentoring approach between an experienced instructor and another earlier in his career, or a shared-experience approach where instructors openly discuss their own teaching issues and comment on what other instructors have said, may be preferable to the instructional design approach: the agency issue is not there with mentoring or shared experience. The information sharing becomes a form of collective problem solving rather than knowledge transfer from expert to novice.
Does instructional design have to be this way? I don’t think so. I think it can be more collegial and less instrumental. But in too many cases I believe the designers cling to the instrumental approach for pretty much the same reasons that very junior faculty end up teaching “the best graduate course a freshman ever had,” to establish their credentials and justify their role in the consultation. Unfortunately, the consequence is for the designers to seem pedantic, inflexible, and quite possibly irrelevant.
Who will let them know?