Tuesday, January 31, 2006

They’re Changing Guard (with apologies to A. A. Milne)

I’m at the ELI conference in San Diego and have been learning, both from my peers and from the various presentations, about how much the world of learning technology has changed. There are three different areas I’d like to comment on.

Speaking personally, this May will mark my 10th anniversary as an ed tech administrator. That is a long time. Since for the prior 16 years I was more or less an ordinary faculty member in Economics, this no longer feels like the beginning of a journey for me, as it did in 1996. I come to conferences like ELI now not expecting to learn a vast amount of new things but rather to get my colleagues’ perspectives on issues, perhaps to pick up an idea or two, and to visit with friends. Among my peers who are senior to me, there is a feeling that the changes we are seeing are not for the better. (Some elaboration below.) There is some nostalgia for the NLII, which seemingly paid more attention to the large R1s like mine and had more direct discussion about the return on investment in learning technology, something that is not really being discussed now.

But there are fewer attendees of my generation and more in the next generation of learning technology leaders. They need their own issues. And they need to make their own mistakes. God knows we made our fair share of blunders. So let me move to the next issue.

The word “pedagogy” is deified by many of the participants, but frankly there is too little discussion of it: what it means, how to promote it, what learning we are after.

I stopped writing the above to get breakfast and am now in the plenary session, where we are talking about “learning design” instead of “instructional design.” This demonstrates the issue. We have to get past this. We have to drill down much deeper into the practice, and in that sense we need to abandon these categories and just describe what we’re doing and what the students are doing.

Two examples of this are in the measurement area. I heard a presentation from NC State about their campus evaluation effort and their current work on evaluating the effectiveness of technology in the live classroom. While the presentation made specific mention of the “dark side” – death by PowerPoint – there was no mention of good practices with the technology that might promote learning. There was nothing about Just In Time Teaching, about showcasing student work, about using the computer to record the data from in-class experiments, about encouraging students to make presentations with the technology, about showcasing information outside the stereotypes that students wouldn’t find with their Google searches, or about any other practice where we might a priori agree, “this is helping.”

Instead there was just a discussion of the coarse correlation between the technology and the learning, manifest primarily in student and faculty perceptions as measured by survey evidence. The problem, unfortunately, is that as we linger on this coarse correlation, we inadvertently communicate to the rest of the world that we really don’t know why the technology should be helping and, consequently, that we can’t possibly know whether it is helping.

Suppose, in fact, that the technology makes the instructor’s life easier but is actually worse for the students – not a bad hypothesis in my view. How do we distinguish this from the case where it is really helping the students? What evidence are we looking for? I heard nothing about that.

In a different conversation I had a similar discussion with a colleague about the course management system. Again, there was the coarse correlation, but this time on whether student use correlated with student performance. Is this the technology at all? Or is it simply a proxy for the more general observation that students who spend more time on task learn more, and those who spend more time on task will, on average, spend more time in the course management system? I tried to ask about drilling down, but then one loses power in the statistical test. We’re enamored of proving this with “hard evidence.” But what are we proving?

Let me turn to the third issue, which is how the student is being represented. This really disturbs me. Much has been presented, for example in the opening plenary session by Marc Prensky, about hours spent playing games, reading, etc. These are generational averages from which conclusions seem to be drawn about all students. I’ve polled my own campus honors students this semester. Only two out of twelve students read blogs regularly. Only four out of twelve had iPods. The vast majority are engineering students. What should be concluded?

One possible conclusion (I’ll admit it is premature and I don’t have enough data, but I might hypothesize anyway) is that serious students are unlike the students Prensky characterizes. Let’s say for the moment this is true. (And, truthfully, my students this time around don’t seem so different from the students I had two years ago, who in turn didn’t seem so different to me from my peers when I was an undergrad.) So should I abandon my teaching approach for what is being advocated because of these generational differences? Or should I be trumpeting loudly that teaching to these (statistical) mean characteristics of students is the path to ruin, not success?

Actually, much of what Prensky said was not too shocking to me or out of line with my own way of viewing things. But on one key point there seems to be a difference. This is the issue of deferred gratification and how one can defer gratification and still maintain engagement. Prensky (and others at this conference) seemingly argue that there needs to be an immediate signal that progress is being made and that it must come from external sources (moving from one level to a higher one in a video game). This, I think, is wrong and pernicious and will actually be quite limiting for the generation if it becomes the norm in behavior. Kids need to learn stick-to-it-ness, and they need to learn to take their own personal voyage of investigation, where the feedback comes internally as they make progress. How else will they learn to write? How else will they learn to work through hard problems? How will they deal with the disappointment and frustration of getting stuck?

These things are seemingly outside the agenda of the current generation of ELI leadership or, at a minimum, they are not center stage. Engagement is the focus. Engagement is not bad in itself, but it is not an end, or at least it is not the sole end. I hope people wake up soon, but if I were betting I’d bet against it. Perhaps this is evidence that the only true learning comes from making the mistakes oneself rather than from absorbing the lessons of the prior generation.

2 comments:

Jeremy said...

I wasn't at the conference, but it sounds like you're asking great questions here about the role of engagement.

"How else will they learn to write? How else will they learn to work through hard problems? How will they deal with the disappointment and frustration of getting stuck?"

I think you've nailed it with these three. Traditionally these questions haven't been addressed in any substantial way in education, but the answers might be that students will learn these important skills by listening, repeating, and being exposed to information from many disciplines (dictated by the curriculum and delivered by instructors). The students' level of interest in the skills or topics has never been relevant, because the whole point has been to make sure everyone is jumping through the same hoops at the same time.

I think what Prensky and others are envisioning is a system where learning takes place within a context the learner cares about. A kid who loves kites is going to learn more about writing and solving hard problems by building, flying, simulating, analyzing and reporting on experiences with kites (shallow example, but you know what I mean). Maybe even communicating with others around the world who share that passion and using technology of all kinds. Perhaps I'm extending this way too far, but that's what I think when engagement becomes the focus, and I love it.

You mentioned some skepticism about Prensky's generalizations about kids. I think he's focused on teens -- picture your "average" 10th Grade kid (maybe even limit it further to 10th Grade boy if you're focused on games). Your honors engineering students couldn't really be considered representative of the wider population of young people...they're older, geekier, more academically focused, etc, etc...

Lanny Arvan said...

Jeremy - thanks for your comments. Just to refine the point on where I take issue with what Prensky said: my concern is not with engagement at all but with relying on "games" as a primary means of promoting engagement, thereby introducing some artificial aspects into sustaining the engagement - particularly the quick-feedback idea, which is ever present in games but is not ever present otherwise.

In talking about the honors students, I may have confused things by also mentioning that they were engineering students. (Though if you come to Illinois and are a very good student, you likely will major in Engineering.) The issue is whether very good students, engineering or otherwise, have different profiles of technology use than more typical students.

For the sake of argument let's assume the profiles are different. That doesn't say anything about the direction of causality, but it at least raises the question of whether what is being recommended is what we should do for very good students.

Then the next step of the syllogism is that since we want our students to be very good....