I’m at the ELI conference in San Diego and have been learning, both from my peers and from the various presentations, about how things have changed in the world of learning technology. There are three different areas I’d like to comment on.
Speaking personally, this May will mark my 10th anniversary as an ed tech administrator. That is a long time. Since for the prior 16 years I was more or less an ordinary faculty member in Economics, this no longer feels like the beginning of a journey for me, as it did in 1996. I come to conferences like ELI now not expecting to learn a vast amount of new things but rather to get colleagues’ perspectives on issues, perhaps to pick up an idea or two, and to visit with friends. Among my peers who are senior to me, there is a feeling that the changes we are seeing are not for the better. (Some elaboration below.) There is some nostalgia for the NLII, which seemingly paid more attention to the large R1s like mine and had more direct discussion about the return on investment in learning technology, something that is not really being discussed now.
But there are fewer attendees of my generation and more in the next generation of learning technology leaders. They need their own issues. And they need to make their own mistakes. God only knows that we made our fair share of blunders. So let me move to the next issue.
The word “pedagogy” is deified by many of the participants, but frankly there is too little discussion of it: what it means, how to promote it, what learning we are after.
I stopped writing the above to get breakfast and am now in the plenary session, where we are talking about “learning design” instead of “instructional design.” This demonstrates the issue. We have to get past this. We have to drill down much deeper into the practice, and in that sense we need to abandon these categories and just describe what we’re doing and what the students are doing.
Two examples of this are in the measurement area. I heard a presentation from NC State about their campus evaluation effort and their current efforts with evaluating the effectiveness of technology in the live classroom. While during the presentation there was specific mention of the “dark side” – death by PowerPoint – there was no mention of good practice with the technology that might promote learning. There was nothing about Just In Time Teaching, about showcasing student work, about using the computer to record the data from in class experiments, about encouraging students to make presentations with the technology, about showcasing information outside stereotypes that students wouldn’t find with their Google searches, or about any other practice where we might a priori agree, “this is helping.”
Instead there was just a discussion of the coarse correlation between the technology and the learning, manifested primarily in student and faculty perceptions as measured by survey evidence. The problem with this, unfortunately, is that as we linger on this coarse correlation, we inadvertently communicate to the rest of the world that we really don’t know why the technology should be helping and, consequently, can’t possibly know whether it is helping.
Suppose, in fact, that the technology makes the instructor’s life easier but is actually worse for the student – not a bad hypothesis in my view. How do we distinguish this from the case where it is really helping the students? What evidence are we looking for? I heard nothing about that.
In a different conversation I had a similar discussion with a colleague about the course management system. Again, there was the coarse correlation, but this time on whether student use correlated with student performance. Is this the technology at all? Or is it simply a proxy for the more general observation that students who spend more time on task learn more, and those who spend more time on task will, on average, spend more time in the course management system? I tried to ask about drilling down, but then one loses power in the statistical test. We’re enamored with proving this with “hard evidence.” But what are we proving?
Let me turn to the third issue, which is how the student is being represented. This really disturbs me. Much has been presented, for example in the opening plenary session by Marc Prensky, about hours spent playing games, reading, etc. These are generational averages from which conclusions seem to be drawn about all students. I’ve polled my own campus honors students this semester. Only two out of twelve students read blogs regularly. Only four out of twelve had iPods. The vast majority are engineering students. What should be concluded?
One possible conclusion (I’ll admit it is premature and I don’t have enough data, but I might hypothesize anyway) is that serious students are unlike the students characterized by Prensky. Let’s say for the moment this is true. (And, truthfully, my students this time around don’t seem so different from the students I had two years ago, who in turn didn’t seem so different from my peers when I was an undergrad.) So should I abandon my teaching approach for what is being advocated because of these generational differences? Or should I be trumpeting loudly that teaching to these (statistical) mean characteristics of students is the path to ruin, not success?
Actually, much of what Prensky said was not too shocking to me or out of line with my own way of viewing things. But on one key point there seems to be a difference. This is the issue of deferred gratification and how one can practice it and maintain engagement. Prensky (and others at this conference) seemingly argue that there needs to be immediate feedback that progress is being made and that it must come from external sources (moving from one level to a higher one in a video game). This I think is wrong and pernicious, and it will actually be quite limiting for the generation if it becomes the norm in behavior. Kids need to learn stick-to-it-ness, and they need to learn to take their own personal voyage of investigation, where the feedback comes internally as they make progress. How else will they learn to write? How else will they learn to work through hard problems? How will they deal with the disappointment and frustration of getting stuck?
These things are seemingly outside the agenda of the current generation of ELI leadership or, at a minimum, they are not center stage. Engagement is the focus. Engagement is not bad in itself, but it is not an end, or at least not the sole end. I hope people wake up soon, but if I were betting I’d bet against it. Perhaps this is evidence that the only true learning comes from making mistakes for oneself rather than from absorbing the lessons of the prior generation.