Wednesday, April 20, 2005

Evaluation of learning technology

I've expressed a lot of opinion about how to teach with learning technology, and also some opinion on how the technology itself should evolve. Where does that opinion come from? What is its basis in fact? To the extent that I speak for the larger profession, one can ask: how do we know what we know? To the extent that I've gone down my own private path, one can ask: is there any reason for someone else to follow?

As an organization we gather data from surveys. Just yesterday I was talking with the staff who support the classrooms about their spring survey, and the entire organization has run a survey in each of the past two years. We learn something from these efforts, and reading the instructor comments in particular has value. Nevertheless, what we can learn from this mode of data collection is limited.

I've thought for some time that we might learn something from within the tools that we support by tracking usage. I commissioned a very simple study of how our tool "Netfiles," which is the campus branding of the Xythos software, is being used by students. I need to follow up on that more, and then perhaps commission a more full-fledged study of the same sort. The WebCT Vista software that we support offers some tracking information on tool use. We have not publicized any of that. Perhaps we should do so.
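
To make concrete what I have in mind by tracking usage, here is a minimal sketch of the kind of summary one might run over an exported activity log. It is only an illustration under assumptions: the file name and the columns (user, tool, action) are hypothetical stand-ins, not a format that Netfiles or Vista actually exports.

    # Minimal sketch: tally tool use from a hypothetical exported activity log.
    # Assumes a CSV named usage_log.csv with columns "user", "tool", "action".
    import csv
    from collections import Counter

    tool_counts = Counter()   # actions recorded per tool
    active_users = set()      # distinct students who did anything at all

    with open("usage_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            tool_counts[row["tool"]] += 1
            active_users.add(row["user"])

    print(f"Distinct active users: {len(active_users)}")
    for tool, count in tool_counts.most_common():
        print(f"{tool}: {count} actions")

Even a tally this crude would tell us which tools get real use and by how many distinct students, which is more than our surveys can say.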

Early on I had a nice discussion with Chip Bruce about evaluation, and he left me with a thought that sticks. We need to do more with an anthropological approach to learning technology - pure description, no model of behavior, no hypothesis testing. In that vein, a while back we did interviews with groups of students and then replayed some of those before public audiences. One of the interesting things was where students did their work and how they communicated with their classmates. In that regard AIM is both a blessing and a curse. Students, at least the few we talked with, apparently can't extract themselves from their social lives at their place of residence and simply say, "I'm working." They need another place for that. It was an interesting lesson. Another lesson was how hard pressed they were to come up with examples of good use of learning technology. It's much easier for them to talk about courses that don't work well.

Of course, I've had lessons over time from my own teaching and from my conversations with other faculty. I've been doing this long enough that most ideas I'm exposed to no longer seem new, and whether or not I've been organized about it, these ideas have somehow found their way into my world view about instruction. While I'm a theorist in my formal economics training and in my published economics journal articles, I consider myself empirical in my approach to learning technology, in the sense that I try to resist imposing models of behavior on what I observe and try to let the observations I think are especially important drive some of what I and my staff do. Our professional organization, Educause, has a research arm, ECAR, which does studies that rarely tell us something new but do gather a lot of data to confirm what we know intuitively. These are quite useful in communicating that knowledge to others. So, for example, ECAR has done work showing that while students spend a lot of time on computers prior to college, they still have gaps in their knowledge too substantial for them to be considered IT literate. Moreover, the technology knowledge that students pick up in college is mostly driven by their academic courses requiring some competency with the technology. The students don't typically learn new (to them) technology uses outside of the course setting.

I close by asking whether we can do better than this at evaluation during these relatively tough budget times. This is a question we need to raise repeatedly, both to rationalize the funding we do have and to redirect it toward its best use.
