Below is my reaction to the use of certain buzzwords currently in vogue among IT practitioners, particularly at the level of campus CIO. Many of these words bother me, both in their use and in the implicit ideas behind their use. This particular critique is based on a recent piece from Educause Review on the top-ten IT issues. Following my critique there is a snip from that piece that lists the issues and a backlink to the article.
Change and Change Agent
Change is a neutral word regarding consequences. Some change is for the better. Other change may be of no real consequence. Still other change may be detrimental. A blanket embrace of change signifies an inability to tell what will happen as a result or an indifference to unintended pernicious consequences. Here is a specific example to illustrate.
The physicists at Illinois who developed iClicker (hence they are certainly not against the use of technology in instruction) have adopted a practice where cell phones are forbidden in their classrooms. Their view, informed by quite a bit of experience, is that students get distracted from class activities when their phones are out. Personally, I think the physicists' solution is draconian. The point here is not to identify the "right answer." The point is that reasonable people will likely disagree on that. I have seen IT pros show no tolerance for the position the physicists have articulated. That is a problem, a big one.
This TED Talk by Peter Doolittle on Working Memory gives the conclusions from the psychology research that support the physicists' position. In a nutshell, working memory is limited in capacity, and a student on a cell phone while in class therefore has less working memory to devote to what the class is doing. Further, the possible distraction created means that other students bothered by this cell phone use also have less working memory to devote to the class. My disagreement with the physicists isn't on this point. It is with using a hard rule to get students to put away their phones. Implicitly, that is telling the students it is okay to have their phones out in other classes where there is no hard rule present. That's the wrong message.
Student Outcomes, Learning Outcomes
This is the same sort of rhetoric one associates with No Child Left Behind, Race to the Top, and the associated emphasis on standardized testing. Do the CIOs really want to promote a teach-to-the-test mindset in college instruction beyond what is already happening? Do the CIOs know about the "Disengagement Compact" that George Kuh described 10+ years ago? Do they understand that adjuncts are under more pressure than tenure-track faculty to uphold their side of the bargain in this unholy agreement, because their jobs depend on students being satisfied? So do the CIOs understand it is quite possible for outcomes to show improvement but actual learning to be less?
Last week I was at a symposium for folks in writing studies. The keynote speaker, Steve Lamos, was making just this point. I am not making it up. (See also below in the section on Analytics.)
I've been railing about this one for years. (For example, see my essay in EQ from 5 years ago, Dis-Integrating the LMS.) Technology is a facilitator, at best, and a pernicious influence at worst, such as when students have their cell phones out for personal use while in class. Technology coupled with effective practice can deliver good consequences. The rhetoric CIOs use should be peppered with the expression, "effective practice."
This is of no small consequence, even within the IT organization. The primary job of the Academic Technologies unit is not to be a second help desk, one specific to those applications used in instruction. The primary job of the Academic Technologies unit is to promote effective use and thereby encourage better teaching and learning.
Given that mission, there is then the question of whether Academic Technologies should also provide some help-desk support for instructors, because instructors are more comfortable doing one-stop shopping, or if for comparative advantage reasons all the help-desk function should be in a single place. (See items 3 and 4 of the top-ten issues list.) The answer to this might vary by the nature of the institution, its history, and the mechanism by which the faculty view gets factored into IT decisions. So I don't want to presume the answer here. But to discuss the issue, the rhetoric needs to include effective use and to make it an important goal.
Analytics
There are two different uses of this term, one specific to learning, the other to institutional research. I will take each of these in turn.
We should be encouraging students to write more in the courses they take. (That I believe this to be true is why I get invited to attend a symposium on writing studies.) Writing online greatly eases the ability to share the written work as well as to share feedback on the written work. So there is certainly a role to be played by technology in promoting writing. Yet comparisons across pieces of written work - this one shows more maturity of thought than that one - are inherently analog in nature and hence lie outside the scope of analytics. We should be about encouraging growth in student thinking over the duration of a course. Privileging quantifiable information - scores on multiple choice tests, for example - simply because it is quantifiable is a fundamental mistake that CIOs should not make. Yet even if that is not intended by item 5 on the top-ten list - the intent is to further develop early warning signs for at-risk students so appropriate interventions can be done in a timely fashion - isn't it possible, even likely, that privileging quantifiable information will be the unintended consequence?
The use of the term analytics as it pertains to institutional research suggests a view that says - let the data tell the story. We'll learn simply by looking at the data, given today's sophisticated tools to mine the data. I am extremely suspicious of that view. I'd like to present an alternative here. But before I do I need to make a brief disclaimer.
I don't want to get on the case of the folks who do institutional research officially. They do yeoman's work and do it well, especially given the limited resources available to them.
The questions that this institutional research addresses typically have a historical basis and/or are driven by the Provost's office. At issue here is whether there are other places on campus now that can make legitimate claims to ask institutional research questions, because the tools to answer those questions are now good enough to decentralize the process.
Here is an example to illustrate. Suppose the Center for Teaching on campus is making a push to help adjunct faculty who teach high-enrollment courses, so their instruction becomes both more effective and more rigorous, and therefore less teach-to-the-test in approach. Further, suppose the anticipation of all involved is that there will be a learning curve with the new approach and that early on the students might not do as well under it, but after a few iterations students will do better. This means that early in the implementation the class grade distribution may produce more mediocre grades and the student course evaluations may produce more dissatisfaction than has been the historical pattern. To help the instructors navigate through this initial phase, the Center for Teaching, which owns the course evaluation data but has no such ownership over grade information, would like to produce reports based on aggregate data of each sort derived from similar classes taught around campus. Can the Center for Teaching do this? Whose blessing is needed to okay this use? And what if there are many alternative such requests from other units on campus - specific colleges, campus human resources, student government, etc.?
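To make the data-access question concrete, here is a minimal sketch of the sort of pooled report the Center for Teaching might want to produce. Everything here is hypothetical - the course names, the numbers, and the record layout are made up for illustration; the point is only that the computation itself is trivial once someone is allowed to see both the grade data and the evaluation data side by side.

```python
from statistics import mean

# Hypothetical per-section records. Grades are on a 4.0 scale;
# evaluation scores are on a 5-point scale. All values invented.
sections = [
    {"course": "ECON 101", "grades": [3.1, 2.4, 3.7, 2.0], "evals": [3.8, 4.1]},
    {"course": "ECON 101", "grades": [2.8, 3.3, 2.2],      "evals": [3.5, 3.9]},
    {"course": "CHEM 101", "grades": [3.6, 3.0, 2.9],      "evals": [4.4, 4.2]},
]

def aggregate(sections, course):
    """Pool grades and evaluation scores across all sections of one course,
    returning only aggregates (no individual-level data leaves the report)."""
    grades = [g for s in sections if s["course"] == course for g in s["grades"]]
    evals  = [e for s in sections if s["course"] == course for e in s["evals"]]
    return {"n": len(grades),
            "mean_grade": mean(grades),
            "mean_eval": mean(evals)}

report = aggregate(sections, "ECON 101")
```

The code is the easy part. The hard part, and the point of the example, is governance: who may assemble `sections` in the first place when the grade data and the evaluation data are owned by different units.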
If analytics comes to mean that there will now be a second institutional research group, one that reports to the CIO instead of the Provost, because there are transaction data in the LMS, for example, that may be interesting to a broad audience but that aren't within the scope of traditional institutional research, this will prove rather disappointing to many on campus, as their research questions will remain unanswered. One needs an approach where the generation of research questions is done in a decentralized manner. The term analytics simply doesn't convey that.
Language is not neutral. I embrace Item 2 on the top-ten list, about an effective partnership between IT leadership and the rest of campus leadership, but in my experience it has been an elusive goal. People within IT need to understand that the language they use will have an impact on whether Item 2 can be realized. If through the use of language the implicit message conveyed is that the IT folks are first and foremost tech nerds, they will not be able to deliver on Item 2. They will be showing the rest of campus that IT lives in its own bubble.
Isn't it time for that bubble to burst?