Friday, June 24, 2016

A little analytics exercise for my own class

Below are some quantitative data on my course offerings for the last four years, based on information that Blogger provides and enrollment numbers that Banner keeps.  (The Excel file with these data is also available for download.)  Since I teach a small class, you might think this sort of information is superfluous.  I did this just so I could see the type of data folks who do learning analytics look at.

My takeaway from this is:

(a)  Hits/Post/Student is an indicator of participation or engagement of the class as a whole.  Three of the five offerings had that number near 6.  (Thankfully, none had the number lower than 1.)  Fall 2012 was an unusual semester (that is explained in a comment), which may have impacted the numbers.  Fall 2015 was a class I really struggled with.  The numbers seem to bear this out.
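
For concreteness, here is a minimal sketch of how the Hits/Post/Student figure can be computed from the totals Blogger and Banner report.  The offering labels, hit counts, post counts, and enrollments below are made-up placeholders for illustration, not the actual values in my spreadsheet.

# A minimal sketch of the Hits/Post/Student calculation.
# All numbers here are hypothetical placeholders.
offerings = {
    "Fall 2012": {"hits": 3000, "posts": 25, "students": 22},
    "Fall 2015": {"hits": 2100, "posts": 30, "students": 25},
}

for term, d in offerings.items():
    # Total site hits divided by (number of posts x number of enrolled students)
    ratio = d["hits"] / (d["posts"] * d["students"])
    print(f"{term}: {ratio:.2f} hits per post per student")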

(b)  Because I don't have individual student access stats, I can't say anything about student engagement this way.  But I do have a sense of wide variation in engagement across students.  On the underperformer side, these are students who are chronically late with or miss submitting course work.  On the overachiever side, these are students who will email me privately to discuss issues with an assignment and those who make regular use of office hours.  I really don't know how hits to the site correlate with these other measures, but in principle that should be measurable, as the sketch below suggests.
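
Here is a minimal sketch of what that check could look like if per-student hit counts were available.  The hit counts and the engagement scores (say, on-time submissions plus office-hour visits) are hypothetical; only the idea of correlating the two comes from the paragraph above.

# A minimal sketch: correlate per-student site hits with another
# engagement measure.  All numbers below are hypothetical.
from statistics import correlation  # available in Python 3.10+

hits = [12, 45, 30, 8, 60, 25]        # per-student site hits (hypothetical)
engagement = [2, 9, 6, 1, 10, 5]      # per-student engagement score (hypothetical)

r = correlation(hits, engagement)     # Pearson correlation coefficient
print(f"correlation between hits and engagement: {r:.2f}")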

(c)  There seem to be four possible things that could explain variation in Hits/Post/Student.  The classroom is one.  The DKH classrooms are fairly dreadful, but 223 DKH is more intimate than 123 DKH.  I hate the tablet armchairs, but they are better than bolted-down seating in a small class setting.

Class size might be another explainer.  I like to use Socratic methods, which work well in a small class setting but may break down in larger classes.  I don't know where the line might be, but 25 students may be about the max for which my approach works well.  There were attendance issues both in Spring 2012 (which is why I now teach only in the fall, under the assumption that senioritis is worse in the spring) and in Fall 2015.  In a larger class where many kids don't show up, the kids who do come may be influenced by that.

Cohort effects could be quite significant.  I may simply have had a passive bunch of students in Fall 2015.

The last thing is my experience teaching the course.  I do make it a point to try something new each time I offer the class, but there clearly was more novelty early on.  That could influence how the live class works and in turn impact how intensively the online part of the course is utilized.

(d)  If I did not have other evidence, I'm not sure that the hit data would be meaningful to me.  It is somewhat useful for confirming impressions I have formed by looking at other evidence.  But I would never use it as a primary way of getting a sense of how the class as a whole is doing.

The final thing I'll comment on is that even though the data had to be compiled by hand, it wasn't that hard to do.  So the learning analytics folks might ask some individual instructors to do something similar for their courses to see what impressions they form from doing this sort of exercise.
