Does an experienced writer know whether he's nailed a piece immediately after he's finished composing it? Or is it that in writing the thing he gets too close to his creation to understand how others will react to it, so that the actual reaction by others adds to the author's understanding of what the piece has accomplished? In that case, might the reader's takeaway from the piece remain distinct from the author's aim for it? And if so, must the reader and the author ultimately reconcile the two views? Suppose here we abstract entirely from works of fiction and consider only non-fiction pieces. Does that change how you would answer these questions? To make these questions concrete, consider that in yesterday's NY Times Op-Ed there was a piece by a Palestinian author representing the views of Palestinians in Gaza, particularly in regard to American foreign policy toward them, a foreign policy this author characterized as failed. It is reasonable to suspect that most readers of that piece are pro-Israel. Doesn't that in itself suggest that many readers will ignore the arguments the author makes rather than embrace them? Yet surely the author wrote the piece to move minds, and the Times published it, if not for that reason, then as a commitment to the notion that multiple points of view need to be heard and that its readership should respect that notion.
Let's now repeat the questions from the paragraph above, this time substituting teacher for writer (or, if you prefer, designer of learning objects instead of teacher, though that is a mouthful) and substituting student for reader. And think of these questions posed in a course from a discipline with a distinctive methodology, such as microeconomics, where the methodology itself can be considered objectively and where the learning object in question covers some aspect of that methodology. In this case it seems reasonable that for the fourth question - must the teacher and the student ultimately reconcile their views about the takeaway from the learning object? - the answer is yes, they must. Here we're apt to attribute earlier differences in perspective between teacher and student not to differences in point of view but to the fact that the teacher is an expert and the student a novice, where the novice fails to see many of the implications implicit in the learning object that the expert takes as immediate. The effective learning object, then, would move the novice in the direction of the expert, and that effectiveness would be assessed, for example, by the student's performance on an exam that the teacher has written to test the students' understanding. Of course, student understanding depends not only on the quality of the learning object but also on the student's prior understanding of this and related topics. Exam performance typically can't parse one cause from the other, so in this way it is hard to ascertain the learning object's value add, though if the object is used with a variety of student audiences, perhaps that information can be found in the aggregate results.
A different way to get at the value add question, especially for subject matter that students find difficult to master, is to consider a variety of different materials on the topic - readings perhaps from several different textbooks, plus multiple learning objects dealing with the same topic - and let the student self-report whether an Aha! moment was experienced while going through all of these. The Aha then serves as an indicator, and the learning object that was in focus immediately before the Aha occurred will likely be credited as the one with the biggest value add. But for this approach to work, the student must willingly go through many different materials until a solid understanding emerges. If most students aren't so willing, then this sort of test has limited applicability. Further, and this is the crux of the matter to me, even if certain learning objects do get identified in this manner as capable of generating the Aha, this doesn't mean that another student exposed to that particular learning object first off would reach the Aha without first spending substantial time in a muddle on the subject. Indeed, an important positive aspect of a learning object is that it encourages students to spend time playing with it, enough for the students to get familiar with the subject they are studying. But, of course, some of this stick-to-it-ness is part of the character that makes for a good student and would be better attributed to the person than to the object.
Let's switch gears now and move from the philosophical to the concrete. I want to discuss my own recent experience with learning objects designed for intermediate microeconomics. I made these either prior to or during the spring 2011 semester, the last time I taught the course. I will talk about Excelets* that I made - numerically animated graphs done in Excel - and YouTube videos, which are screen-capture movies of the Excelets with my voice-over (later captioned) giving a narrative about what is going on in the graphs, each done as a micro-lecture. For the sake of this discussion, the interesting thing is that this content has had two distinct audiences. The first is the students in that spring 2011 class. The second is students taking intermediate micro elsewhere who search, primarily within YouTube, for helpful content because they are stymied on some topic in the course they are taking. I want to contrast what I've learned from the two groups. I should note here that I also produced essay content, such as this one on Price Differentials, along with readings of these done aloud, but these did not generate an external audience, so I won't comment on them below.
In the 2011 class I didn't require a textbook, thinking my learning modules were sufficient and/or that many students don't access the textbook enough on their own to make the purchase worth their while. Further, I've never liked to follow a textbook closely, so at best it offers a different path to the content than what I do in lecture. In the course evaluations at the end of the semester, many students said they'd have liked a textbook. Let's peel that onion a bit. The intermediate microeconomics course plays a role in the curriculum akin to the role organic chemistry plays for pre-med students. It is a requirement for students in the College of Business and for some majors in the College of ACES (Agriculture), and it satisfies the social science general education requirement for some students in the College of Engineering. That is apart from its being the key prerequisite for all upper level economics courses and a requirement for the economics major. From what I know of the course, students typically don't like it, especially those in Business. It is too theoretical for their tastes and in that way is unlike the rest of their Business education. Also, the course is hard. The Engineering students are more likely to enjoy the course because it is softer than the typical engineering class, and for them the modeling is not that hard at all. Further, they can indulge a social science interest that they have but that doesn't get much attention in the rest of their studies.
Given this prior, there is the dilemma of how to motivate the students to access the learning objects and spend time with them. My imperfect solution to this quandary was to embed the learning objects inside Moodle quizzes that were required as homework. Here is an example you can access if you have Respondus. There was one question per YouTube video, and the entire quiz was devoted to the various spreadsheets in a particular Excel workbook. The hope was that the students would watch the videos and play with the Excelets in the process of doing the homework. But the students might very well try to do the quizzes without accessing the learning object content. And if they could succeed in getting a good score on the homework that way, then my mechanism would have failed to provide the right sort of encouragement. I should also note that all of this online content was made in anticipation of a subsequent offering of the course in blended format. This version of the course had that online content but retained the normal amount of face-to-face class meeting time. It was not advertised in advance as a technology intensive class.
A further feature of the course was that I had the students blog, and I divided the content of the course into a narrative piece based on the readings and the blogging (in the second half of the course we read Heilbroner's The Worldly Philosophers) and an analytic piece based on the Excelets and the Moodle quizzes. The exams tested only the latter. At least for this audience, that proved to be a blunder on my part. Many of the students perceived the course through the lens of the exams. What's on the exams is important; everything else is extraneous. It matters not, in this view, that I as the instructor select what students will do because I deem all of those activities important. Many students reported in the course evaluations that the blogging was worthless because it didn't prepare them for the exams. Also, the first midterm especially was hard, as measured by the student performance on it, and because I had not taught the course for ten years I hadn't bothered to produce a practice exam for the students before that first midterm. These students want teaching to the test, and I confounded that expectation rather than conforming to it. The upshot was that most students didn't like the class at all. And while the learning objects were not the focus of their disdain, the learning objects clearly weren't sufficient to overcome the negative disposition toward the course overall. I was quite disappointed reading those evaluations, having put a lot of energy into constructing the learning objects. But I wasn't really surprised by them, because by then the students had made clear through their performance their instrumental approach to the subject, and I knew I hadn't met them halfway in indulging that instrumentalism.
With the external audience things are entirely different. They don't see the Moodle quizzes. They supply their own motivation in accessing the Excel workbooks and the YouTube videos. And because they are so motivated, the learning objects can be considered from another perspective - effectiveness at communicating the economic concepts. The YouTube video that gets the most hits is on Income and Substitution Effects. It is a topic students find hard, which buttresses the earlier remark that the external audience is searching for clarification on topics they find difficult. But hits per se are not a particularly good indicator that a learning object is effective. Only a handful of students leave comments, yet the comments are much more revealing about a learning object's effectiveness. On that score, my most effective video is this one on The Effect Of A Tax. Let me try to explain why.
First, this video addresses both a theoretical point and an empirical question. I'm afraid that much else of what we do in intermediate microeconomics is pure theory - structure on top of which empirical questions might be posed, though too often that posing of empirical questions is not done sufficiently. As the vast majority of students don't have a theoretical orientation, this theory-for-theory's-sake approach means that students are learning subject matter they don't perceive they will ever apply. No learning object, no matter how well it is done otherwise, can fully succeed in this case because there is no Aha to be found in such work. In this particular case the empirical question regards tax incidence. Who bears the tax, the buyer or the seller, and if the burden is shared, how is that mixture determined? One can answer the question via a calculus approach, but these students find the calculus less than illuminating. The animated graphs, in contrast, provide a good visual representation of what is going on. The students can do little experiments, varying the demand elasticity or the supply elasticity at the original equilibrium, and see how the result changes. So the students can answer the tax incidence question for themselves with this spreadsheet.
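For readers who want to see the underlying arithmetic, here is a minimal sketch in Python of the computation the spreadsheet animates - linear demand and supply, a per-unit tax, and the resulting split of the burden. The coefficients are made up for illustration; they are not the ones in the actual workbook.

```python
# Minimal sketch of the tax incidence experiment the Excelet animates.
# All coefficients are illustrative, not those in the actual workbook.

def equilibrium_with_tax(a, b, c, d, t):
    """Linear demand Qd = a - b*Pb, linear supply Qs = c + d*Ps,
    per-unit tax t so that Pb = Ps + t. Returns (Pb, Ps, Q)."""
    # Market clearing: a - b*(Ps + t) = c + d*Ps
    ps = (a - c - b * t) / (b + d)   # seller's (net-of-tax) price
    pb = ps + t                      # buyer's (tax-inclusive) price
    q = a - b * pb                   # quantity traded
    return pb, ps, q

# No-tax benchmark
pb0, ps0, q0 = equilibrium_with_tax(a=100, b=2, c=10, d=3, t=0)

# Impose a unit tax of 5 and see who bears it
pb1, ps1, q1 = equilibrium_with_tax(a=100, b=2, c=10, d=3, t=5)
buyer_share = (pb1 - pb0) / 5    # fraction of the tax paid by buyers
seller_share = (ps0 - ps1) / 5   # fraction borne by sellers

print(f"Buyer bears {buyer_share:.0%}, seller bears {seller_share:.0%}")
# With these slopes buyers bear d/(b+d) = 60% and sellers b/(b+d) = 40%.
# Making demand more price-sensitive (raising b) shifts the burden toward
# sellers; making supply more price-sensitive (raising d) shifts it
# toward buyers - exactly the experiment the animated graph lets you run.
```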
The theoretical question is which curve shifts as a consequence of the tax, the demand curve or the supply curve? The students get to see both possible representations, one in terms of the buyer's price and the other in terms of the seller's price, and discover that both give the same results. Again the animation helps the students link the two together. Static diagrams can show the two representations side by side but can't really convey how they are connected.
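For those who want the algebra behind that equivalence, here is a quick derivation with generic linear curves - a sketch, not the specific functions used in the video.

```latex
% A per-unit tax t drives a wedge between the buyer's price P_b and the
% seller's price P_s, with P_b = P_s + t. With linear demand Q = a - bP_b
% and linear supply Q = c + dP_s, market clearing can be written two ways.
% In the seller's price, the demand curve appears to shift down by t:
\[ a - b(P_s + t) \;=\; c + dP_s \]
% In the buyer's price, the supply curve appears to shift up by t:
\[ a - bP_b \;=\; c + d(P_b - t) \]
% Substituting P_b = P_s + t turns the second equation into the first,
% so both representations deliver the same quantity and the same prices.
```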
Let me also point out that the price and quantity labels are given numerically rather than algebraically. I've come full circle on this issue. When I was a young assistant professor I thought everything should be done algebraically; only in that way would the students develop a deep understanding of the model. This is still true to some extent, but one must admit that if students aren't already fluent with algebraic representations, then presenting the material that way is like presenting it in a foreign language. They will grasp less of what is going on because their discomfort with the algebra blocks them from getting to the economics. Because the numeric representation is more immediate for them, they can better understand the economics. I should also point out that in my actual course I had the students view this video on Reverse Engineering The Spreadsheets, so they could discover how the various curves are plotted and see the formulas used to generate them. If the students diligently reverse engineered each spreadsheet, they'd then get the algebraic approach that way. I suspect most of the students in the spring 2011 course did not do this. And clearly the external students have not seen the reverse engineering video, as it has gotten very few hits. So they are making their judgments about effectiveness based on the numerical approach only, though they may be getting the algebraic approach from other content provided in the course they are taking.
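To give a flavor of what reverse engineering turns up, here is a hypothetical tabulation in Python - not the actual spreadsheet formulas, which I'm not reproducing here - showing how the numeric points a student sees on the chart are all generated from a single algebraic formula sitting behind them.

```python
# Illustrative only: the sort of thing a student finds by reverse
# engineering one of the spreadsheets. The coefficients are made up.

intercept, slope = 100, 2   # demand: Q = intercept - slope * P

# The chart displays only these numeric (price, quantity) points...
table = [(p, intercept - slope * p) for p in range(0, 51, 5)]

for price, quantity in table:
    print(f"P = {price:3d}   Q = {quantity:3d}")
# ...but every row comes from the same formula Q = 100 - 2P, which is
# the algebraic representation hiding behind the numeric labels.
```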
Let me wrap this piece up by asking whether one can abstract to good learning object design independent of the subject being studied. I'm quite skeptical that this is possible in a meaningful way. I'd agree that on ancillary matters - how busy the diagrams are, the size of the font used to label things, the amount of background white space so that things look clean, etc. - one can make a case for what a good learning object must do. I started making Excelets back in 2001, and with practice my technique in making them has improved. (It is also true that with the Mac making a comeback in the last decade, I became more sensitive to building these in a way that would work on a Mac.) For example, now I use only the XY scatter with straight lines graph, and I've learned to use solid lines for functions and dashed or dotted lines for labels of particular points. I've also learned how to create a fill in a region - the trick, sketched below, is to plot one line as many distinct vertical line segments separated by blank cells and have those vertical segments integrate out as the full area - and to use light pastel colors for the fill so it doesn't drown out the rest of the graph. So I'd agree that on the appearance front there are general design principles that are good to learn. But those principles are far from sufficient for determining whether the learning object is effective. For that there is no substitute for trying it out with students studying the subject and seeing how they react to it.
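Here is the promised sketch of the fill trick's data layout, again in Python for concreteness - the demand function and the numbers are invented for illustration, and in the workbook itself the equivalent rows would of course be built with spreadsheet formulas rather than a script.

```python
# Sketch of the data layout behind the fill trick described above.
# To shade the region under a curve in an Excel XY scatter chart, one
# series is built as many separate vertical segments: for each x, a
# point (x, 0) and a point (x, f(x)), followed by a blank row so the
# chart doesn't connect one segment to the next. Plotted with a thick,
# light pastel line, the segments visually merge into a filled area.

def fill_series(f, x_start, x_stop, step):
    """Return (x, y) rows for the fill series; None marks a blank row."""
    rows = []
    x = x_start
    while x <= x_stop:
        rows.append((x, 0))     # bottom of the vertical segment
        rows.append((x, f(x)))  # top of the segment, on the curve
        rows.append(None)       # blank row: break before next segment
        x += step
    return rows

# Example: shade under the demand curve Q = 100 - 2P for P in [0, 20]
for row in fill_series(lambda p: 100 - 2 * p, 0, 20, 1):
    print("" if row is None else f"{row[0]}\t{row[1]}")
```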
*With the move from Google Docs to Google Drive, you have to log in with a Google account to access this content; it used to be that you could access these files without logging in. You then have to download the files. They have not been converted to Google Spreadsheets format; they remain in the original Excel.