pedagogy, the economics of, technical issues, tie-ins with other stuff, the entire grab bag.
Saturday, June 29, 2013
No grilling out for the evening repast
A result of this abysmal forecast.
Monday, June 24, 2013
Accessibility And Online Learning Materials: The Moral Conundrum, The Law, And The Likelihood - My Take
Instructors don't get to choose their students. It's the students who choose, by registering for the particular course or not. The expectation is that the instructor will teach all who enroll.
I sometimes wish the counterfactual held. My requirements for students would mainly be that they bring strong personal commitment to doing the work for the course and that they have something on the ball ahead of time which allows them to contribute to the class via their participation. I don't want to get hung up here on how to identify whether students have those attributes or not. I simply want to note here that when on occasion I've felt frustrated with a student over the last several years, invariably one or both of these attributes have been lacking. So I think it reasonable for an instructor to expect (request) this from all students even if rational expectations (based on recent prior experience) suggest it is unrealistic.
One might ask whether it is fair to add additional criteria. I will pose that question here with a specific example, unrelated to the accessibility issue, so the reader can get a sense of the ethical dimension of the question without needing to consider legal ramifications. A possible additional criterion is that the students can read, write, and speak English reasonably well.
As is well known, campuses such as mine at Illinois have witnessed a large increase in enrollment of Asian students, particularly from China and also South Korea. Many of them pay full out of state tuition. Their presence thereby helps to ensure the financial viability of the enterprise. However, a good fraction of them have limited English skills at the time they begin their studies. Indeed, one big reason for coming to the U.S. to study is for them to improve in this dimension.
In my class, I encourage discussion in the live session via Socratic dialogue and I have the students do weekly writing assignments out-of-class via blogging. I know that some students are intimidated by this approach and particularly those students who are not confident of their English are apt to drop the course during the first ten days, once they find out what is expected of them.
On the flip side, some of the students who have completed the course appreciate both the class discussion and the blogging. Moreover, my general teaching philosophy is informed by a view that the learner can learn only if she can give voice to her formative thinking. I do try to provide more than one venue for doing so. So I can't see myself abandoning my approach because some students will find it off-putting up front. But I can imagine some higher-up telling me to change my teaching so more international students are willing to take my course, i.e., they want to get at the subject matter in a way that is accessible to them, and since they are paying dearly in tuition we should satisfy their demands.
There is a tension between asking students to give voice to their early thinking and teaching students whose English is limited. I believe a similar tension exists in thinking about the accessibility issues, as I will try to explain below.
* * * * *
There seems to be a general lack of leadership these days. Problems get swept under the rug rather than dealt with squarely, because there are too many competing imperatives and nobody wants to step up and say which ones can be ignored for now so progress can be made on the others.
I'm not talking about accessibility here. I'm talking about managing campus personnel. I don't chat with folks on campus nearly as often as I used to. But in the limited number of communications I still have, what is becoming apparent is that being overwhelmed by work is the new normal. This is for folks who work in learning technology or in information technology more broadly.
The issue could have been anticipated. Indeed, in my penultimate column for Educause Quarterly a few years ago, I did just that, arguing that with fewer staff on board some of the service offerings should be shut down. But doing the analysis is easy. Implementing is very hard. When we don't implement, we instead get overwhelmed staff as the consequence, though that surely will prove myopic before too long, if it hasn't already.
I know much less about how instructors have been impacted. The above-mentioned rise in the number of international students who are weak in English perhaps gives the tip of the iceberg. I will guess at other changes where I don't have the detailed information at my fingertips. One is the rise in the number of transfer students, particularly from Community Colleges. I know there have been issues where, although their credits articulate, their actual prior coursework is not on a par with what they'd have gotten had they attended the U of I instead. A different change might be with the volume of TA support. Departments may have reduced this in an effort to save money. Still another change: sections may have been consolidated and low-enrollment courses dropped entirely, again as a cost saving. How much of this has happened I really can't say. My guess would be that life is tough all over.
* * * * *
Let me turn to accessibility and give a non-technical view based on my own teaching experience. There were two different episodes of rather intensive effort aimed at accessibility. In 1999 or so, I put all my PowerPoint lectures online. There were many graphs (that's the way I teach intermediate microeconomics). I provided long text descriptions of the graphs in the Notes area. My rationale for doing this at the time was that many students had difficulty reading the graphs. So the potential benefit might be broad, well beyond the benefit from serving visually impaired students. Whether in fact the effort provides such benefit I do not know. (Many of the students disliked the course - it is for business students what organic chemistry is for pre-med students. Given that, it was hard to parse out the incremental benefit of this one tiny component.)
After I retired I again taught intermediate micro for one semester - spring 2011. I made simulations in Excel (this time the graphs were animated numerically). Then I made screen capture movies of manipulating those graphs and provided a voice-over to annotate them, the end result being a micro-lecture. I produced transcripts of the audio and with that captioned the video. YouTube has a nice tool for putting the timings into the caption text given a transcript file. As in the prior episode, I did this because I thought it would be broadly beneficial. There is technical content here, and seeing the economic jargon displayed on the screen as I say it has benefit for the student, in my view. The captions can also be translated into other languages, possibly a benefit for those who don't know English well. The Analytics tool for YouTube creators does not track use of captions, so it is hard to know whether this benefit has been realized or not.
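For readers who haven't done captioning, the end product is less mysterious than it sounds: a caption file is just the transcript chopped into cues with start and stop times. YouTube's tool gets the timings by listening to the audio. The little sketch below, in Python, fakes them by spacing the cues evenly, so take it as an illustration of the file format rather than a substitute for that tool; the file names and the four-second cue length are made up for illustration.

# A rough sketch: turn a plain transcript (one caption line per row in
# transcript.txt) into an SRT caption file with evenly spaced timings.
# The file names and the fixed cue length are assumptions for illustration;
# YouTube's transcript-sync tool does the real timing work from the audio.

def srt_timestamp(seconds):
    # Format seconds as the HH:MM:SS,mmm timestamps SRT files use.
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return "{:02d}:{:02d}:{:02d},{:03d}".format(h, m, s, ms)

def transcript_to_srt(lines, seconds_per_cue=4.0):
    cues = []
    for i, line in enumerate(lines):
        start = i * seconds_per_cue
        end = start + seconds_per_cue
        cues.append("{}\n{} --> {}\n{}\n".format(
            i + 1, srt_timestamp(start), srt_timestamp(end), line))
    return "\n".join(cues)

with open("transcript.txt") as f:
    lines = [ln.strip() for ln in f if ln.strip()]

with open("captions.srt", "w") as f:
    f.write(transcript_to_srt(lines))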
Each time I was also influenced by other considerations. I wanted to know how onerous it was to do this on your own. The Disability Support Services folks were pushing hard for these sorts of accommodations. I wanted to understand via my own use how much I should advocate on their behalf and how much I should push back at them because what they wanted all of us instructors to do was not really reasonable. My tentative conclusion on this score follows.
Accessibility can be regarded in at least two different ways. The first is "getting at" the materials. The second is producing a good understanding in the student based on the materials that have been accessed. If you solve the first but not the second, what have you really accomplished? The first is achieved via production of a text equivalent for the multimedia material. Does the second follow as a consequence? I am much more likely to do the requisite work for making the content accessible if the answer to that question is yes.
Alas, for the "math track" of my current course on the economics of organizations, I have been much less successful in providing good understanding among the students. (See this recent post.) The issue has vexed me. I'm aiming to try some new things. In my way of thinking those experiments are primary. Disability access of the experimental materials is secondary or even tertiary. I'd want to know they are otherwise effective first. I'd also want to know that I will continue to teach this course and there will be a hefty demand for it. To date enrollments have been fairly low. I can't see the Econ department continuing to offer it unless enrollments increase.
I also have more affinity for hard-of-hearing students than for visually impaired students, because of my own experiences. I know it is not exactly the same, but captioned micro-lectures are similar to foreign films with subtitles. I've watched quite a lot of the latter and have some sense from that of both the strengths and limits of captioning. But I've never tried to understand an econ graph based purely on a text description without drawing the graph myself or seeing a fully produced graph made by somebody else. Indeed, if you go to this particular video on The Effect of a Tax, and read the student comments there, it should be apparent that those students are getting an understanding from the visual demonstration in that video that they were not able to get from other sources - the textbooks they are using and the classes they are taking. There is no guarantee that content will produce understanding.
I am aware that there are world class mathematicians who are blind. So it is certainly possible that such materials might produce the desired understanding. But if I had a blind student who was intellectually on a par with the other students in my class, would the text materials alone produce such an understanding? I doubt it. Put a different way, even if I made all my content so every student could get at it, regardless of any impairment they might have, they might still not learn the subject matter because it is too hard for them. An ideal is that this should be their call, not mine. That ideal abstracts from the effort it would take to produce such content.
I have never had a blind student, but in that spring 2011 course one of the students was color blind and we discussed my videos a bit. He happened to be a very easygoing guy and was actually one of the brightest kids in the class, so he did fine. I did learn from this conversation that color shouldn't be the only differentiator of one curve from another. (So vary the thickness of the curves, or have one be dashed, or provide some other distinguishing feature.) I've taken this as a design goal in subsequent things I've produced.
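For anyone drawing such graphs in code rather than in Excel, here is a minimal sketch of that design goal using matplotlib. The curves and labels are invented for illustration; the point is that line style, weight, and a label do the work that color alone shouldn't.

# A minimal sketch of the design goal: don't let color be the only thing
# that distinguishes one curve from another. The curves here are invented.
import numpy as np
import matplotlib.pyplot as plt

q = np.linspace(1, 10, 200)
demand = 100.0 / q          # a made-up demand curve
supply = 2.0 * q            # a made-up supply curve

plt.plot(q, demand, linestyle="--", linewidth=2.5, label="Demand")
plt.plot(q, supply, linestyle="-", linewidth=1.0, label="Supply")
plt.xlabel("Quantity")
plt.ylabel("Price")
plt.legend()
plt.savefig("supply_demand.png")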
Accessibility is but one imperative that instructors making online content operate under. Copyright is another and student privacy yet a third. Early on, when I was an ed tech administrator, I tended to be a strict constructionist on these matters and therefore was quite conservative in my approach. I now think that is way too restrictive. So on copyright, for example, I will take images posted on the Internet and paste them into my PowerPoint, giving attribution via a backlink, unless there is a warning on the original site about not reproducing the materials, in which case I will respect the warning. I have also used entire songs to provide musical accompaniment to these presentations. The former may be fair use. Is the latter? Probably not. But the distribution of this stuff is not broad, so who really cares?
My point is that I come to my own sense of the social good and social harm from the practice itself. I go by that. I'm not big on following rules blindly. And as a general matter, I believe the type of tinkering I do in instruction should be encouraged broadly, because average quality of the course will improve as a consequence.
But such encouragement doesn't appear to be what we are getting. Instead we are getting mandates which stem from exposure to liability - disabled students might sue the university for violation of the ADA or analogous state-level laws. Mandates are a top-down approach that typically gives a lip-service solution to the problem, because the real resources needed to address the problem haven't been provided and the people at the bottom really haven't been enlisted to help on this score. Witness our ethics training, where a good deal of effort is put into assuring that each staff member has done the training, but essentially nothing is done to track whether the training has had a salient impact on staff behavior, and where word-of-mouth communication suggests that many staff hold the entire process in contempt.
Further, the liability risk is typically considered in isolation, apart from other competing risks. So campus legal will argue for a strict construction approach, because all they see is the liability risk. They entirely ignore possible chilling effects on creative efforts to improve instruction. We therefore get requirements but no education on what sensible compromise looks like on the matter. Alas, this tends to encourage us to ignore the mandates further. The university is full of very intelligent people. They don't appreciate being treated like children, but that seems to be what we are getting.
Let me wrap up. I for one think universal design is a noble aspiration and a goal worth pursuing. But I also try to be a realist about what is feasible to accomplish, and with that I think it important to retain the goodwill of instructors who try earnestly, even if they come up a bit short. I'm not sure how one reconciles these tensions. My purpose in this post wasn't to do that. It was simply to draw out a bit that these tensions exist and give some shape to what they look like.
Blogger Navbar Not Accessible
Inside Higher Ed has a piece today about accessibility in online courses being the instructor's responsibility. I was going to post something about that in response. In the process of doing some of my preliminary investigation, it occurred to me to test the accessibility of this blog, Lanny on Learning Technology. So I did a Google search on web accessibility tools and followed the first link, an ad for AMP Express. It has a free report done by a robot. (I think they will solicit me for a paid service later. C'est la vie.) Below is a screen shot of the one I got.
It appears I'm boom or bust on this. One of the violations is about text equivalents. It would be nice if the Blogger tool for images put in a prompt for alt text right after the image was uploaded. Currently it does not do that. It does give you the option afterward to click on Properties and then it will give the prompt. Or one can go into the html and insert that manually. Mostly, I don't do either of these. Mea culpa.
The next issue is Properly Label Frames. I had no idea what was being referred to here. So I clicked on the link item to get more detail. Below is a screen shot of the first line of the report.
Tracking this down, I went to the post it referred to called, Checking your work, and looked at the html for that post. There was no iframe in that. Then it occurred to me to look at the html for the Template of the blog. There was no iframe in that either. Then I opened the Web page for that post and checked page source. Again, there was no iframe.
So after puzzling about this for a while longer, it occurred to me to right-click on the Navbar of the blog. (This is the bar at the top that has the Blogger logo and the search blog tool.) When you do that a menu appears. I selected the bottommost item, Inspect element. It produces the html for the Navbar. Sure enough, the exact text in the description can be found there.
This is something the blog owner has no control over. And the Navbar is useful to folks who come to the blog. The right answer isn't to get rid of the Navbar. It is to get the proper labeling there.
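For what is under the blog owner's control, a short script can at least flag the two kinds of gap the report called out. This is only a sketch, assuming BeautifulSoup is installed and that a copy of the page source has been saved locally as page.html. Frames that get injected by script, which seems to be how the Navbar arrives, won't show up in a static copy, consistent with what I found above.

# A sketch of checking saved page source for the two issues the report
# flagged: images with no alt text and frames with no title. The file name
# page.html is an assumption; save the page source or a post's HTML there.
from bs4 import BeautifulSoup

with open("page.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

for img in soup.find_all("img"):
    if not img.get("alt"):
        print("Image missing alt text:", img.get("src"))

for frame in soup.find_all("iframe"):
    if not frame.get("title"):
        # Script-injected frames, like the Navbar, won't appear in a
        # static copy of the page source like this one.
        print("Frame missing a title:", frame.get("src"))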
Folks at Google, can you get on it asap?
Thursday, June 20, 2013
It was only the relays
A nine year old freezer
Really is a geezer.
So it might lose its cool
Then replace or retool?
The owner shows stress for
Fear it's the compressor
Or worse a broken seal
Which would finish the deal.
Needed was a fair plan.
Here's to the repairman.
In the end some cheap parts
Normal function restarts.
Tuesday, June 18, 2013
Action And Over Reaction
In The West Wing season one there is an episode called A Proportional Response. It originally aired almost two years before 9/11. The premise is that the President's personal physician, somebody the President regarded with affection, was en route to the Middle East when the plane he was on was shot down by terrorists. The President had the urge to "invoke the wrath of God" on these evil doers. (Here is some dialog from that episode.) He had to be talked down from that view by his Chief of Staff, who showed the President the folly in escalating the violence.
Reality is worse than TV fiction. There is no sensible Chief of Staff to bring balance that will help to restore normalcy. Today's column by Roger Cohen is a reminder. NSA's intrusive data collection, perhaps FISA-sanctioned but almost certainly of worse consequence than the terrorism it aims to deter (surely it is perceived that way in Germany), has become the question du jour. In this way of thinking the terrorists win, not by any direct violence they cause themselves but rather by our disproportionate response, which inflicts massive self-imposed wounds.
This idea of making the abnormal the new normal and then going from there seems to occur in many arenas - doping in pro sports, for example. The one I want to consider here, however, is education in the U.S. and the inequality that prevails in the system. Sad as this is to say, it certainly seems conceivable that things are worse in that dimension now than when we operated under Plessy versus Ferguson. The question is why.
I am a product of the NYC Public Schools. I went to I.H.S. 74 from 1966-68 and the school was integrated via busing. I walked to school as did most of the white kids. The black kids were bussed. This school had been a Junior High when I entered, so I started in 7th grade, but it became a middle school while I was there, so I graduated after 8th grade. I don't really recall this, but I have a sense that at the time there was no busing for integration purposes at the elementary school I attended, P.S. 203, though it may very well have happened after I left. The High School I later attended, Benjamin Cardozo, was integrated via busing.
There is not much talk nowadays of this period in our history. On social equity grounds busing may have been absolutely necessary, ceteris paribus. But all else did not remain equal, at least not for very long. Busing was a shock to the system. I suppose the hope at the time was that people would get over the shock and a new, better normalcy would be attained. That hope is represented in the film Remember The Titans. Instead of getting over it, however, for many there were other reactions. The initial one was white flight - a move to the suburbs to avoid the consequences of integration. I don't know how many of the families of my Middle School classmates moved to Great Neck or even further out in Nassau County, but I'm sure some did. And, obviously, the trend continued after I graduated from High School.
In economics the idea is called the Tiebout Hypothesis, where it is sanitized of its racial connotation. It concerns all "local public good" expenditures that are financed by property taxes. In that model, families select which community to reside in by the public services and tax combination they most prefer. Housing choice then becomes "voting with your feet." Individual communities become more homogeneous as their particular package of public goods and taxes attracts like-minded people. The original paper by Tiebout was published in 1956, well before the Civil Rights Act but after Brown versus the Board of Education. Given its date of publication, I doubt that Tiebout anticipated the white flight phenomenon. His motives in writing the piece were probably more benign. Nevertheless, it served as a basis for Conservative thinking about local public goods, particularly schools.
There were other subsequent reactions. Private non-parochial schools emerged for well-to-do families whose parents had themselves attended Public School, particularly those who remained in urban areas. This was school flight away from the Public Schools without housing flight. Accompanying these reactions there was a move toward a more Conservative view of government, particularly at the State and Federal levels. Watergate was a strong facilitating factor here, but let's not think of it as the sole cause. And as a consequence of this move to the right there followed the decline in State funding for public education.
After that, all sorts of mishegoss came about regarding how to solve the problem: Charter Schools, No Child Left Behind, blaming teachers for the problem, etc. Diane Ravitch has it right in The Death and Life of the Great American School System. Hers is a solitary voice of sensibility, with her main point that we need robust Public Schools. People should be able to rely on their local school. It should be tolerably good, producing well-educated graduates.
But we don't seem headed in that direction. Quite the contrary. Our excessive reactions are doing us in. When will we wake up to this fact?
Odd Traffic Pattern at Lanny on Learning Technology
This post is being written a little after 6 AM. Normally my blog gets a trickle of traffic. But yesterday at around 7 PM it started to experience roughly a 40-fold increase in traffic. I don't know why and I'm trying to figure that out.
These are the data on hits for the last week. They show that what started last night is indeed odd.
These are the data on entry pages for the last 7 hours. Virtually all of this traffic is going to a single post, on The Grid Question Type in Google Forms. It has been my most popular post, but it was made in 2010. So why it should see such a boost over normal traffic is a bit weird.
And these are the data on referring pages. Alas, it is not too informative as the vast majority of pages seem not to have one. There is some email referral from a Yahoo account. If that account got hacked it could possibly generate this traffic. I checked my own Yahoo account, which I almost never use, and it doesn't appear to be the source of this.
Monday, June 17, 2013
Checking your work
Multiple distinct paths toward a conclusion are a good thing. Travel the second or third one (after you've followed the first to its conclusion) and you gain confidence in the validity of a proposition. We learned this in math, but it is a lesson that applies more broadly.
When I was in elementary school and we learned arithmetic, a big deal was made about the difference between what our parents were taught - carry the one - and what we were instructed to do - exchange the ten for ten ones. You have a feeling that they're on a different verse but are still singing the same song as back then.
Of course, the best tune on this score was given to us by Tom Lehrer.
Now, that actually is not the answer that I had in mind, because the book that I got this problem out of wants you to do it in base eight. But don't panic! Base eight is just like base ten really - if you're missing two fingers! Shall we have a go at it? Hang on... (The full lyric is here.)
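In the spirit of checking your work by a second path, a few lines of Python will do the base conversions for you. If my memory of the song's example is right, the problem is 342 minus 173; in any case the method below works for whatever numbers you like.

# Checking the subtraction two ways. In base ten: 342 - 173 = 169.
# In base eight, convert to decimal, subtract, and convert back.
# (That the song's problem is 342 minus 173 is my recollection; the
# arithmetic itself is easy to verify either way.)
a = int("342", 8)            # 226 in base ten
b = int("173", 8)            # 123 in base ten
print(a - b)                 # 103 in base ten
print(format(a - b, "o"))    # '147', i.e., 147 in base eight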
Thursday, June 13, 2013
Addendum to previous post
I am aware that there are apps to mirror the iPad on the PC. One of those is AirServer. It would seem from this page that you can mirror and then screen capture on your PC. I have downloaded and installed it, but you need a newer iPad than I have. My wife has one of those. I will borrow it from her tonight and give it a try. If I can do that effectively, I might very well go back to inking with a stylus. And then I might splurge for a new iPad. Father's Day is coming up, after all.
Animated but out of sync
Having learned a trick, old dogs want to do a repeat performance. Back when I had a Fujitsu Tablet PC, I made a variety of screen capture movies of my writing, a la delivering a microeconomics lecture, so my voice provides annotation for what appears on the screen. I posted some of those to Google Video. When that closed, those videos were migrated to YouTube. So they are still available, but the date of creation has been lost. They probably date back to 2007 or 2008, such as this one on The Envelope Theorem. If I recall correctly, this is a capture of writing in Windows Journal. It was done in a 4:3 aspect ratio, which is why there are black bars on each side. So it appears dated. The handwriting is, at best, so-so. Yet the thing does what it is supposed to do. I'm surprised that it still gets some use. Evidently, there is an audience for this sort of thing.
I've wanted to do something similar for the class I'm teaching this fall, but I no longer have a Tablet PC. (This is one area where work beats retirement. When working I got all the equipment I wanted. Now I have to pay out of pocket for this stuff and ask whether the purchase is justified.) I'm writing this post on my home computer, an all-in-one Sony Vaio. It's three years old but works reasonably well, so no complaints there. I also have an original iPad. I wanted to produce these micro-lectures using one or both of those. What follows is a sequel to the post No Chalk Dust, No Smudges from last week. The lecture notes I produced there, using a cheap stylus from Amazon.com on the iPad and a free app that Leslie Hammersmith recommended called Note Anytime, were sufficiently good as to raise my hopes about producing a micro-lecture in much the same manner.
However, making a screen capture movie on a PC, say with SnagIt, is different from doing it on the iPad. On the PC there is one application running for the capture and some other application which has the content of what is being captured (Windows Journal in those older videos I made). On my original iPad you need the same app for both content and the capture. TechSmith, makers of SnagIt, have such an app called ScreenChomp. It sounds promising and is what I used. But there is also a different issue that I couldn't resolve well.
The Tablet PCs have an "active" digitizer, meaning they pick up a signal from the stylus and so know where to place the digital ink. Other objects on the screen, such as the heel of the hand that holds the stylus, have no impact on what is produced. The Fujitsu I had was well designed this way. In contrast, my iPad and my Vaio, both with touchscreens, are "passive" in this sense and thus pick up input from anything that touches the screen. This makes writing with a stylus on the iPad something of a challenge, because if you're like me you want the heel of your hand resting on something solid while you write. You don't want it hovering above. (For those who can write well with their hands not touching the screen at all while holding the stylus, good for you. The rest of this post won't be interesting to you.) I should add that using the index finger instead of a stylus is even worse regarding the quality of what is produced.
Note Anytime has a nifty solution to this issue. You write in a little box at the bottom of the screen, with your hand on the table or edge of the iPad, not touching the screen. That little box is a recreation of some segment of the page, which is also shown. And you can move the segment around either via controls that are provided or by dragging it with your finger. So you can fill the page with writing but input only at the bottom of the screen. For note taking, that works reasonably well.
ScreenChomp doesn't have those features. It also doesn't allow you to scroll the page. It does allow you to erase the page and start again with a clean screen. That is what I did, borrowing the technique of writing on the bottom of the screen that I used in making the lecture notes. (There is no voice over in this video, only writing. I found it difficult to produce the writing so if I were going to use this I'd make another capture of the writing already done, inserting my voice at that time.) But I didn't like the result. No context is built this way, showing only one line at a time with nothing previous shown. Further, because the pen tools in ScreenChomp are a little blunt, I felt I had less control of the stylus than I did with Note Anytime. The upshot is that I wasn't satisfied with the result and opted to try something else.
It occurred to me that I could animate a PowerPoint presentation on a character-by-character basis and thereby simulate handwriting in the process (really, it would simulate typing on the screen). This would make all characters well formed and easy to read, a distinct plus. You can judge for yourself here (again, there is no audio). But there are a variety of issues that arose in doing this.
The lecture notes transferred into PowerPoint took six slides. The average number of characters per slide was close to 190. I made the animation by making multiple copies of the same slide and then having each subsequent slide show one more character. (Actually, what I did was delete a character in the previous slide, as that proved easier to do.) So there ended up being about 190 copies of each slide, but with different characters being visible. I then put in the timings using the Record Narration feature. My aim was for about one minute on the entire page, so that meant a little more than three characters per second. I practiced pushing the space bar while using a stop watch to get the desired speed. I then discovered that if you already have a timing for a slide and you make a duplicate of the slide, then the copy inherits the same timing.
Not having done this before, I had no sense for how long it would take. It took quite a while, which was the main problem. It was mind-numbing work, and sometimes I would get the slides out of order. I did find that by saying my procedure aloud (duplicate the slide, go back one slide, delete the particular character) I made fewer mistakes and it went faster. But it was still laborious. In the middle of the work I realized I had made the slides 4:3 instead of 16:9. But changing the aspect ratio ruined the slides. I'd have had to start from scratch. Then I found that I had made an error in the math on the last slide. But with all the copies I had made, the same error propagated to perhaps 100 slides or so. To correct it I had to make changes on each of them. My point with all this detail is to show that it is not a very robust way of doing things.
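In hindsight, this is exactly the kind of grunt work a short script could do, which would also make corrections painless since the text would live in one place. Below is a rough sketch of the idea using python-pptx, which is not what I actually did; the text, sizes, and file name are placeholders, and the per-slide advance timings would still need to be set in PowerPoint afterward.

# A sketch of generating the per-character build automatically with
# python-pptx (an approach I did not actually use). One slide is created for
# each prefix of the text, so fixing an error means editing the source string
# and regenerating the deck rather than touching 100 slides by hand.
from pptx import Presentation
from pptx.util import Inches, Pt

TEXT = "The Envelope Theorem: ..."   # placeholder for a page of lecture notes

prs = Presentation()
prs.slide_width = Inches(13.333)     # 16:9 from the start, so no redo later
prs.slide_height = Inches(7.5)
blank_layout = prs.slide_layouts[6]  # the blank layout in the default template

for i in range(1, len(TEXT) + 1):
    slide = prs.slides.add_slide(blank_layout)
    box = slide.shapes.add_textbox(Inches(0.5), Inches(0.5),
                                   Inches(12.3), Inches(6.5))
    frame = box.text_frame
    frame.word_wrap = True
    frame.text = TEXT[:i]            # one more character than the prior slide
    frame.paragraphs[0].font.size = Pt(28)

prs.save("typed_notes.pptx")         # set slide advance timings in PowerPoint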
After I had finished this, I recorded audio in Audacity for the voice-over. Then I moved to combine the two. My first thought was to try Slideshare.net and make a slidecast. But that didn't work well at all. Instead of one slide following another, it had slide, then black screen, then slide, then black screen, etc. I can only guess why. Having multiple slides per second is not the norm and it didn't like it.
So I thought to put the audio directly into PowerPoint with the intent of then recording the slideshow. In the process of doing that, I learned that PowerPoint caps the number of slides over which an audio clip will keep playing from the slide where it starts. That limit is 999. I had over 1100 slides. So if I were to take this approach, I'd need to split the audio into two pieces and insert the second piece somewhere in the middle of the PowerPoint presentation. That was unattractive to me.
The straw that broke the camel's back was this, and I really could have anticipated it ahead of time, but I didn't. Ideally, since it is the verbal explanation that adds value over what is already in the lecture notes, the pace of what appears on the screen should follow the verbal explanation, not vice versa. On some things that are hard you want to linger a bit with the verbal explanation. On other things that are transparent you can zip right through. With the slide timings already set in advance, you can't do that. And you feel under pressure providing the narration, either to keep up with what is going too fast on the screen or to not get too far ahead. That is what I was referring to in the post title.
So what I ended up with, which is definitely more sensible as an approach but abandons the goal of simulating the handwriting, is to animate on a line-by-line basis. The same text stays on the screen for longer that way, and it is not that hard to put in the voice-over and advance the slides when appropriate. Whether this is better or worse than the type of movies I was making on the Tablet PC, I can't say. That is for others to judge. But I do know that I can produce these things in reasonable time and get something that is usable.
Let me close with one other point. I don't think it is a good idea to use ordinary full-screen slideshow mode in PowerPoint for the screen capture. The SnagIt controls end up in the system tray that way and it is somewhat difficult to find them afterward. Further, you get no feedback on how long you've been talking during the presentation. In what I recorded above, I had PowerPoint in the normal editing view and advanced the slides using the scroll wheel on my mouse. That worked reasonably well, though the arrow turns into a cursor when it hovers over text, and once or twice I wheeled through several slides at once and had to go back. I did discover afterward that you can run the slideshow in a window, which might be the best way to do this. Go to the Slide Show tab in the Ribbon, select Set Up Slide Show, and then choose the option at the top left for Browsed by an individual (window). Select that, click the OK button, and you're done.
I've wanted to do something similar for the class I'm teaching this fall, but I no longer have a Tablet PC. (This is one area where work beats retirement. When working I got all the equipment I wanted. Now I have to pay out of pocket for this stuff and ask whether the purchase is justified.) I'm writing this post on my home computer, an all in one Sony Vaio. It's three years old but works reasonably well, so no complaints there. I also have an original iPad. I wanted to produce these micro-lectures using one or both of those. What follows is a sequel to the post No Chalk Dust, No Smudges from last week. The lecture notes I produced there, using a cheap stylus from Amazon.com on the iPad and a free app that Leslie Hammersmith recommended called Note Anytime, were sufficiently good as to raise my hopes about producing a micro-lecture in much the same manner.
However, making a screen capture movie on a PC, say with SnagIt, is different from doing it on the iPad. On the PC there is one application running for the capture and some other application which has the content of what is being captured (Windows Journal in those older videos I made). On my original iPad you need the same app for both content and the capture. TechSmith, makers of SnagIt, have such an app called ScreenChomp. It sounds promising and is what I used. But there is also a different issue that I couldn't resolve well.
The Tablet PCs are "active matrix" meaning they pick up some signal from the stylus so know where to place the digital ink. Other objects on the screen, such as the heel of your hand that holds the stylus, have no impact on what is produced. The Fujitsu I had was well designed this way. In contrast, my iPad and my Vaio, both with touchscreens, are "passive matrix" and thus pick up input from anything that touches the screen. This makes writing with a stylus on the iPad something of a challenge, because if you're like me you want the heel of your hand resting on something solid while you write. You don't want it hovering above. (For those who can write well with their hands not touching the screen at all while holding the stylus, good for you. The rest of this post won't be interesting to you.) I should add that using the index finger instead of a stylus is even worse regarding the quality of what is produced.
Note Anytime has a nifty solution to this issue. You write in a little box at the bottom of the screen, with your hand on the table or edge of the iPad, not touching the screen. That little box is a recreation of a segment of the page, which itself remains in view. And you can move the segment around either via controls that are provided or by dragging it with your finger. So you can fill the page with writing while inputting only at the bottom of the screen. For note taking, that works reasonably well.
ScreenChomp doesn't have those features. It also doesn't allow you to scroll the page. It does allow you to erase the page and start again with a clean screen. That is what I did, borrowing the technique of writing at the bottom of the screen that I used in making the lecture notes. (There is no voice over in this video, only writing. Producing the writing was difficult enough that, if I were going to use this, I'd make another capture over writing already done and insert my voice then.) But I didn't like the result. No context is built this way: only one line shows at a time, with nothing previous visible. Further, because the pen tools in ScreenChomp are a little blunt, I felt I had less control of the stylus than I did with Note Anytime. The upshot is that I wasn't satisfied with the result and opted to try something else.
It occurred to me that I could animate a PowerPoint presentation on a character-by-character basis and thereby simulate handwriting in the process (really, it would simulate typing on the screen). This would make all the characters well formed and easy to read, a distinct plus. You can judge for yourself here (again, there is no audio). But a variety of issues arose in doing this.
The lecture notes transferred into PowerPoint took six slides. The average number of characters per slide was close to 190. I made the animation by making multiple copies of the same slide and having each subsequent slide show the next character. (Actually, what I did was delete a character from the previous slide, as that proved easier to do.) So there ended up being roughly 190 copies of each original slide, with different characters visible. I then put in the timings using Record Narration. My aim was for about one minute on the entire page, so that meant a little more than three characters per second. I practiced pushing the space bar while using a stopwatch to get the desired speed. I then discovered that if you already have a timing for a slide and you make a duplicate of the slide, the copy inherits the same timing.
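To spell out the arithmetic behind that pace: roughly 190 characters at about one minute per page is 190/60, a bit more than 3 characters per second, which means advancing to a new slide about every third of a second. Across the six pages of notes that is on the order of 6 times 190, or roughly 1,140 slides in the deck.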
Not having done this before, I had no sense of how long it would take. It took quite a while, which was the main problem. It was mind-numbing work, and sometimes I would get the slides out of order. I did find that by saying my procedure aloud (duplicate the slide, go back one slide, delete the particular character) I made fewer mistakes and went faster. But it was still laborious. In the middle of the work I realized I had made the slides 4:3 instead of 16:9, but changing the aspect ratio ruined the slides; I would have had to start from scratch. Then I found that I had made an error in the math on the last slide, and with all the copies, the same error had propagated to perhaps 100 slides. To correct it I had to make the change on each of them. My point with all this detail is to show that this is not a very robust way of doing things.
After I had finished this I recorded the voice over in Audacity. Then I moved to combine the two. My first thought was to try Slideshare.net and make a slidecast. But that didn't work well at all. Instead of one slide following another it showed slide, then black screen, then slide, then black screen, and so on. I can only guess why; having multiple slides per second is not the norm, and the slidecast player evidently couldn't handle it.
So I thought to put the audio directly into PowerPoint with the intent of then recording the slideshow. In the process I learned that PowerPoint caps the number of slides, counting forward from the slide where the audio is inserted, across which a sound clip will keep playing. That limit is 999. I had over 1100 slides. So if I were to take this approach, I'd need to split the audio into two pieces and insert the second piece somewhere in the middle of the presentation. That was unattractive to me.
The straw that broke the camel's back was something I really could have anticipated ahead of time, but didn't. Ideally, since it is the verbal explanation that adds value over what is already in the lecture notes, the pace of what appears on the screen should follow the verbal explanation, not vice versa. On hard points you want to linger a bit with the explanation. On transparent points you can zip right through. With the slide transitions set in advance you can't do that. And you feel under pressure providing the narration, either to keep up with a screen that is moving too fast or to avoid getting too far ahead. That is what I was referring to in the post title.
So what I ended up with, which is definitely more sensible as an approach but abandons the goal of simulating the handwriting, is to animate on a line by line basis. The same text stays on the screen for longer that way and it is not that hard to put in the voice over and advance the slides when appropriate. Whether this is better or worse than the type of movies I was making on the Tablet PC, I can't say. That is for others to judge. But I do know that I can produce these things in reasonable time and get something that is usable.
Let me close with one other point. I don't think it is a good idea to use ordinary slideshow mode in PowerPoint for the screen capture. The SnagIt controls end up in the system tray that way and are somewhat difficult to reach afterward. Further, you get no feedback on how long you've been talking during the presentation. In what I recorded above I had PowerPoint in the normal editing view and advanced the slides with the scroll wheel on my mouse. That worked reasonably well, though the pointer turns into a text cursor when it hovers over text, and once or twice I wheeled through several slides at a time and had to go back. I did discover afterward that you can run the slideshow in a window, which might be the best way to do this. Go to the Slide Show tab in the Ribbon, select Set Up Slide Show, and under Show type choose Browsed by an individual (window). Click OK and you're done.
Monday, June 10, 2013
Bell Curve Simulation Done in Excel
I don't know what possessed me to do this in the wee hours this morning when I couldn't sleep, but I thought to construct a bell curve from first principles. I had seen something like this as a Java applet in the 1990s. In that applet, a ball was dropped onto a grid of pegs. When it hit a peg, it could go either left or right, with each possibility equally likely. Perhaps it did this at a height of 20, meaning the ball would hit 20 pegs before coming to rest. Now do this ball after ball and look at the distribution of where the balls end up. Most will be near the center. A few will be well to the left and a few will be well to the right. The distribution should look like a bell curve.
I did this here in Excel but I changed things a little. When I did it exactly as described above, I discovered to my chagrin that there were no observations at odd-numbered positions. (With 20 steps of +1 or -1, the sum is always even.) So I could have done the above with half steps instead of whole ones, but instead I did the following. At each juncture the ball could go left or right as before, but it could also stay where it is, a third possibility. In this simulation, left is a value of -1, right is a value of +1, and staying put is a value of 0. To make these outcomes equally likely (assuming that RAND() produces a uniform random variable on the interval [0,1] and that different realizations are independent of one another), I generated a realization of RAND() for each ball and each peg it hit.
On a per-ball basis this is done 20 times (with the results in columns AC to AV). Each ball is a row. I then translated these realizations into left, right, or stay put. The translations are in columns C to V. (If you zoom out to 50% view, you can see both realizations and translations at once.) For example, in cell F44 there is the formula:
=IF(AF44<1/3,-1,IF(AF44<2/3,0,1))
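Each ball's final position also needs to be recorded somewhere so it can be tallied later. I'll use column W here for illustration; the particular column doesn't matter. For the ball in row 44 the position is just the sum of its twenty steps:
=SUM(C44:V44)
copied down the column, one row per ball.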
There are 4000 rows of data. The first such row begins in row 44. There are thus 80,000 realizations of RAND() generated (4000 rows times 20 entries per row).
In rows 2 through 42, I record the positions (in column A) and the number of balls at each position (in column B). To get the number at a position I use the COUNTIF function. As a little check I add up the counts across all positions; that sum is in cell D24. Sure enough, it gives a value of 4000, as it should.
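With the per-ball positions in column W as above, the count in B2 can be computed as
=COUNTIF($W$44:$W$4043,A2)
copied down through B42, and the check in D24 is then nothing more than
=SUM(B2:B42)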
I then graphed the positions and number of balls with a graph that connects the points. (Tufte might not like that but it's good enough for government work.) You can also just eyeball the results.
Finally, I put in a Spin Button. Each time you change the value with the Spin Button, it produces a new simulation, because RAND() is volatile and any change to the worksheet forces all of those cells to recalculate (assuming automatic calculation is on). Doing that several times gives you a sense of what is idiosyncratic to a particular simulation and what seems robust from one simulation to the next. This is the benefit of doing the exercise in Excel.
It does seem to produce a near Bell Curve. That is good. It also produces something different each time, which is also good. There is variation in the sample sum and sample mean from one sample to the next.
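For what it's worth, the spread is about what theory predicts. Each step is -1, 0, or +1 with equal probability, so a step has mean 0 and variance (1 + 0 + 1)/3 = 2/3. The sum of 20 independent steps then has mean 0 and variance 20 times 2/3, or about 13.3, for a standard deviation of roughly 3.7, and the Central Limit Theorem is what delivers the near-bell shape. Most of the 4000 balls should therefore land within about 7 positions of center.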
When I 'learned' probability and statistics, back in the 1970s, it was all from textbooks and all about theory. There was nothing about data. It seems to me simulations such as this would serve as a bridge between the two. I believe that nowadays most statistics textbooks come with simulation software of some sort. That's a plus. But it would be better still if students could build the simulations themselves. They can with this Excel simulation.
Friday, June 07, 2013
No Chalk Dust, No Smudges
In the good old days when I taught the first course in the core graduate microeconomics class, I would lecture at the blackboard, without notes, and derive every equation and graph from first principles. The same exact stuff would be in the textbook, but my talking through it presumably added value for the students beyond what they got from reading the book. In intermediate microeconomics, an undergraduate course, I would do something similar part of the time. I had much less of a sense of the value added for the students, but it was the method of the discipline and I felt obligated to work the students through the stuff in a thorough manner. Among certain colleagues, both in the department and elsewhere on campus, I developed a modest reputation as someone who taught the right sort of intermediate microeconomics course - with an appropriate balance of math and intuition in the presentation. Other instructors would use more calculus, do less graphically, and be less intuitive. They'd also cover somewhat different subject matter.
I stopped doing the graduate course in 1993, 1994 or thereabouts. I continued to teach the intermediate course till 2001, though as part of my SCALE project the class moved to large lecture (180 students where normal sections had 60 students). When I realized students in the back of the room could not see what I was writing on the board, I switched to PowerPoint - perhaps in 1998 or 1999. It was pedagogically a step down because it showed the results as finished product, like the textbook, rather than actively constructed in front of the students. But at least the students could see what was on the screen.
When Tablet PCs came out I experimented with them for doing the lecture and making movies of that, but at the time I wasn't teaching these courses. So it was more a skill to acquire, so I could pass it on to others who might be interested, than an investment in my own teaching. More recently I have been doing some teaching, but I thought doing the math in Excel was more innovative than the old-fashioned way and I invested in that. The results have been mixed at best. So I've decided to supplement the Excel with the old-fashioned way, but done on the iPad. I will then make exercises for the students to work, some online ahead of class and others during the live class session. I hope I can improve student understanding of the math models this way.
I'm just getting started with this. Here is the first set of lecture notes I've done on the iPad. My plan is to write them out this way first, then make a screen movie where I write them again, and then do one more capture where I put in my voice over. I've made enough screen movies to have learned that there is a certain feeling of pressure while recording. You want to think in a continuous flow, as in the live class setting, and you feel you'll ruin the movie if that doesn't happen. So I hope copying the lecture notes while the movie is being made will give a better result, since copying is less cognitively demanding. We'll see. This content gives an algebraic/calculus look at the material. I have other content that covers it graphically in Excel (this YouTube movie, this Excel file, and this YouTube movie, which has the demonstration of the result starting at the 4:14 mark).
I also want to note a certain feeling of joy from making this. It is pure nostalgia for the way I lectured back in the early 1990s. It is not based at all on whether students will get value from it, which should be the primary determinant of whether to do this sort of thing. It is only that I did this so often in the past that I feel I'm hard wired to do it, even now. My handwriting is certainly no great shakes. But I think the product is readable.
Sunday, June 02, 2013
Manual Copying of Publicly Available Information on the Web with Intelligent Aggregation - Can This Be Worthwhile Research?
Let me preface what I have to say by noting that when I was doing economic research it was all highly theoretical - math models and analysis thereof. The models may have been motivated by the research of somebody else, with that work having an empirical basis. But I simply trusted those results. I didn't look at other data to validate the results on my own. So at heart I'm a theorist and come to questions with a theorist's sensibility.
Nonetheless, I developed something of an empirical approach when I became a campus administrator, at least from time to time. It started straight away with the evaluation of the SCALE project. I became directly involved with the faculty component, along with Cheryl Bullock who was working on the evaluation. We had a few core questions we wanted each interviewee to answer for the sake of the evaluation. But I also wanted to get to know these people, to develop my own personal network and to learn about their motives and uses of technology in their teaching. For that a more open ended kind of discussion was appropriate and we mixed and matched on those. It worked out pretty well.
I did a different sort of project that is more in line with the title of this post. We had been making a claim that ALN (what we called it then; you can call it eLearning now) would improve retention in classes. Yet nobody had bothered, in advance, to find out what retention looks like at Illinois on a class by class basis. So I got the folks who deal with institutional data to give me a few years' data with 10-day enrollments and final enrollments for every class offered on campus. This wasn't highfalutin research. I plopped the data into Excel and had it compute the ratio of the final number to the 10-day number, represented as a percentage. There may have been adds and drops in between. For my purposes these were inessential, so I ignored them. It wasn't easy to track those anyway; for starters, let's look at what could readily be tracked. So I called my ratios the retention rates. The finding was that by and large those were so high - often above 95% - that it would be impossible for ALN to improve them much if at all, since they were bounded above by 100%. At schools where those ratios were lower, looking at how ALN might improve things was sensible. It simply wasn't an interesting question at Illinois at the time, and we didn't concern ourselves with it further in the evaluation.
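To make the arithmetic concrete (the column layout here is for illustration, not necessarily what that old spreadsheet looked like): with the 10-day enrollment for a section in B2 and the final enrollment in C2, the retention rate is
=C2/B2
formatted as a percentage and copied down, one row per section.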
Fast forward ten years or so. At the time I was a new Associate Dean in the College of Business and had just returned from a trip to Penn State at their Smeal College of Business for a meeting with my counterparts from various institutions. They had a brand new building and were very proud of how many classes students would be offered in the new building instead of elsewhere around campus. I learned that they had procured scheduling software within the College to pre-schedule classes, with a major effort to have College classes meet in the new building. Then, when dealing with the University, which did the official scheduling, they already had a plan that featured high utilization in the new building. Since we were opening a new building a year later, I got it in my head to imitate what Smeal had already been doing.
But I needed to convince myself so I could then convince others in the College that such a purchase of scheduling software would be worth it. To do that, I needed to do several things. One of those was to understand how we were doing with scheduling at present. Conceivably, one could have tried to run some reports out of Banner for this purpose. However, I didn't know how to do that and I was unsure whether it would give me the information I needed. So I did something much more primitive, but where I was surer it would give me the results I wanted.
I went to the online Course Schedule (this was for the fall 2007 semester). The data are arranged first by Rubric, then by course number, then by section. It gives the time of the class meeting and the room. The College of Business has three departments - Accountancy, Business Administration, and Finance - and, I believe, five rubrics. (MBA is a separate rubric and the College has a rubric BUS.) So I gathered this information, section by section, and put it into Excel.
Now a quick aside for those who don't know about class scheduling. Most classrooms are "General Assignment" and that means that conceivably any class on campus might be scheduled into the particular room. There are some rooms that are controlled by the unit. The Campus doesn't schedule those rooms and I won't spend much time on the ones that College of Business controlled in what follows. It is well understood that a faculty member would prefer to teach in the same building where he or she has an office, all else equal, and it is probably better for students in a major to take classes in the building where the department is housed.
To address this, the Campus gives "priority" for scheduling a particular classroom to a certain department. Before such and such a date, only that department can schedule classes into the room. After that date the room becomes generally available for classes outside the department. There is no second-priority department that gets at the room after the first department has done its work; timing-wise, that just wouldn't work. This is largely the reason why a College pre-scheduling within its own building can improve building utilization. The College can get started earlier and negotiate allocations across its departments.
Ahead of time I did not know which classrooms our departments had priority for. I let the data tell me. So, after collecting the data as described above, I needed to invert it, to look at it on a room basis rather than on a rubric-and-course-number basis. Excel's sort function is quite good for this - provided you enter the data in a way that can be readily sorted. There is some intelligence, therefore, in anticipating the sorting at the time you are entering the data and coming up with a useful scheme for doing so. What I came up with managed these issues. One of my questions in the title is whether equally useful ways of collecting the data get found in other such efforts. I had some definite things I wanted to learn from the effort, so the schema arose from that. My guess, however, is that sometimes the questions will only become apparent after looking at the data for quite a while. So an issue is whether the schema adopted in the interim can give way to a more suitable schema for addressing those questions.
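To illustrate the sort of schema I mean (a plausible layout for this purpose, not a reconstruction of the actual spreadsheet): one row per section, with columns for Rubric, Course Number, Section, Days, Start Time, End Time, Building, Room, and Seats. Entered that way, a multi-level sort in Excel (Data tab, Sort, adding levels for Building, then Room, then Days, then Start Time) flips the rubric-by-rubric listing into a room-by-room view with no retyping.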
I learned a variety of little facts from going through this exercise and in that way became an expert about College class scheduling where nobody else in the College had this expertise then, because nobody had looked at the information in this way. Soon thereafter, however, the real experts - the department schedulers - took over and became much better informed than I was. Here's a review of some of the things I learned - most obvious in retrospect but not ahead of time. A couple of things I thought more profound.
1. The vast majority of classes met two days a week. That's how it was for me in graduate school, when courses were four credit hours and met in two two-hour blocks. As an undergraduate I had mainly three-credit-hour classes, which met three times a week in one-hour blocks. That three-times-a-week model seems to have gone by the wayside except in the high enrollment classes, where the third meeting is a discussion section.
2. Business Administration and Finance classes were three hours per week, at both the undergraduate and graduate levels. Accountancy classes were four hours per week, undergraduate and graduate (and the high enrollment classes had four hours of lecture plus a discussion section or lab). I think the "why" question for this outcome is itself quite interesting and worthy of further analysis, but I won't concern myself with it here. For the purposes of scheduling alone, it produced an important consequence. Accountancy was on a de facto grid: classes started at 8 AM or 10 AM, but never at 9 AM, and that carried through the rest of the day. With the other departments there was no standard starting time. A class might start at 9 AM or at 9:30 AM or at 10 AM.
3. There were some classes that met for 3 hours straight, once a week. Presumably these emerged via faculty request, so the instructor could consolidate teaching to one day a week.
4. Accountancy had classes in the evening. The other departments did not. Nighttime classes are not preferred. So this showed a space crunch for Accountancy.
5. Mainly the new building was an improvement in instructional space but not that much of an increase in overall classroom capacity for the College. (Priority assignment in the Armory, DKH, and some of Wohlers would have to be given up after the new building came into operation.)
6. One classroom is not a perfect substitute for another. In addition to number of seats, some classrooms were flat, where furniture could be more easily arranged on the fly, and other classrooms were tiered, where the writing surface was fixed. Accountancy had a distinct preference for flat classrooms. The other departments preferred tiered classrooms. That was more of a differentiator across classrooms than technology in the room or other factors.
Not all the information I needed arose from this data collection effort. Another thing I learned from meeting with Campus Scheduling in the Registrar's Office was that the Campus faced limits with their scheduling because, in essence, we were too big for the scheduling software that was being utilized, run by the University (so supporting Chicago and Springfield as well as Urbana). They therefore applauded these efforts in pre-scheduling and wanted to help us rather than view this approach as competition to what they did.
I also learned that each of the departments had a shadow scheduling system, idiosyncratic to the department, for doing the scheduling work. And I learned further that the department schedulers were very nice people who were happy to collaborate with one another, but they hadn't done so in the past because there wasn't a perceived need then, and nobody had taken it upon herself to coordinate the effort. One of the big benefits of procuring the scheduling software, and of having the vendors visit to showcase what they had, was that it solidified the collaboration among the department schedulers.
-----
I have belabored the lessons learned above to show what other research efforts of this sort might produce. The approach is somewhat different from how economics is normally done - where a model is constructed prior to the data collection, hypotheses are generated from the model, and then the data are used principally to test the hypotheses. Economics has theory in the predominant position and data in a subordinate role.
What I learned from my Administrator work, and particularly from some early conversations with Chip Bruce, is that with evaluation efforts, in particular, it is often better to be an anthropologist and consider the data collection as mainly being descriptive about what is going on. There is learning in that. We are often far too impatient to try to explain what we observe and render judgment on it, thumbs up or thumbs down. That is less interesting, especially when it is done prematurely, and it will tend to block things we might learn through the descriptive approach.
There is a related issue, one I think most people don't have a good handle on, which is that much real learning is by serendipity. One doesn't really know what will be found until one looks. Thus the looking itself becomes an interesting activity, even if it's just for the hell of it. Of course, dead ends are possible. One might desire to rule them out in advance. I don't think that is possible. That might explain why getting started on data collection is difficult. In the old SCALE days we embraced the Nike motto - Just Do It. That was for teaching with ALN. The idea there was to learn by doing instead of doing a lot of course development up front. It meant that an iterative approach had to be employed. I believe more or less the same thing applies to data collection of this sort.
I now have a couple of undergraduate students doing a summer research project under me that might extend into the fall, if they make headway now. These students took a class from me last fall, did reasonably well in that course, and were looking to do something further along those lines.
They don't have my prior experience with data collection. I wonder if they can get to something like my sense of understanding how to do these things from the project they are working on. We'll see.