What is current thinking about IQ measurement? Does it make sense conceptually? In my primitive conception, each of us is the product of nature and nurture, perhaps via some complex form of mixing of the two. IQ tries to parse out the nature piece. But, of course, it does more than that. It then tries to rank the nature piece, along a single dimension. Howard Gardner's stuff allows for multi-dimensionality of the nature piece, but still seems to believe that it can be parsed from the nurture component. What if no such parsing makes sense?
The work of Anders Ericsson and his colleagues casts doubt on this parsing of intelligence, finding instead that seemingly very bright people had the right sort of prior deliberate practice, and it is this practice, nurture if you will, that best explains their superior performance on the various intelligence tests. Nature then fits into this story in two possible ways. The first regards motivation. Not everybody will take the training required for high performance. It may be boring, or painful, or personally demeaning. Liking the training may be something that is not teachable. It may be inherent (or absent). The other way is in the productivity of the training, particularly early on. Seeing good results from the deliberate practice can create a positive feedback loop. Someone who can observe personal growth based on their own efforts is more willing to make personal growth a goal. In contrast, if at first it is tough sledding, then the lack of reward can become discouraging.
There are late bloomers. Understanding how they catch on would be very helpful. Reading the post at the link, one gets the impression that a genetic explanation works, at least in part. Our various expressions of intelligence are a consequence of multiple genes working in concert to produce the desired effect. Many of these genes take time to develop, in themselves and in their interactions with other genes. The timing of gene development varies from individual to individual.
But what of the other part? Suppose a person has been deliberately practicing with an approach that does not, in fact, promote personal growth beyond a certain point. Then the person will have plateaued and yet been habituated to this non-nurturing form of practice. I believe memorization to be in this category - it works well in elementary school and is necessary then ("i" before "e" except after "c"....). It becomes less and less effective thereafter, yet many students cling to it nevertheless.
It seems to me that THE QUESTION is this: for people who have engaged in a non-nurturing form of deliberate practice for an extended period of time, what can be done to get them to embrace more nurturing forms thereafter? In one shape or another this seems to be the core issue in undergraduate education, at least at universities like Illinois. Who else is framing the issue this way?
The scary part of this is that since students 18 or older are treated as adults, the responsibility for making this sort of change rests primarily with themselves. Yet, in my observation, most don't see the necessity of doing so. And the "grades culture" is so pervasive that it strongly reinforces the view that things are fine as long as the GPA is okay. So methods that might encourage a different form of deliberate practice either need to shock students into an awareness that things are not okay, or must move entirely outside the grades culture, so that its feedback plays no part in what encourages other approaches.
The current fascination with MOOCs does neither of these things, as near as I can determine. That fascination is there because of the cost issue. I'm not saying the cost issue is unimportant. But it seems to me it is secondary and should not block efforts at finding answers to THE QUESTION. Unfortunately, that's exactly what seems to be happening.