Forcing the horse to drink
You can lead a horse to water ... but you can't make hir drink.
Did you guess I was talking about "Academically Adrift"? (IHE articles here and here point to the main story lines.) That's right, I'm talking about the folks who don't realize that you can send a kid to school ... but you can't make hir learn.
I've commented on this book twice, and was ready to comment again when I decided it was time to collect those comments here in my blog.
Just for background, the book first came to my attention in a January 18 comment on Dean Dad's blog that pointed to a column by someone from “The Hechinger Report” in the Sacramento Bee. That led me to the following observation:
That article about Critical Thinking was interesting until we got to the inevitable “decline of civilization” comments from various experts. How can you compare the results of an Unprecedented One-of-the-First Study with guesses about college life 30 or 50 years ago? You can’t. I’m pretty sure we spent 50% of our time socializing in the dorms back in the good old days.
I’m impressed that almost two thirds (!!) of the study group made significant gains in critical thinking, even if the journalist chose to emphasize the negative side of the same results.
And I was stunned to learn that students in “business, education, social work and communications” showed the least gains, not to mention that “students learned more when asked to do more”. Who would ever have imagined such a result?
When Sherman Dorn questioned the reliability of the assessment tool used to generate data for the book (and later decried the overgeneralized conclusions drawn from it), I ventured the opinion (on the former article, slightly rewritten here) that:
Commentary based on perceiving a single data point as if it were a century-long time series does not show any critical thinking on the part of the commentator.
I would also add that I probably would not have shown any significant improvement in my critical-thinking skills after four years of college, because I entered college with spectacularly good ones thanks to my high school experiences. I’m sure I improved, but not by enough to register within the uncertainties of an instrument like the one they used.
The best article, however, is the one Chad wrote in Friday's Uncertain Principles. What makes it the best, of course, is that he agrees with me that it is no surprise that 1/3 of all students in college are coasting. Well, that and the fact that he links to reports on the actual study the book is based on.
I'll add to his comments that the headline findings are based on an evaluation instrument that Sherman Dorn, a professor of the history of education and an expert on assessment in K-12, thinks is not suitable as an accountability measure.
Further, I'd guess that his students at Union and his fellow students at Williams are and were, like me, likely to have scored pretty high on the "critical thinking" essay they were given and thus less likely to improve. You have to push such students REALLY hard if you want them to improve their already high-level skills.
And I'll also add that I use a very good textbook (Wolfson's "Essentials" for calc-based physics) that runs about 22 pages of reading per week for the first semester (Mechanics and Thermodynamics) and 18 pages for the second semester (Electricity and Magnetism plus Optics) -- totals that include the textbook problems. Yet few students in any random sample at any university would trade that reading (not to mention homework assignments that are thick with critical-thinking challenges) for 40 pages of a history book.
The worst article, however, is the NYTimes interview with one of the authors. There we discover that he actually thinks surveys and a single essay test -- likely taken without any academic (grade) or monetary (continued employment) motivation -- measure learning, for example, marketable engineering skills. What a load of narrow-minded crap. As if the only job in the world is writing a sociology book.
And I take any claim that grades have been inflated to a Gentleman's B in my classes as a personal insult. My students know better.
But I will agree with the conclusion that challenging students makes them better. What bothers me is that a Professor Emeritus at NYU thinks this is news. It has been known for centuries and has been obvious to me since elementary school.
3 comments:
I appreciate your trust in my judgment about CLA, but I'd prefer to think of myself as an historian who closely observes assessment practices rather than an expert on assessment.
I think that overall, the reporting on the book is a remarkable Rorschach test of people's ability to think critically in a specific context as opposed to the generic performance that CLA claims to assess.
Thanks for the clarification, but I thought the most telling remark you made was that the author agreed with you.
Also, you might read Chad's blog and its comment #8 to see how that task is viewed by science people.
"the reporting on the book is a remarkable Rorschach test of people's ability to think critically in a specific context as opposed to the generic performance that CLA claims to assess."
HA! Yes, I very much got that impression.
While there are a great many reasons I would love to have reliable data on what fosters learning (and the CLA may very well be a step in the right direction toward that goal), I think the overall pursuit is much more challenging than people give it credit for. Either you are measuring something so specific as to be meaningless in other contexts, or you are measuring general learning in a very sloppy fashion.
I am torn between self-satisfaction at taking the time to look a little more carefully at the methodology than most people seemed to, and wondering whether I was only able to do so because I was predisposed to be skeptical of all things learning-assessment related. Meta-critical thinking is not always comfortable.