Is Our Kids Studying? – Take 2

| Wed Jul. 7, 2010 11:33 AM EDT

After I posted a couple of days ago on the subject of whether or not college students are studying less than they used to, I got a long email on the subject from Paul Camp, a physics professor at Spelman College. This is pretty far outside my wheelhouse, but his take was so interesting that I wanted to repost it here just so that everyone would have a chance to comment on it. Here's what he told me:


I've been engaged in a few conversations about this in the past couple of years. I can offer the following data that correlates with anecdotal evidence from other professors at a variety of institutions.

Since the early 1990s, I have pre- and post-tested all of my introductory mechanics classes using a research-based diagnostic instrument, the Force and Motion Conceptual Evaluation (FMCE). This instrument is based on research by Ron Thornton at Tufts that identified a reproducible sequence of intermediate states that all people seem to pass through in the process of gaining a Newtonian understanding. So it gives me not only a do-they-get-it/do-they-not measure, but also, along several conceptual dimensions, a measure of how close they are to getting it.

My first job out of graduate school was at an unranked tier 4 institution in Myrtle Beach, South Carolina. Coastal Carolina "University" to be specific. It was the 13th grade. There were a few brilliant students — I've learned that for a variety of reasons you can find exceptional students anywhere — but for the most part the student body was composed of people who were there for financial reasons or because they thought it would be a cool idea to go to school at the beach. The first four pages of our brochure described the beach, not the college. We knew which side our bread was buttered on.

I pretty reliably got 50-60% normalized gains on the FMCE.

Normalized gain is the ratio of how much their scores increased to how much they could have increased: (post - pre)/(100 - pre). 50-60% is actually pretty stupendous on this particular measure. It means they were typically getting 80-90% of the questions right.
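For readers who want to see the arithmetic, here's a minimal sketch of that formula in Python. The function name and the example scores are mine, purely for illustration; scores are percent correct, as in Camp's description:

```python
# Normalized gain: the fraction of the available improvement that was
# actually realized, i.e. (post - pre) / (100 - pre).
def normalized_gain(pre: float, post: float) -> float:
    """Scores are percentages (0-100); pre must be below 100."""
    return (post - pre) / (100 - pre)

# Hypothetical class: pre-test average 40%, post-test average 73%.
# They gained 33 points out of a possible 60, a 55% normalized gain.
print(round(normalized_gain(40, 73), 2))  # prints 0.55
```

Note that the measure rewards improvement relative to headroom: a class that starts at 80% and ends at 90% gets the same 50% gain as one that goes from 20% to 60%.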

I left that job in a huff. There's a very long story, but the short version is that I was ordered by my dean to give everyone a passing grade and I wouldn't do it. I spent 5.5 years in a research position at Georgia Tech before coming to Spelman.

Spelman is a top 75 liberal arts college, according to US News, and top 10 according to the Washington Monthly. My personal impression of the students is that the average is generally much higher than it was at Coastal. These are students who can think around a few corners. Also, since they are able to cross register in some considerably easier classes at other AUC institutions, I tend to get classes of students who are there because they choose to be there and are therefore more engaged and thoughtful about their efforts.

I think I'm at least as good an instructor as I used to be, and probably a lot better. I know quite a bit more about developmental psychology and cognitive science as a result of my job at Georgia Tech and I think that improves my instruction considerably.

And yet, in a good year I get about 20-30% normalized gains.

I don't really know what is different but something clearly is.

Right now, I'm blaming No Child Left Behind, but that is less because of data than out of general suspicion of high stakes testing. In fact, I am also now quite skeptical of pre/post testing (I could send you a research paper on that if you're interested), but not so skeptical that it can account for the difference in the data.

My job at Georgia Tech involved, among other things, observing curriculum implementations in middle schools. I was in the schools at least once a week, and at one point three days out of five for 10 weeks. What I saw was deeply disturbing.

In Georgia, the tests come at the beginning of May so on the first of March all education comes to a screeching halt. From that point on, the entire day is filled with drilling on multiple choice practice tests, pep rallies about how great we're going to do on the tests, and so on. After the test, the school year is over. Until the second week in June, every day is field day, movie day, recess day . . . since you can no longer affect the test, there's really no longer any point in school, now is there?

This means that compared to when I was in school (in Georgia), the school year has been shortened by a third, and the one thing that students have the most experience with by far is multiple choice testing.

Forgive me if I point out that this isn't really the best preparation for college.

I can't really say that this is a correct account. I can say that many faculty I have spoken with have expressed similar observations without me prompting them, but the difference between me and them is that I have data. I know what I used to get at a crappy college with surfer students, and I know what I now get at a top tier college with highly engaged students, and it isn't consistent with what ought to be happening, all other things being equal.

So that's my data point. I suppose I could always have had some kind of mental excursion and become a bad teacher without knowing it, but I don't think so and my students don't think so either, and neither do my peers in and out of the physics department. So I'm going to provisionally discount that explanation.

I left Coastal in 1998. I started at Spelman in 2004. You tell me what changed during that time frame.