Wow. Our experiment is off to a great start—let's see if we can finish it off sooner than expected.
A reader emails me regarding a story about Sam Houston High School in San Antonio, which has finally met state standards for academic achievement:
It's actually a very sad story about kids who are getting screwed by the system and by some educators, and perfectly encapsulates so much of what is wrong with education reporting. At least twice officials explain to the reporter (apparently without realizing they are doing so) why the results being touted are bogus, and yet the paper still presents this as good news about a success. And the school held a pep rally.
Let's take a look. Here's the first explanation:
“I think the main thing is we tested less kids,” English teacher Richard Acuña said. The school identified additional special-needs students who qualified to take tests that aren't counted when the state determines accountability ratings, he said.
And here's the second:
The state requires a 60 percent pass rate in math to reach the academically acceptable threshold. Though Sam scored lower than that, it is still set to receive the acceptable rating because last year the state introduced a new tool that allows schools to get credit for some students who did not pass the TAKS if they appear to be on target to pass in the future.
The formula, the Texas Projection Measure, uses a student's current test scores in several subjects as well as a previous year campus average score to project the student's future test performance. With the TPM, Sam's pass rate in math is 72 percent, enough to put it into academically acceptable territory.
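Mechanically, a projection measure like the one described above can be sketched as a weighted combination of a student's current scores and the campus average, with students counted as passing if they either passed outright or are projected to pass later. To be clear, this is a made-up illustration: the actual TPM uses regression coefficients estimated by the state, and the weights and passing score below are placeholders, not the real formula.

```python
def project_future_score(current_scores, campus_average,
                         subject_weights=None, campus_weight=0.3):
    """Project a student's future score from current scores in several
    subjects plus the prior-year campus average. Weights are placeholders,
    not the state's actual regression coefficients."""
    if subject_weights is None:
        # Placeholder assumption: weight each subject equally.
        subject_weights = [1 / len(current_scores)] * len(current_scores)
    student_part = sum(w * s for w, s in zip(subject_weights, current_scores))
    return (1 - campus_weight) * student_part + campus_weight * campus_average

def tpm_style_pass_rate(students, passing_score=2100):
    """Credit a student as passing if they passed outright OR are projected
    to pass -- the mechanism that lifts a sub-60% raw rate to something
    higher, as with Sam Houston's 72% in math."""
    credited = sum(
        1 for passed, scores, campus_avg in students
        if passed or project_future_score(scores, campus_avg) >= passing_score
    )
    return 100 * credited / len(students)
```

The key point the sketch makes concrete: the campus-average term and the projection threshold mean a student who failed the actual test can still count toward the school's pass rate, which is why the reported 72% and the raw sub-60% figure can both be true.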
For what it's worth, I'd add a third: the school's passing rate in science jumped from 38% to 62%. In one year. I mean, maybe that's legit, but if it is, they need to figure out how to bottle it and sell it. I'd usually be impressed by a five-point rise in a single year. A 24-point rise hardly seems believable.
As longtime readers know, I have pretty ambivalent feelings about high-stakes testing. I've heard too many horror stories, both in the press and from friends, to be a big fan, but at the same time it's not clear what better option we have. But even if you are a big fan, there's just too much anecdotal evidence that a lot of success in testing regimes comes from gaming the system and lowering the standards of the tests when necessary. Both seem to be in play in this story.