Bob Somerby notes that Merryl Tisch, New York state’s new schools chancellor, is quoted today by the New York Times wondering if rising test scores are really all they’re cracked up to be. And he’s appalled:
Let’s be fair to Tisch. She’s new to her post as head of the Board of Regents — although she’s served as a member of the board since 1996….That said, Tisch’s statement is quite remarkable.
“As a board, we will ask whether the test is getting harder or easier?” What the fig has the board been doing for the past thirteen years? To state the blindingly obvious, the question Tisch raises is well beyond basic; it makes no sense to compare test scores from one year to the next unless we know that the tests in question have remained equally difficult. And in New York City, this question was specifically raised by skeptical teachers at least as far back as 2005.
….As we’ve noted in the past: State education departments should be able to demonstrate that this year’s test is as hard as last year’s. If tests of this type have been competently devised, this shouldn’t be a matter of guesswork. State departments should have technical manuals which show the new tests are equally hard. For some time, we’ve noted that reporters at newspapers like the Times should be insisting on this.
Now see, because I’m a nicer guy than Bob, I would have let this go with a halfhearted cheer that at least Tisch was bringing up a good question. But that’s because I’m such a squish. (And also because I’m not entirely sold on these tests anyway.) On the merits, though, Bob is right.
If you truly believe in the high-stakes testing mania — and plenty of people do — that’s fine. Maybe it will turn out to be a good idea. But there’s strong evidence of problems on at least two scores. First, different states have wildly different standards on their tests, so comparing them to each other is hopeless. Second, there’s very little correlation between improvements on state tests and improvements on the more reliable NAEP tests. This suggests that a lot of state tests have in fact been steadily dumbed down to meet federal NCLB standards, so comparing them year-to-year is hopeless too.
Now, the whole point of high-stakes testing is to provide us with hard, quantitative assessments of how our kids are doing. You simply can’t be a believer in this stuff and not care whether the tests are meaningful from place to place and year to year. And yet, as Bob says, this issue gets only an occasional mention each year before being quickly dropped down the memory hole until another year’s test results come out and someone happens to casually mention it again. It’s almost enough to make you think that a lot of these folks are more interested in using tests as a political cudgel than in whether kids are actually learning something. Almost.