For lawmakers trying to assess, and fund, schools, the messiest question of all is disarmingly simple: What is a failing school? Given all the stories we hear of kids who don't have books and wouldn't know how to read them if they did, it shouldn't be so difficult to pluck out the scores of schools that don't pass muster. But what about the thousands more where students may not all be excelling but where teachers work with the meagerest of resources and, with a little encouragement, just might turn the corner? In other words, how do you tell the really bad schools from the just plain middling ones?
Both the Senate and House measures assign grades based on yearly scores on reading and math exams. In the Senate bill, schools would have 10 years to bring students up to academic speed and would have to show at least a one percent gain in test scores each year to avoid sanctions ranging from loss of federal funds to, at the extreme end, closure. The House version gives schools a full 12 years, and states would be free to come up with their own definition of the yearly progress schools must demonstrate. Although either plan seems to provide an ample cushion, a growing coalition of teachers, education officials and even some White House staffers is now arguing that the current legislation would deem far too many schools failures, on the order of 100 percent in some areas.
Fanning their worries is a new study by Thomas Kane, a professor at the University of California at Los Angeles, who simulated how the bills would have affected schools in Texas and North Carolina between 1994 and 1999. Although those states' schools are widely hailed as exemplars, Kane found that nearly every one of them would have been considered a failure at some point during that period. A major problem, he explains, is the requirement that schools show unremitting annual progress, an absurd statistical goal when scores often fluctuate from year to year, and even from one test-taking session to the next. "In elementary schools, when you can have as few as 68 students per grade, there's quite a bit of the luck of the draw involved," says Kane. "Four or five rowdy kids one year could have a real negative impact on test scores." California, another state that would have seen scores of failures under the new guidelines, this year doled out close to $700 million in bonuses to teachers at high-scoring schools. "Under one system one year you'd be rewarded and the next severely penalized," says Kane.
The fallout from such mixed messages, and from potential headlines of widespread school failure, is not lost on the White House and its chief negotiator on the education bills, Dallas lawyer Sandy Kress. "We need to find a system where it's not statistical noise or zig-zags in performance that drives classification," he says. Kress and his congressional counterparts have been feverishly devising just such a system, one that, he suggests, might end up looking something like the Texas testing model that Bush campaigned on, which set much more modest goals for students. "We need to set the bar at a fair level and maybe even a bit of a lower level at the beginning," says Kress.
But such words elicit nothing short of a tizzy from education reform advocates like Amy Wilkins of the Education Trust. "We can't come up with a definition of failure where all you have to be able to do is breathe to hop over it," she says.
Congressional conferees are still working to craft the final language of that definition. But don't expect to see the finished product until at least September: they've postponed defining failure until after their late summer recess.