Monday, Jul. 13, 1981

What Those Soaring Scores Mean

As marks rise, can real learning be far behind?

Students at the Garthar B. Peterson Elementary School in Atlanta used to dread standardized achievement tests. But this spring, when testing time came, their principal staged a pep rally and promised them a trophy and a party if they did well. It worked. When the results came in, the Peterson school had something to cheer about: after years in the doldrums, more than half its students scored above national norms in reading.

Peterson Elementary was not alone. Earlier this year, the Jacksonville school system announced that it now placed fourth in Florida test standings, up from a near-bottom ranking just three years ago. California school officials proudly point out that their elementary pupils are doing notably better in math and reading than they were six years ago. Boston and Chicago have made similar claims about improved test results, and last week the State of New Jersey joined in. And according to New York City school officials, more than half the public school students there now read at or above grade level, for the first time in ten years.

Improvement is generally attributed to increased parental involvement and classroom attention to basic skills. But how real is the progress? Bearing such nicknames and acronyms as CAT, ITBS, CTBS and METRO,* a bewildering battery of tests annually churns out statistics in "percentiles," "grade levels" or "stanines" (short for "standard nines," a nine-point scale designed so that a score occupies only a single column on a computer punch card). Quite often, these figures obscure as much as they reveal. A month after New York school officials had boasted about the big jump in average test scores, parents of more than a fifth of the city's seventh-graders learned that their children would be held back a grade because of "reading deficiencies." Simply swapping one test for another can dramatically improve scores: when Cleveland schools substituted the California test for the more rigorous Iowa test this year, the number of students with average and above-average scores jumped by 10%.

Critics point out that the "national norm" measuring a student's performance is far from absolute. It has been set lower in recent editions of some tests to reflect declining scores. A student above this year's norm is likely to be below the norm of, say, 15 years ago. Skeptics also complain that teachers are drilling students on test skills instead of real reading and writing. Disgusted by the cramming mania, one Missouri superintendent recommended that schools in his district refrain from "preparing for the Super Bowl."

The closest thing to a standardized estimate of U.S. student achievement is the series of periodic reports put out by the National Assessment of Educational Progress (NAEP), a Denver-based agency funded by the Federal Government. NAEP tests children at ages nine, 13 and 17 in such subjects as reading, math and science, using a representative sample from all over the U.S. A study released this spring tested reading skills; it contrasted literal understanding of a passage with inferential understanding, the ability to draw conclusions from it. The study found that nine-year-olds' reading skills have improved 3.9% since 1971, with whites up 2.8% and blacks up 9.9%. Among the 13-year-olds, whites stayed about the same; blacks rose by 4.2%. Score one for real learning.

* For California Achievement Tests, Iowa Tests of Basic Skills, Comprehensive Tests of Basic Skills, Metropolitan Achievement Tests.