What is PVAAS, and how does it help us?

Published in the February 2008 Voice

At the PSEA House of Delegates in December, PSEA members had the opportunity to hear experts’ perspectives on PVAAS and its use and value for Pennsylvania schools, teachers, and students. Among those offering their insight on the system were Lee Spears, a 17-year PSEA member who teaches at Norristown Area High School; Daniel McCaffrey of the RAND Corporation; and Harris L. Zwerling, PSEA’s Assistant Director of Research.

The Pennsylvania Value-Added Assessment System, or PVAAS, focuses primarily on academic growth and progress. The Pennsylvania Department of Education (PDE) first implemented it in 2002 and steadily expanded it to include all 501 school districts in the Commonwealth.

Pennsylvania Secretary of Education Gerald L. Zahorchak summed it up nicely when he said, “Achievement is a measure, and so is progress. Our overall system is enhanced when we weigh both measures when determining student and school results.”

Given the demands of meeting AYP each year and No Child Left Behind’s single-minded insistence on a single standard of accountability, is there any hope that the other side of assessment, student progress, will make an appearance on the public education landscape and save the day? Well . . . yes and no.

PVAAS: What is it?

Value-added analysis is a statistical method used to measure the influence of an academic unit (district, school, program, or classroom) on the academic growth of an individual student or groups of students over time. According to RAND’s McCaffrey, value-added analysis, “use(s) multiple prior student test scores...to estimate a student’s likely outcome had the student been in the average school. It then compare(s) actual achievement to predicted values for all students and aggregates to school or other group levels.”
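To make that description concrete, consider a minimal sketch of the general idea in Python. This is not the actual PVAAS/EVAAS model, which is a far more elaborate proprietary statistical model; the synthetic data, the three-school setup, and the simple least-squares fit below are invented purely for illustration.

```python
# Minimal illustration of value-added analysis (NOT the actual PVAAS/EVAAS
# model): predict each student's current score from prior scores, then
# average actual-minus-predicted residuals by school.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 300 students, two prior-year scores, one current score.
n = 300
prior = rng.normal(500, 100, size=(n, 2))      # prior test scores
school = rng.integers(0, 3, size=n)            # assignment to 3 schools
school_effect = np.array([-10.0, 0.0, 10.0])   # true (hidden) school effects
current = (0.5 * prior[:, 0] + 0.4 * prior[:, 1]
           + school_effect[school] + rng.normal(0, 30, size=n))

# Expected current score for a student with these priors in the
# "average" school: a least-squares regression on prior scores.
X = np.column_stack([np.ones(n), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
predicted = X @ beta

# Value-added estimate: mean residual (actual - predicted) per school.
residual = current - predicted
for s in range(3):
    print(f"School {s}: estimated effect = {residual[school == s].mean():+.1f}")
```

The point of the sketch is only the logic McCaffrey outlines: prior scores set the expectation, and a school’s “effect” is how far its students land, on average, above or below that expectation.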

Pennsylvania adopted its particular form of value-added analysis based on the EVAAS system developed by William Sanders and his associates at the SAS Institute in North Carolina. Under contract with PDE, SAS provides PVAAS data to all of Pennsylvania’s school districts. PVAAS provides districts with individual student and aggregated scores that compare actual performance on the PSSA mathematics and reading tests with what is called the “growth standard,” that is, how students would have performed had they had an average educational experience. A second component of PVAAS provides projections of students’ performance on future PSSA tests.

PVAAS data are available for all Pennsylvania districts to use for local decision making as deemed appropriate by local authorities. SAS and PDE have taken the position that it is inappropriate to use PVAAS data for evaluating the performance of teachers.

How can it be used?

According to PDE, “as educators begin to better understand value-added and its benefits, collaborative professional dialogue among educators will include discussion of value-added data as an additional piece of information regarding student learning.” PVAAS provides projection data designed to target intervention early, when students are predicted to fall short of proficiency on future tests. The projection methodology has been accepted by the U.S. Department of Education as part of an alternative means of calculating “safe harbor” for Adequate Yearly Progress under NCLB.
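As a rough illustration of how such projections can be used to flag students for intervention, consider the sketch below. Again, this is not the actual PVAAS projection methodology; fitting a simple regression on a completed cohort and applying it to current students, along with the cut score and every number shown, is invented for the example.

```python
# Illustrative projection (NOT the actual PVAAS methodology): fit a model on
# a prior cohort with known outcomes, then flag current students whose
# projected future score falls below a hypothetical proficiency cut score.
import numpy as np

rng = np.random.default_rng(1)
CUTOFF = 1300  # hypothetical proficiency cut score

# Completed cohort: two earlier-grade scores plus the later outcome score.
past = rng.normal(1300, 150, size=(500, 2))
outcome = 0.6 * past[:, 0] + 0.3 * past[:, 1] + rng.normal(130, 60, size=500)

X = np.column_stack([np.ones(len(past)), past])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# Current students: only the earlier scores so far; project the later test.
current = rng.normal(1300, 150, size=(4, 2))
projected = np.column_stack([np.ones(len(current)), current]) @ beta
for i, p in enumerate(projected):
    status = "flag for intervention" if p < CUTOFF else "on track"
    print(f"Student {i}: projected score {p:.0f} -> {status}")
```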

Given its current state of development, PVAAS appears to be appropriate for low-risk (low-stakes) purposes such as identifying students who require intervention and initiating discussions about curriculum or professional development. Generally, value-added assessment systems have been used in two ways: as an accountability measure and for professional/curriculum development.

PDE has advised districts that PVAAS is not to be used for teacher compensation or evaluation, and, to date, there is little evidence that school districts are using it for such purposes. In other states, however, value-added data have been used that way. While one could easily argue that PVAAS provides better performance measures than AYP, this is faint praise. As a vast amount of value-added research literature attests, many technical issues remain. Among the most serious concerns are the precision and volatility of value-added estimates. This concern is heightened with the projection methodology, for which the error of measurement is likely to be quite large.

What are its problems?

Fundamental questions need to be answered before relying too heavily on PVAAS data.

“What ‘growth’ does PVAAS truly measure? That’s the first question. What exactly is PVAAS measuring when estimating the statistical relationships between tests in different years that are weakly linked, or tests in different subjects? Does this measure growth with respect to Pennsylvania’s academic content standards? Clearly, it does not,” says Zwerling. “Under such circumstances, PVAAS can only claim to measure the joint distribution of student scores on two assessments that may or may not have related content. This is descriptive, but hardly provides the sort of causal connection that should form the basis for accountability.”

PVAAS does provide some useful information.

Simply put, test score growth provides an important piece of information: it tells us when students are making progress even if they are performing below proficiency.

However, the RAND survey of PVAAS pilot districts indicated that administrators and teachers made much greater use of state, district, and classroom tests than of PVAAS.

Is PVAAS the best system for our schools? One major concern is that PVAAS does not account for differences in student demographics, making causal attributions about the influence of educational units invalid and potentially misleading. The RAND study of PVAAS asked what effect PVAAS has had on student test scores. The answer: not much.

There may be many reasons why that was the case, not least the newness of PVAAS and its limited use by districts. However, the study suggests that, at a minimum, policymakers need to do a serious cost-benefit analysis of any additional investment in testing and in the statistical analysis of test results. What are the best ways to spend scarce educational resources?

PSEA believes that PVAAS should be used with caution for professional and curriculum development. PSEA also does not object to the low-stakes use of the PVAAS projection methodology as an alternative means of calculating “safe harbor.”

The bottom line? No testing program can replace detailed on-site observation and analysis as a means for pursuing and achieving school improvement.


