When the State Board of Education sat down on July 11, 2018, to reconsider how the dashboard-based accountability system would measure year-to-year change, two circles of scholarly critics weighed in. The voices from both circles were strong and clear.

One circle comprised more than a dozen scholars from Harvard, USC, UC Davis, Tulane, and the University of Washington. Morgan Polikoff of USC’s Rossier School of Education led the charge of the scholar-critics with this message:

“California has chosen basically the worst growth measure you can possibly choose.” –letter from Morgan Polikoff and other scholars to SBE

This collectively authored letter to the State Board of Education criticized the current method of measuring year-to-year change, asserting that it “profoundly fails the validity test.” The letter went on to urge the State Board to use a method that better demonstrates a school’s impact on a student, citing models now in use by the CORE Districts and by Arkansas, Colorado, Missouri, and New York.

Using blunt language that’s rare in the scholarly world, the letter asserted:

“The state’s current ‘change’ model is unacceptable – it profoundly fails the validity test, and therefore it does not accurately represent schools’ contribution to student achievement. Indeed, it is not clear what it represents at all.” –letter from Morgan Polikoff and other scholars to SBE

The second circle of criticism came from Paul Warren, who 15 years ago was in charge of accountability and assessment at the CDE. Paul went on to work for the Legislative Analyst’s Office before moving to his current position at the Public Policy Institute of California (PPIC). His report, published in June in time for SBE members to digest its 26 pages of clear thinking, faulted the method the dashboard now uses to calculate year-to-year change.

In brief, his PPIC report asked why the CDE would compare different kids over time. When a district’s students in grades 3 to 8 are compared with those who were in grades 3 to 8 in the prior year and the year before that, the comparison ignores the changing composition of the groups being compared. And with a longitudinal student data system like CALPADS available, why would the CDE not control for mobility and for migration into and out of subgroups like free-lunch status and English-learner status?
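To see why this matters, here is a minimal sketch with made-up scores (not CDE data or code from the report): a cohort-to-cohort “change” number can fall even when every continuing student improved, simply because the group of students being averaged has changed.

```python
# Hypothetical illustration: cohort "change" vs. matched-student growth.

# Year 1: scores for the students enrolled in grades 3-8 that year.
year1 = {"ana": 2450, "ben": 2500, "cam": 2550, "dev": 2600}

# Year 2: ben has moved away and a new, lower-scoring student (eli) enrolled,
# but every continuing student's score rose by 10 points.
year2 = {"ana": 2460, "cam": 2560, "dev": 2610, "eli": 2400}

def mean(scores):
    return sum(scores) / len(scores)

# Dashboard-style "change": difference between this year's and last year's
# group averages, even though the two groups contain different students.
change = mean(year2.values()) - mean(year1.values())

# Growth-style measure: match the same students across years (the kind of
# linkage a longitudinal ID system such as CALPADS makes possible) and
# average their individual gains.
matched = set(year1) & set(year2)
growth = mean([year2[s] - year1[s] for s in matched])

print(f"Cohort 'change' measure: {change:+.1f} points")   # -17.5
print(f"Matched-student growth:  {growth:+.1f} points")   # +10.0
```

In this toy example the cohort comparison reports a decline of 17.5 points while the matched students actually gained 10 points each, which is the gap between “different kids over time” and a true growth measure.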

Paul Warren’s PPIC report offered an equally blunt assessment of the limits of SBAC (also known as CAASPP) results.

“Unfortunately, our analysis shows that using CDE group data to calculate growth estimates for most subgroups of students produces inaccurate measures.” –Paul Warren’s PPIC report

You can read more about Paul Warren’s PPIC report on my blog, or if you’re eager to read the report itself, you’ll find it here. The State Board voted to study the matter and, for the moment, decided not to add a growth measure to the accountability system.

The good news is that this debate has brought coherent criticism forward. Superintendents and board members who until now have shared their frustrations about the dashboard’s problems and the volatility of test scores only in private can now do so more publicly. Their worry about appearing to oppose accountability in principle should now be moot. There are plenty of problems with both the CAASPP and the dashboard accountability system that depends on it. They all deserve a hearing before this new system is considered reliable.

Postscript: The SBE voted not to add a growth measure and affirmed the status quo of the year-to-year change measure that Polikoff and others warned against. As of July 2019, the State Board has not yet settled on a growth measure.