In a previous post, I mentioned that I had the opportunity to visit with the HB 1202 committee to discuss assessment. I followed Grant Guyer, Denver’s Executive Director of Assessment, as well as representatives from Harrison District 2.
To get a feel for the HB 1202 group, I arrived early to listen in on the conversation. I was impressed with the learning-oriented, reflective stance the committee members took. Rather than asserting or defending positions, they were (for the most part) asking really good questions and thinking together.
The contrast between the thoughtful, open approach of the committee and the advocacy-oriented approach Denver took was jarring, at least to me. DPS came in with a clear agenda: influence the committee to (basically) preserve the status quo when it came to state accountability testing.
Because DPS chose to take such a forceful position, I feel it is appropriate that the position be critiqued in a public format so that their thinking can be considered and fully vetted. Clearly, DPS’s intent was to influence public policy in a strong way. As this policy impacts every public and charter school in Colorado, examining their claims and thinking is important.
The overarching DPS position is that they (the administration at least) do not support “specific aspects of the shift to minimum federal (assessment) requirements, primarily due to the impact on high schools.”
I’ve attached the report that Grant gave here (DPS Assessment) so readers can review it for themselves (apologies for my scribbles on the scan). However, here are some of their claims and my critique:
Claim #1 – “Standards implementation could be jeopardized as there would not be a consistent, well-constructed assessment to measure of (sic) student performance at the end of a given grade/course.”
What evidence exists to support the claim that standards implementation would be jeopardized if there were no standardized, summative assessments at the end of each grade? Some of the best performing education systems in the world do not test core subjects at the end of each grade, yet they seem to be able to consistently teach to high standards. Further, what evidence or assurances do we have that a machine scored, large-scale, summative assessment is necessary in order for a classroom teacher to teach to high standards? If we are to subject literally hundreds of thousands of Colorado students to an assessment (spending millions in taxpayer dollars to do so), should not the purpose and impact of that assessment be well understood and proven?
Claim #2 – “This would reduce the amount of formal data available to accurately identify where shifts in instruction are needed.”
Large-scale, machine-scored summative tests are woefully inadequate for the purpose of “shifting” instruction. These tests exist primarily for accountability purposes, not for guiding formative instructional practices. This is not a criticism of the tests themselves; they simply were not designed for this purpose. The notion that summative TCAP, CMAS, or PARCC test results will produce effective and responsive classroom-level shifts in instruction is hopeful theory with a vacuous evidence base.
Claims #3 & #4 – “Less information available to track student progress toward college and career readiness,” & “Less information available for families to make informed decisions about which high schools are the best options for their children.”
The DPS position assumes (wrongly) that an assessment system at federal minimums (or even fewer assessments) would be devoid of student assessment information in those areas where there is no mandated accountability exam. Clearly, DPS’s approach to improvement is founded on test-based accountability and school choice. In theory, for those two approaches to work you need assessments to shame and punish and big data to create a more perfect school choice “market.” Nothing would preclude DPS from heaping all the assessments they want on students to feed their theory of change. However, if we did not mandate such measures we would not be forcing every other school district in the state to follow DPS’s logic model.
Claim #5 – “Eliminating these data points at the high school level could shift the accountability system to focus too much on status. This distinctly disadvantages urban districts that have students with low levels of preparedness.”
DPS assumes (wrongly) that whatever growth, accountability, and accreditation system we currently have in place would simply continue, minus some high school assessments. The current accountability framework was designed around one set of assumptions about available test data. In a world with fewer accountability tests, a different model would need to be designed. That model could conceptualize growth in a number of different ways and could also recognize student poverty demographics and “preparedness” differently, and it should. Here, DPS just wrongly assumes we would continue operating the same system we have now. Further, the report states that “DPS strongly values growth data.” That’s great! But if this is indeed true, there is no basis to believe DPS could not continue to assess and measure growth without a mandated state test in place. In fact, dollars currently used for large-scale assessments could be provided directly to districts for the very purpose of locally determined measures and analysis.
Claim #6 – “Less external data available to assess student growth for teacher evaluation.”
Besides the fact that there is no credible, peer-reviewed evidence that using student testing data to evaluate teachers actually improves instruction, and that no high-performing system on earth uses this approach, the DPS claim is also flawed. As previously discussed, if DPS wishes to have machine-scored, large-scale assessment data to evaluate its teachers, nothing prohibits them from doing just that. The DPS claim seems to imply that without this standardized testing data, our statewide effort to evaluate teachers using assessment data is in peril; but some 70% of teachers already teach in untested subjects and grades. It is not clear (at least to me) that the presence or absence of summative statewide assessment data does much to help us solve the significant technical questions related to using testing data to evaluate teachers.
Claim #7 – “…districts would have to take on the additional burden of creating/purchasing products to ensure that schools are meeting student learning expectations (and) the development of local growth measures to assess the performance of schools and teachers.”
As previously discussed, dollars currently appropriated for state-level accountability assessments could, at some level, be re-purposed to districts for locally determined and more formative measures, so it is not clear that there would be an additional burden. Further, a number of growth measures are already available for districts to use (student growth percentiles, value-added measures, catch-up/keep-up systems), so it is also not clear that a district would need to “develop” these measures.
Again, DPS is following a theory of change for improving their organization built on test-based accountability and school choice. While refraining from a critique of these two approaches to school improvement, I will just say that these are not the only two methods by which a system might build great schools. In fact, the best performing school systems (based on PISA results or equating studies) were not built using these models.
Regardless, it is up to the community of Denver to decide which model is most appropriate for their community and then hold their school leaders accountable for the results.
The larger problem with DPS’s jarring advocacy stance with the HB 1202 committee is that it effectively forces that theory of change on every other school organization in the state – whether we want it, or if there is any evidence to support it, or not.
Of note, in the course of these discussions I have heard no one arguing for the complete abolition of testing and accountability. The better question is how we can have an accountability system that is as efficient and balanced as possible, without over-burdening students and schools with testing. A review of the testing approaches in high-performing global systems reveals that such a system can be implemented effectively with far fewer tests than we currently use in Colorado.
I encourage further dialogue and discussion on this issue and welcome a response from Grant Guyer (a very nice person, based on my brief interaction with him) or others from DPS. For convenience, I have also posted my presentation materials to the HB 1202 committee for a similar critique, if anyone feels so inclined.