174 Colorado Superintendents have signed on to a position statement related to school funding. Read the actual statement by following this link: Colorado Superintendent Finance Statement
In a previous post, I mentioned that I had the opportunity to visit with the HB 1202 committee to discuss assessment. I followed Grant Guyer, Denver’s Executive Director of Assessment, as well as representatives from Harrison District 2.
To get a feel for the HB 1202 group, I arrived early to listen in on the conversation. I was impressed with the learning and reflective stance I heard the committee members take. Rather than asserting or defending positions, the committee members were (for the most part) asking really good questions and thinking together.
The contrast between the thoughtful and open approach of the committee and the advocacy-oriented approach Denver took was jarring, at least to me. DPS came in with a clear agenda: influence the committee to (basically) preserve the status quo when it came to state accountability testing.
Because DPS chose to take such a forceful position, I feel it is appropriate that the position be critiqued in a public format so that their thinking can be considered and fully vetted. Clearly, DPS’s intent was to influence public policy in a strong way. As this policy impacts every public and charter school in Colorado, examining their claims and thinking is important.
The overarching DPS position is that they (the administration at least) do not support “specific aspects of the shift to minimum federal (assessment) requirements, primarily due to the impact on high schools.”
I’ve attached the report that Grant gave here (DPS Assessment) so readers can review it for themselves (apologies for my scribbles on the scan). However, here are some of their claims and my critique:
Claim #1 – “Standards implementation could be jeopardized as there would not be a consistent, well-constructed assessment to measure of (sic) student performance at the end of a given grade/course.”
What evidence exists to support the claim that standards implementation would be jeopardized if there were no standardized, summative assessments at the end of each grade? Some of the best performing education systems in the world do not test core subjects at the end of each grade, yet they seem to be able to consistently teach to high standards. Further, what evidence or assurances do we have that a machine scored, large-scale, summative assessment is necessary in order for a classroom teacher to teach to high standards? If we are to subject literally hundreds of thousands of Colorado students to an assessment (spending millions in taxpayer dollars to do so), should not the purpose and impact of that assessment be well understood and proven?
Claim #2 – “This would reduce the amount of formal data available to accurately identify where shifts in instruction are needed.”
Large-scale, machine-scored summative tests are woefully inadequate for the purpose of “shifting” instruction. Primarily, these tests are for accountability purposes and not for guiding formative instructional practices. This is not to criticize the tests themselves – they simply were not designed for this purpose. The notion that summative TCAP, CMAS, or PARCC test results will produce effective and responsive classroom-level shifts in instruction is hopeful theory with a vacuous evidence base.
Claims #3 & #4 – “Less information available to track student progress toward college and career readiness,” & “Less information available for families to make informed decisions about which high schools are the best options for their children.”
The DPS position assumes (wrongly) that an assessment system at federal minimums (or even fewer assessments) would be devoid of student assessment information in those areas where there is no mandated accountability exam. Clearly, DPS’s approach to improvement is founded on test-based accountability and school choice. In theory, for those two approaches to work you need assessments to shame and punish and big data to create a more perfect school choice “market.” Nothing would preclude DPS from heaping all the assessments they want on students to feed their theory of change. However, if we did not mandate such measures we would not be forcing every other school district in the state to follow DPS’s logic model.
Claim #5 – “Eliminating these data points at the high school level could shift the accountability system to focus too much on status. This distinctly disadvantages urban districts that have students with low levels of preparedness.”
DPS assumes (wrongly) that whatever growth, accountability, and accreditation system we currently have in place would just continue, only without some high school assessments. The current accountability framework was designed with one set of assumptions about available test data. In a world with fewer accountability tests, a different model would need to be designed. This different model could conceptualize growth in a number of different ways and could also recognize student poverty demographics and “preparedness” in different ways, and it should. Here, DPS just wrongly assumes we would continue the same system we’ve been operating under. Further, the report states that “DPS strongly values growth data.” That’s great! But, if this is indeed true, there is no basis to believe DPS could not continue to assess and measure growth without having a mandated state test in place. In fact, dollars currently used for large-scale assessments could be provided directly to districts for the very purpose of locally determined measures and analysis.
Claim #6 – “Less external data available to assess student growth for teacher evaluation.”
Besides there being no credible, peer-reviewed evidence that using student testing data to evaluate teachers actually improves instruction, and the fact that no high performing system on earth uses this approach, the DPS claim is also flawed. As has been previously discussed, if DPS wishes to have machine-scored, large-scale assessment data to evaluate its teachers, there is no prohibition against them doing just that. The DPS claim seems to imply that without this standardized testing data, our state-wide effort to evaluate teachers using assessment data is in peril – but we already have some 70% of teachers in untested subjects and grades. It is not clear (at least to me) that the presence or absence of summative statewide assessment data does much in helping us solve the significant technical questions related to using testing data to evaluate teachers.
Claim #7 – “…districts would have to take on the additional burden of creating/purchasing products to ensure that schools are meeting student learning expectations (and) the development of local growth measures to assess the performance of schools and teachers.”
As has been previously discussed, dollars currently appropriated for state-level accountability assessments could, at some level, be re-purposed to districts for locally determined and more formative measures, so it’s not clear that there would be an additional burden. Further, there are a number of growth measures available for districts to use (student growth percentiles, value-added measures, catch-up/keep-up systems), so it is also not clear that a district would need to “develop” these measures.
Again, DPS is following a theory of change for improving their organization built on test-based accountability and school choice. While refraining from a critique of these two approaches to school improvement, I will just say that these are not the only two methods by which a system might build great schools. In fact, the best performing school systems (based on PISA results or equating studies) were not built using these models.
Regardless, it is up to the community of Denver to decide which model is most appropriate for their community and then hold their school leaders accountable for the results.
The larger problem with DPS’s jarring advocacy stance with the HB 1202 committee is that it effectively forces that theory of change on every other school organization in the state – whether we want it, or if there is any evidence to support it, or not.
Of note, in the course of these discussions I have heard no one arguing for the complete abolition of testing and accountability. The better question is how we can have an accountability system that is as efficient and balanced as possible, without over-burdening students and schools with testing. A review of the testing approaches in high performing global systems reveals that such a system can be effectively implemented with far fewer tests than we currently use in Colorado.
I encourage further dialogue and discussion on this issue and welcome a response from Grant Guyer (a very nice person, based on my brief interaction with him) or others from DPS. For convenience, I have also posted my presentation materials to the HB 1202 committee for a similar critique, if anyone feels so inclined.
I was recently honored with the brief opportunity to speak to Colorado’s HB1202 Task Force, which is studying the state’s assessment system and responsible for suggesting changes to the Colorado Legislature for consideration in the upcoming legislative session.
I focused my remarks on the importance, process, and evidence on formative measures. I also spoke to the differences between accountability assessments in the United States (and Colorado) versus other high performing nations or municipalities.
The memo I prepared for the group can be accessed here: HB1202 ECS Flyover
The entire text is also provided below. I welcome observations, comments, questions, or critique.
From: Jason E. Glass, Superintendent & Chief Learner
To: HB 14-1202 Task Force
Re: Formative Assessment & a Flyover of Assessment in Eagle County
The purpose of this memorandum is to briefly orient the members of the HB14-1202 Task Force to the large-scale theory of change, an instructionally focused approach to assessment, and some of the formative measures employed in Eagle County Schools. For clarity, this memo will focus on measures whose chief purpose is for improving instruction, as opposed to measures whose chief purpose is accountability.
The Instructional Core
Eagle County Schools uses an “international benchmarking” approach to school improvement. That is, practices are drawn from comparative studies of high performing education systems, both within the United States and abroad. In addition, the organization focuses on practices which have the support of a peer-reviewed body of evidence.
As such, the “in-school” theory of change rests on three major and interrelated tenets which feature prominently in every high-performing educational system. Liz City and Richard Elmore (2009) capture these three elements in their discussions of the “instructional core,” or the relationship between the teacher and student in the presence of content.
Important to City and Elmore’s framework, there is an emphasis on the relationship between the three components. One element cannot change without impacting the other two. For example, we cannot effectively raise the quality or “rigor” of the content (or standards) without also adapting the instructional approach of the teacher and the engagement level of the student.
Assessment through the Lens of Instruction
Formative measurement is an essential part of bringing the instructional core to life. For the teacher to effectively reach and engage every student in learning, that teacher must understand the level of current content performance or knowledge of their students. The teacher must deliver high quality instruction and then determine if that instruction had the desired impact on students (i.e. improved content knowledge or skills). Almost invariably, some students will require additional supports or a differentiated approach to reach the content or skill standard. So, the teacher must apply some intervention, customized to the student, and then check again to see if that intervention had the effect of raising the student to the performance standard.
The “response to intervention” or “response to instruction” (RtI) model provides a useful framework for understanding this process.
Well designed and employed formative assessments are part and parcel of the RtI process. All students should receive a universal screen or benchmark assessment as part of the general education curriculum. As there may be some time (days, weeks, or months) between the administrations of these assessments, they can be referred to as long cycle.
These long cycle results will reveal some students who struggle to meet the standard in the general education environment, who should then receive some intervention customized to that student’s needs. Determining the appropriate intervention often requires the use of a diagnostic test to pinpoint the precise area where the student is struggling (e.g., phonics vs. phonemic awareness). Then, once an intervention is applied, the determination as to whether the intervention is working should be made through a progress monitoring assessment. As the time between these assessments is less than at the universal level, they are sometimes called medium cycle assessments and may be administered every few learning sessions or weeks (or longer, as the team of practitioners determines).
Even after a targeted intervention, some students will require intensive support. These students will receive diagnostic and progress monitoring even more frequently – perhaps multiple times over the course of a lesson as the teacher iterates to determine what the barrier to learning is and whether it is being mitigated through supports or other interventions.
The RtI approach is based on the principles of a “high reliability system” (see Eck et al., 2011), meaning generally that as the probability of failure increases, supports/interventions and monitoring also increase. The goal is to determine which students are struggling, and why, as quickly as possible and to intervene so that the student meets the performance standard.
Notably, formative assessments may be more standardized and formal or they may be individualized and informal. A powerful mode of formative assessment is a teacher walking through a room as students work, asking questions and checking for understanding. Alternatively, formative assessment may involve sophisticated and computer-based standardized measures. Variations in formative assessments may stem from variations in the elements of the instructional core (different teachers, different students, and different content) or from constraints related to things like time and technology. This entire process may happen in a very structured and mechanical way, or it may happen much more naturally and intuitively. What is most important is that it is, in fact, happening.
It should also be noted that the formative assessment process is not exclusive to the teacher. Perhaps the most powerful mode of formative assessment is for the student to self-monitor and assess their own progress.
Evidence and Formative Assessments
The body of both comparative and peer-reviewed scientific evidence for the effectiveness of formative assessment is (in my professional opinion) strong.
Black and Wiliam (1998), in a meta-analysis, found that student achievement gains associated with formative instructional practices were “among the largest ever reported for educational interventions.”
Similarly, Hattie (2011), synthesizing meta-analyses covering over 50,000 studies, identified strategies related to formative assessment and RtI as having among the largest effect sizes calculated.
From a comparative system perspective, formative assessment and responsive teaching form the instructional basis of practically every high performing education system. Finland, a system perhaps more averse to summative accountability testing than any other in the world, uses formative assessment extensively. In Schwartz & Mehta’s chapter on Finland in Tucker’s comparative study Surpassing Shanghai, it is noted that “While the Finns do not assess for accountability purposes, they do an enormous amount of diagnostic or formative assessment at the classroom level.”
Notably, when a Finnish principal was asked (in Schwartz & Mehta) how well she knew students were performing, she answered that there was so much formative assessment data at her disposal it was impossible not to know.
Formative Assessments in Eagle County Schools
Eagle County Schools relies on a number of formative measures to guide instruction. Choice over the appropriate use of these formative measures is left to the building practitioners, including the building principal, teacher leaders, and classroom teachers.
Depending on grade/developmental level, student characteristics, staff preferences, content area, or specific purpose – the following is an incomplete list of formative assessments used in Eagle County.
- Early Childhood & Elementary
- GOLD Assessment
- mCLASS (DIBELS Next/IDEL)
- AIMS Web
- Core Knowledge Language Arts
- Engage New York, Literacy & Math (Achieve)
- District Formative Measures (ECS Teacher Developed)
- Classroom grades (standards based)
- Middle School
- mCLASS (DIBELS Next/IDEL)
- Renaissance STAR
- NWEA MAPS
- Engage New York, Literacy & Math (Achieve)
- District Formative Measures (ECS Teacher Developed)
- Classroom grades
- High School
- NWEA MAPS
- District Formative Measures (ECS Teacher Developed)
- Classroom grades
Eagle County Schools is, admittedly, not yet a globally high performing system. But we are in our first year of building an instructionally focused assessment system patterned after global high performers, and formative assessment is a central part of that effort.
Black, P., & Wiliam, D. (1998). Inside the Black Box: Raising Standards Through Classroom Assessment. Phi Delta Kappan, 80(2), 139-148.
City, E., Elmore, R., Fiarman, S., & Teitel, L. (2009). Instructional Rounds in Education: A Network Approach to Improving Teaching and Learning. Cambridge, MA: Harvard Education Press.
Eck, J., Bellamy, G., Schaffer, E., Stringfield, S., Reynolds, D. (2011). High Reliability Organizations in Education. Noteworthy Perspectives, 1-48.
Hattie, J. (2011). Visible Learning for Teachers: Maximizing Impact on Learning. New York, NY: Routledge.
Tucker, M. (2011). Surpassing Shanghai: An Agenda for American Education Built on the World’s Leading School Systems. Cambridge, MA: Harvard Education Press.
Infographic from the National Center on Education & the Economy
Colorado’s recently released TCAP results landed across the state with a soft thud. Overall, scores were flat or down in most subjects and grades. Even among charter schools, the ballyhooed darlings of the reform movement, results leaned toward the disappointing, accented by wild fluctuations.
Reactions from pundits, state education leaders, and the state’s largest newspaper, the Denver Post, ranged from somber to puzzled, but ideas about next steps quickly emerged: stay the course or even accelerate the reforms Colorado has been aggressively pursuing. Namely, the state should continue with hyper-accountability (more tests and consequences, even considering extending some form of accountability to the children) or market-based approaches (more charter schools, or even expanding to private school voucher schemes).
What is most troubling about the reactions of our state leaders and resident non-profit policy wonks is how completely disconnected their reactions and proposed solutions are from what is really happening in schools across our state.
How quickly we have forgotten that Colorado has cut education funding by over a billion dollars annually for the past four years. In many schools, resources fell nearly 20%, resulting in massive layoffs, pay freezes, and the loss of essential school resources like curricular materials and instructional supports for the state’s neediest kids.
All across the Centennial State, our teachers and principals were and are working to achieve more with less. If any of the so-called or self-proclaimed experts had thought to descend from on high and ask a classroom teacher, the answer to flat TCAP scores would have been plain.
In spite of this historic gutting of public education in Colorado, our educators – for the most part – held the line on statewide student achievement results. But instead of standing up for those who stood in the breach for our kids, Colorado’s educators received more blame and shame, more disruption and disparagement.
As our schools struggle to piece together and implement the blizzard of disconnected, often unfunded, and frequently nonsensical state reforms, we should ask: is it rational to expect any endeavor to become more complex and to produce better outcomes while the means of production are financially devastated?
Yet our state’s “no-excuses” leaders turn on their reality distortion fields and wonder why statewide scores are flat. Why aren’t our testing, evaluation, and market reforms – that brought such national attention and recognition to Colorado – working as planned?
The answer, quite simply, is that they’ve never worked anywhere at scale and the body of evidence to support these approaches is scientifically anemic and ideologically biased.
There are no high performing education systems in the United States, or anywhere in the world for that matter, that have achieved systemic and sustained greatness through the means Colorado now aggressively pursues.
Instead of working to de-professionalize education by cutting teacher wages, vilifying unions, and allowing practically anyone who isn’t a felon to become a teacher – the high performing systems have worked to make education a high status and very selective profession. There are no stories of mass shaming, firing, and disenfranchisement among those systems that have actually achieved sustainable greatness.
The best performing education systems on earth aren’t having discussions about opening more charter schools because they don’t have any. This is not to say we should eliminate Colorado’s charter schools; many of them do a fine job. It is to say that the work of genuine greatness requires extraordinary effort and execution put behind proven practices. Handing over the management of public education to some non-profit entity and calling it a charter school does not, by this action alone, make the education better and does not further the goal of system-wide genuine quality.
The best education systems on earth also aren’t discussing the privatization of their schools through voucher schemes. This is because they are focused on supporting and continuing to make their public schools even greater – instead of intentionally dismantling and disrupting them.
The best education systems are also judicious in their use of assessments. They test only at key transition points, relying on practitioner developed assessments that measure high level skills and concepts. Here in Colorado, our kids must take literally dozens of standardized tests over the course of their academic careers. Yet we can’t seem to let go of a single test because the theory of test-rank-punish as a means of improvement is far too ingrained.
Parents ask, “Why are we testing my child from February to May instead of teaching them?” Assessments are important; especially those that help educators tailor instruction to help kids learn. But the parents and the kids know – standardized testing is not the same thing as learning.
The problem with years of TCAP staleness starts and ends with the foisting of disconnected state-level reforms that have no basis in evidence. State-level policies that ignore and supersede the intricate art and science of instruction are too broad and generic to work, resulting in the unintended consequences of overloading schools with rules and regulations handed down without any funding to offset their administrative costs.
The Denver Post’s editorial about Colorado’s TCAP scores ended with a plea to continue the path our state is already on in terms of accountability and market-based approaches. According to the Post, we need to get these reforms fully implemented and give them time to work.
In the end, I expect the editorial board at the Post will get their wish. Colorado probably has too much ego, political capital, and careerism invested in these policies to change course now. But we should also expect many years of future editorials – all with an eerily familiar lament – wondering why, systemically, things just aren’t working out as planned.
July is upon us and summer is in full swing for educators across our country. While school is the farthest thing from the minds of many kids and families, the reality is the start of a new school year is only a few short weeks away and there is much to do between now and when students arrive back on campuses throughout the valley.
Many people believe a major perk of being an educator is getting summers off from work. While it is certainly true that educators get some down time in summer (that they deserve!), for most teachers and administrators the summer is not a completely work-free or stress-free time.
Most educators spend at least some time during the summer attending learning events like conferences or professional meetings. Many others turn into students in the summer, taking graduate coursework.
For my wife Sarah (a teacher) and me, this has certainly been the case. Just in the past few years, Sarah has taken classes at places like the Metropolitan Museum of Art, the Guggenheim, and MoMA in New York to become a better art teacher. My summers over the past few years were spent as a graduate student at Seton Hall University in New Jersey, where I completed my doctoral work. Both of us are proud to bring the knowledge these experiences gave us back to Eagle County for our students and community.
Here in Eagle County Schools, we hold teaching to be a profession and thus we encourage this kind of professional growth. We want all our students to be life-long learners and our staff embodies this character trait. Our district does provide a small tuition credit ($1,500) to help incentivize these efforts, but by far most of the costs are picked up by the educators themselves.
At any given time, we have educators pursuing their Master’s degrees, Education Specialist degrees, and even Doctorates. We also have educators taking coursework from colleges, universities, professional associations, and other educational organizations that is independent of a specific degree track.
But the ongoing learning and work of the educators during the summer only represents a part of what’s going on in our schools over the summer “break.” With hard fought dollars from this past legislative session, we are able to replace flat-panel monitors or install “smart-board” systems in our classrooms. We’ve also been able to get our teacher laptops and student computer lab machines all under warranty again. And, we’ve been working hard to improve the “curb appeal” of our buildings through grounds-keeping, landscaping, paint, and sidewalk work.
We’re also working hard to get ready academically for the next school year. While we still have a ways to go, I’m proud to say we were able to restore some of the staffing cuts in buildings that happened during the Great Recession. This means our kids in Eagle County Schools will get more support and individual attention.
We also had a banner year recruiting – drawing more candidates from selective colleges and universities. While I’m excited about this talented crop of hires for our schools, all new teachers need support. To meet that need, we’ve been working hard this summer to put in place orientations and mentoring supports so that every educator starting out in Eagle County is supported with good information and seasoned expertise.
Finally, we’ve been working hard to align our curriculum systems in math and language to internationally benchmarked expectations. Our kids can and should learn at the same pace and level as kids anywhere in the world. To meet that goal, we’ve been working hard this summer to get these foundational elements of math and reading aligned to high expectations and to think about how we can support all kids toward world-class expectations.
So, on behalf of all the dedicated and proud employees at Eagle County Schools, let me say “enjoy summer!” But know that in every public school in our community there is a buzz of activity and excitement about August. Our schools and the people in them are on the move and we are excited about what the future holds for our students and our community.
A version of this article appeared in the Vail Daily on July 9, 2014.
Yesterday, Bellwether Education Partners, “a national non-profit dedicated to helping educational organizations,” released a new report entitled “Genuine Progress, Greater Challenges: A Decade of Teacher Effectiveness Reforms” by Andrew J. Rotherham and Ashley LiBetti Mitchel. The report can be accessed here.
This report is national in scope, but popped up on my radar when Colorado’s Donnell Kay Foundation tweeted out a link to the report under the Colorado education policy hashtag “#edcolo,” which I review for state news on a regular basis.
After reviewing the report, I added some (admittedly cheeky) commentary on the report via Twitter. Specifically, I criticized it as pseudo research parading as empirical evidence. I also noted that no high performing education system has achieved greatness pursuing the strategies recommended in the report.
One of the authors of the report, Andy Rotherham (a known national education policy wonk), replied to my tweet, stating “When U actually read report & engage w/ what’s in it (rather than playing to crowd) we’ll be here @COJasonGlass @bellwethered.” Almost immediately, Donnell Kay (or whoever handles their account) favorited the tweet and Andy Smarick (another national policy wonk on education reform and a partner at Bellwether) retweeted it. *Profuse apologies for those unfamiliar with “twitter-speak!”*
Given that the report itself is a recycling and rehashing of the usual suspects and policy positions on educator quality, and that all of these individuals and groups have an extreme propensity for citing and hyping one another’s writings, I find the accusation that I’m the one “playing to the crowd” downright amusing!
But, I digress.
Andy did have the courtesy to send me a very respectfully worded email, asking if we could talk about the report and the issues therein and suggested that there was room for common ground. I sincerely appreciate the civility and spirit of that message and I do think Andy is a quality writer and good thinker. My critique of the report is in no way personal toward him or his co-author.
I do think that Andy deserves a fuller explanation of my concerns with his report, and I apologize for the abruptness and lack of depth in my tweets on this matter – such is the inherent drawback of using Twitter for complex conversation!
Rather than respond privately to Andy via email, I am choosing to critique his report via this public forum. The reason for this is that Andy and Bellwether have put forth this document in the public realm, ostensibly with the goal of influencing public policy when it comes to educator quality. As such, a critique of the report also belongs in the public realm.
So, in the spirit of respectful public dialogue and a commitment to a free-market of ideas (which I am sure Andy equally supports), below is my critique of the Bellwether report.
One last thing before I begin – I apologize for the free flowing form of my thoughts in the writing below. I am a working Superintendent and father and my time is precious. Forgive me if this lacks the flow and organization of a more professional piece.
AREAS of SUBSTANTIAL AGREEMENT
Instructional quality is of great importance – the Bellwether report makes this statement early and prominently and I could not agree more.
Teachers matter a great deal to student outcomes – with the qualification of “within school factors,” I strongly agree with this statement. The Bellwether report does acknowledge this qualification. To be clear, out-of-school factors actually matter more when it comes to student outcomes. This is not noted as an “excuse” for why our system of education cannot and should not be better; it is noted to say that one cannot reasonably expect to systemically and at-scale improve student outcomes if one ignores out-of-school factors.
The industrial union model has been, to a degree, a detriment to the teaching “profession” – While teachers’ unions adopted an industrial and confrontational approach to bargaining for good reason (low wages, discriminatory practices, inhumane working conditions) and have historically gained in these areas as a result, holding on to this model in today’s era is a detriment. Unions must evolve to be guardians of quality and of the profession. In my professional opinion and to the credit of unions, this transformation is underway in the United States – but it has been and continues to be a process.
Educator quality has a long and interesting history – The report notes that efforts to improve educator quality through mechanisms such as licensure and efforts to define “highly qualified” have been underway for several years. I would also add educator preparation program accreditation and prospective teacher testing as other levers, which are touched on in the report – if only briefly.
Pension reform is necessary – To which I would add two qualifications. First, this is not true in all states. Some states have over-promised and mis-managed their pension systems and created massive unfunded liabilities. However, other states have been conservative and pragmatic with their systems and they are quite sustainable. Second, we must be cautious about the motives and plans of those wishing to reform pension systems. While there are some who genuinely wish to shift the funds to public employees in the form of defined contribution plans and increase direct compensation, there are others who wish to “reform” pensions as a back-door way of de-funding public education and intentionally harming public servants. Similarly, we must also be suspicious of the motives of Wall Street firms who wish to destroy and privatize pensions so as to create opportunities for profiteering.
Personalize professional development – While I take a bit of exception that this must be in some way hitched to evaluation, to the degree that we empower and provide autonomy to our front line educators to determine and customize professional learning to their context and needs, we are in agreement.
Focus on recruitment – The best performing education systems in the world are damned selective about whom they allow to enter the teaching profession. Generally, this is accomplished through a combination of raising the prestige of the profession, raising the initial compensation levels, and treating the profession with reverence and respect. If the United States approached the teaching profession in the same way many high performing global systems do (and the way the best performing systems in the U.S. historically have), the thinking that we need to rank and fire people would diminish tremendously.
AREAS of SUBSTANTIAL DISAGREEMENT
A one-sided historical narrative – The report attempts to tell the story of educator quality in the United States. While this is indeed a worthwhile and interesting topic (at least in my judgment!), the report relies on a tired narrative of unions and comatose school administrators as the villains and education reform groups and their “get tough” leaders as heroes. How can anyone expect a historical review of educator quality to be taken seriously as a scholarly piece without even a mention of John Dewey?
Unions are the problem – As previously mentioned, this story needs a villain and teachers’ unions serve that role in this report. However, the highest performing education systems on Earth are (for the most part) highly unionized. In these systems, unions serve as professional guilds and important partners for educator quality. Using this report as yet another frontal attack on unions does not help us make the transition to that professional and collegial model. Through the lens of international benchmarking against the best systems, dismantling and disenfranchising the union does not seem to be in the playbook.
Evaluation is a mechanism for improving educator quality – This report repeatedly leaps to the conclusion that improving evaluation systems will improve teaching and improve student outcomes. This causal link has no empirical basis, and giants from the field of business management (notably Deming and Herzberg) have been telling us for decades that the practice is an ineffective means of improvement. Yet, the education reform movement has swallowed whole this approach of evaluate/rank/punish as a mechanism for improvement, and now we have national education policy built on this unproven and potentially detrimental assumption.
Achievement gaps exist; and teachers are the answer – As discussed previously, teachers are really, really important and on this point we agree. Yet, by this report’s citations teacher effects account for 7% or 8% of the variance when it comes to student outcomes. Much of this variance, we know, comes from societal issues relating to student poverty. Any systemic effort aimed at closing the achievement gap must include a commensurate systemic effort at mitigating the effects of poverty on learning.
“The last few years have produced real progress on teacher effectiveness and more generally in American schools…” – This statement comes directly out of the report and makes the classic logical fallacy of “post hoc, ergo propter hoc.” More simply, Y followed X, so Y must have been caused by X. In spite of the constant attacks and shaming of the American education system in an effort to beat the drum of reform, American schools are better performing now than ever and achievement gaps are narrower than ever. To make any sort of claim that this improvement (which has been underway since the 1960s) is the result of relatively recent “educator effectiveness” reforms is bogus. While often maligned as unresponsive and overly bureaucratic, the American education system has actually been very adaptive to the shifting demands our society has placed on public education. Rather than a system which has been resistant to change, the American education system has been very successful at meeting change. See Clayton Christensen’s Disrupting Class for a lengthy discussion on this point.
Removing ineffective educators is the key to large scale improvement – I am unaware of any organization or system, public or private, which achieved systemic and sustained greatness via the creation of large scale, complex and Rube Goldberg-ish attempts to rank and fire employees. Even in the so-called cut-throat world of American business, firing people is a relatively rare occurrence. Focusing on firing people is more likely to create alienation and fear in an organization than large scale improved performance. This is not to say that individual accountability isn’t important – some people need a lot of it! Rather, it is to say that we have other higher leverage strategies more likely to produce the outcome we want, such as more effective recruiting and empowerment of our professionals.
Performance-based compensation is a key element for improving educator quality – The report does acknowledge that the research is “mixed” on this point, but I’d more characterize the evidence to indicate that performance-based compensation has no impact on student outcomes. I’d urge Rotherham and Mitchel to more closely read the Vanderbilt POINT study, which they do reference. The “no effect” finding should come as no surprise. Researchers like Frederick Herzberg and Deci & Ryan have clearly told us that the most important aspect of a compensation system is that it be adequate and fair, and that money is not a strong “motivator” for quality. The simple behavioristic approach of offering merit pay to educators so they will work harder for kids has no basis in evidence and is professionally insulting.
Transparency and choice will lead to improved teacher preparation – This statement comes right out of the report as the authors recommend creating more of a free market for teacher preparation, allowing more groups to prepare teachers, and removing barriers to entering the profession. Rather than a recipe for quality, this is a recipe for increased variability. Higher performing education systems actually restrict educator preparation institutions and demand higher quality to get a systemic impact. No high performing system has used a Teach for America or “let a thousand flowers bloom” approach to educator preparation.
Traditional education “interest groups” have too much power and are the problem – While it is clear that the authors did put some considered thought into this report and their writing, this claim borders on laughable and is, at a minimum, self-serving. The traditional interest groups (among which I would include those groups which represent teachers, school boards, and school administrators) are the only groups representing the large scale voices of practitioners in the field. It is groups like Bellwether (and Donnell Kay here in Colorado, for that matter) who have worked to shove out these traditional groups and the voices of practitioners and replace them with a parade of ideologically-minded nonprofits who are all advocating for some vision of an American education system built on tests and punishments, the deconstruction of public schools, and the destruction of community-based decision making. The traditional interest groups are not the problem; the hijacking of education policy by big money philanthropists and their nonprofit fronts is precisely the problem.
I’d like to again thank Andy Rotherham for calling me out on my Twitter criticisms of his report. It is a lengthy piece that deserved more attention than 140 characters could provide. I hope this blog posting makes my concerns with the report more clear and I look forward to engaging with Andy (or others) in the spirit of open and respectful discussion.
I can’t think of anyone who likes to take tests. The mere mention of acronyms like ACT and SAT conjures up cold sweats and bad memories of hours sitting in auditoriums or school cafeterias feverishly coloring in bubbles in a state of nervous anxiety. Yet, these experiences have become such a foundational element of the American education system that they are almost a ritualistic rite of passage, or perhaps a form of systemic hazing.
While there aren’t many people who like tests, I also can’t think of an educator worth their salt who doesn’t place high value on valid, reliable, and timely assessment data.
A quality educator uses testing data, tightly aligned to the curriculum, to see how students are progressing in their mastery of course content and skills. The quality educator then adapts the instructional technique (differentiation), or lines up additional supports (specialists or assistive technology), to help each student reach the goal.
FORMATIVE AND SUMMATIVE TESTING
Testing for the purpose of adapting instruction and providing support is known as formative assessment. It is a hallmark of all high performing education systems.
Paradoxically, most of the tests mandated through state or federal laws (like No Child Left Behind) are not formative in nature and have almost no instructional value. These tests are summative – they occur at the end of instruction to measure what the learner retained.
These summative tests are given to students in subjects including reading, writing, math, science, social studies, and English Language proficiency. They happen near the end of the school year, and it takes months before we get the results. This makes summative tests akin to an autopsy – they give us great information about what happened, but are woefully late to do anything to assist the patient.
ASSESSMENT IN THE UNITED STATES & IN HIGH PERFORMING SYSTEMS
Interestingly, there is only one system in the world where every student is tested at the end of every year using a machine-scored, multiple choice format: the United States.
Contrary to popular belief, high performing global systems (including Finland) do have tests, but these tests are formative in nature and are used to direct instructional decisions and provide learning support.
When high performing systems do give end of year summative tests, they are very different than the machine scored, bubble-sheet forms we see in Colorado.
Instead of testing every student every year, high performing systems test at key “gateway” points in a student’s progression. These assessments are given at the transition from elementary to middle school, from middle school to high school, and on exit from high school.
High performing systems also tend to use tests which require students to demonstrate skills like writing, formulating and defending a position, synthesizing complex information, problem solving, and critical thinking. Classroom teachers (instead of machines) frequently score these tests, so that feedback on how instruction might be improved immediately gets to where it can do the most good.
TESTING FOR ACCOUNTABILITY VS. TESTING FOR INSTRUCTION
In Colorado, we test every student every year from grades 3 through high school in a variety of subjects. One driver behind this approach is so that we can amass data to identify, shame, and punish schools and teachers, and occasionally reward those who get high test scores.
No high performing system in the world uses such an approach as a strategy for quality.
Instead, high performing education systems are judicious about their use of testing and insist on clear and immediate connections to teaching and learning.
THE PATH AHEAD FOR TESTING IN COLORADO
Colorado is in the process of redesigning its system of assessments to move away from those scanned bubble sheets covered with #2 pencil lead. It is replacing those tests with computer-based tests, which are intended to measure higher-order thinking skills instead of multiple choice test accuracy.
The tests in English language arts and math are called PARCC (Partnership for Assessment of Readiness for College and Careers). They are aligned to the internationally benchmarked high expectations embedded in Colorado’s Academic Standards and the Common Core.
These efforts to improve the assessments and to align to high expectations are the right work. However, the PARCC test is still a summative grade-by-grade, every student every year test that is then hitched to the state’s blame and shame system of accountability for schools and teachers.
We should applaud efforts to improve the state’s assessment system – but we should know by now that the era of hyper-testing and punishment ushered in under the federal No Child Left Behind Law isn’t working for our kids, schools, or communities.
THE LOCAL CONNECTION
Our schools are obligated by law to participate in these big data state-testing schemes. However, we are putting our focus on formative assessments that link closely to our curriculum and serve to improve instruction.
While we have to take part in the big government solutions imposed on us by Washington, D.C. and our own state legislature, we can choose to put our energies into formative measures that will actually be of value to our students. The by-product of that choice is improved student outcomes.
Note – this article originally appeared in the Vail Daily
The Vail Symposium, a great civic organization we have here in Eagle County dedicated to facilitating key public policy discussions, had recently scheduled a tremendous event on education to discuss the role of unions.
The event was to have featured Randi Weingarten, President of the American Federation of Teachers, and Hanna Skandera, state education chief in New Mexico, squaring off in a sort of edu-celebrity cage match on unions.
Because of the nature of our resort community, it is not uncommon for our valley to get very talented speakers, musicians, and artists to visit. Still, we were excited about this conversation because of its focus on education policy and because it would have allowed our community to engage with two key national figures in Weingarten and Skandera.
Regrettably, both speakers cancelled. However, the Symposium was kind enough to allow me to fill in and facilitate a larger discussion with the community on education policy. I’m looking forward to the event and information on it can be found here.
Our school district, Eagle County Schools, did provide a brief for the prior speakers to give them the local context of our district and its relationship with our union. This document can be accessed here:
To sum up, we are working to pattern the approaches we use in Eagle County after those strategies that have been proven effective at systemically improving student outcomes in other high performing education systems. In these high performing systems, we see relationships that are collaborative, healthy, and respectful.
For us, it calls into question the wisdom of any reform strategies predicated on disenfranchising or dismantling unions. In the high performing systems we have studied, union-busting just doesn’t seem to be in the playbook.
Note that we do not make the claim that unionization has a causal relationship with high performance. There are certainly plenty of education systems with strong unions that are not high performing. However, in those systems which have achieved the kind of long-standing and systemic success we are seeking, we find none of them have gotten there through expending energy on an adversarial relationship with the union.
As always, I look forward to and appreciate any reactions.
Superintendents representing 99% of the public school students in Colorado sent a letter to state elected officials and Governor Hickenlooper today. The letter is straightforward in its request. I present it here for your consideration and distribution: LetterToGeneralAssembly
Today, on Twitter, I asked some critical questions about an opinion piece the Honorable Rep. Jared Polis wrote for the Denver Post. You can read the article yourself, but the central claim of Rep. Polis’ argument is that “public school choice is an asset to improve all schools.”
I’ve written before that I’m not an opponent of school choice. However, I do question whether school choice policies have the capacity to actually lead us to system-wide improvement and, if school choice isn’t carefully overseen, that it can lead to a re-segregation of our schools – effectively returning us to an era of “separate but equal.”
I asked Rep. Polis (and a non-profit called “A+ Denver” which claims to “advocate for the changes necessary to dramatically increase student achievement in public education”) some questions about school choice and its ability to really “improve all schools.” I’ll put these questions here, and also provide some answers based on the evidence.
Question 1 – Which high performing global systems have used choice and competition as drivers for greatness? Answer – no education system that leads the world’s performance league tables has used school choice and competition as a driver for greatness.
Question 2 – Does school choice improve all public schools? Answer – there is no peer reviewed, journal quality evidence to support this claim.
Question 3 – Are we overselling school choice as a policy for large scale improvement? Answer – given that no high performing system has used this approach, and we have no quality evidence to support this claim, I’d deduce that we are overselling this policy, if the goal is that all schools improve.
From Rep. Polis, I got the typical imperious silence one should expect from a Member of Congress. “A+ Denver” did respond with another statement/claim, saying “school choice combined with performance management will have an impact on the largest school systems.” To which I again say: evidence, please.
Enter Rich Wenning
Rich Wenning is the current Executive Director at BeFoundation, a nonprofit purportedly working to bring about “sustained and dramatic improvement in the educational outcomes of disadvantaged students and the vitality of their communities.”
Let me say that I make no personal criticisms of Rich or his organization. While I admit I don’t know a lot about them or the strategies they use, BeFoundation has a wonderful purpose statement and I applaud any group that champions better services for students in poverty. Also, Rich and I both spent some time at the Colorado Department of Education, though our tenures did not overlap. State agencies are incredibly tough places to work, and I commend him for the work he did with the Colorado Growth Model website – although the Growth Model doesn’t take into account the error present in all student assessment data, which is a serious methodological flaw, in my professional opinion.
Rather than address any of the questions I raised, Rich chose to attack my school district, Eagle County Schools, using the Colorado Growth Model.
In my experience, I’ve noticed that when someone goes on the attack when a critical question is asked, it is an indication that they recognize that there is some truth or a painful point in the question that they are trying to deflect. But since Rich and I didn’t fully explore this notion (and Twitter certainly has its limitations!), we’ll let that issue go without further examination.
In his attack, Rich also used data from before I was even the Superintendent in Eagle County, but that is another matter as well.
For the sake of discussion, let’s explore Rich’s attack and the point (I think) he was trying to make.
Rich compared Eagle County’s growth results to those of Denver Public Schools. According to the way-cool bubbles on the growth model, DPS’s results generally outperform Eagle County. To this, I’d say “congratulations” to DPS! It’s great they are making progress and it’s additionally great news because they are such a large district.
I think Rich was trying to make the point that DPS’s results were higher because they have school choice. However, there are a great variety of school choice options in Eagle County as well. According to a CDE report on charter schools, about 12% of students in Denver are in charter schools. In Eagle County, about 20% of all students are in either charter or private school options. Since Eagle County and DPS both have school choice options, can we really make the inference that school choice is driving the results? I think Rich is generally a smart guy, based on his successful career and many accomplishments – but this seems like a pretty basic logical error.
Also of note, Chalkbeat Colorado did a great job covering the heartbreaking story of Denver’s Manual High School and how, despite years of “no excuses” and other disconnected/disjointed education reforms, little real improvement had been made.
I wonder, Rich, how can this possibly happen given Denver’s myriad of school choice options? Aren’t all schools supposed to improve as a result of school choice? Shouldn’t choice and competition and the supposed open market for schooling have pressured Manual to get better? Could it be that school choice facilitated “white-flight” that may have actually exacerbated the poverty-based problems Manual continues to struggle with? I don’t know the answers to these questions, but I’m hoping you do, Rich.
Rich, Eagle County is not a perfect school system. But we did have one of our two comprehensive high schools recognized by U.S. News & World Report as one of the top 10% in the United States. And our other high school produces Boettcher Scholars and puts a number of kids into top colleges (including Ivy League schools) every year. We even have a ski and snowboard academy that is a public school and which put four current or former students in the Olympics. But we don’t have a story like that of Manual High School, Rich. Somehow, despite all our shortcomings, we’ve been able to keep that kind of failure from our community and our kids.
Rich, like all schools, we have students who struggle. But we are working very hard, Rich, to build not just a good system – but a great system, a world-class system. We have a great plan, Rich, and we are proud of it, we are excited about it, and we are executing it. I’d love to have you read our plan and think about it too, Rich – we’d love to have your feedback in helping us become a great school district!
So, Rich, please do resist the urge to make unfounded claims about school choice being yet another “silver bullet” that will be the cure-all for schools. Such claims are misleading to the public and to families. I know you are a data guy, Rich, and the evidence just doesn’t support that claim. No matter how much you (and others) may say it, believe it, and want it to be true – that just doesn’t make it so.
What is true is that the work of building a great school is really, really hard work and it doesn’t matter if you are a public, charter, or private school. Genuine greatness requires a focus on instruction, it takes being supportive and respectful of great teachers, it takes working hard to customize instruction to fit students, and it takes intensive efforts to mitigate the effects of poverty as early and as aggressively as we possibly can.
Rich, though you might feel defensive, try hard not to take shots at us. The people in our schools are giving it all they’ve got in a genuine effort to be great. We will get tired, so we need people like you cheering us on and supporting us.
So, Rich, we at Eagle County Schools aren’t perfect. But, we are trying really, really hard to be better – because we love our children and we love our community and we want wonderful outcomes for both of them.
This exchange was probably more than you expected! I do appreciate your engaging with me and I look forward to your reactions and thoughts, Rich.