

Dear Senators Bennet and Gardner,

I am writing to urge you to reject the appointment of Betsy DeVos as the next Secretary of Education.

Ms. DeVos is entirely unqualified for the role, having few real accomplishments of her own besides those brought about by the wealth of her husband’s family and their purchased political influence.

Ms. DeVos also has no substantive background in education besides serving in an advocacy role for school vouchers and other privatization schemes. These approaches are designed to deconstruct public education, a foundational institution for our democratic republic supported by the founding fathers.

Even in the area of school choice, ostensibly the reason for her nomination, Ms. DeVos’ approach misses the mark.

Her record of unregulated, low quality school choices in Michigan has not only decimated that state’s public education system, but left in its wake a mish-mash of low performing and profiteering educational operations.

Finally, a core tenet of education policy is that such decisions are best governed at the local and state levels. With this nomination, it is clear President-Elect Trump intends to move forward on a campaign promise to push a $20 billion school choice plan on states, though it is less clear how this would be funded.

Moving this effort forward in any form would be a gigantic interference with state and local control for those states willing to jump through the hoops in order to get the federal dollars in this “Race to the Bank” model.

Concomitantly, it would mean those states refusing to participate in such a plan would effectively be sending their federal education dollars to private schools in other states.

There is certainly a place for school choice and private schools in our nation’s education system, but we should resist ideologically driven efforts to dismantle public schools in pursuit of politically motivated goals.

Thank you for your time and attention to this important matter.

Kind regards,

Jason E. Glass, Ed.D.

Superintendent & Chief Learner

Eagle County Schools

Opposing Forces

Last week, a policy fight related to how struggling students should be counted and used in rating schools broke out at the state level, pitting education professionals on one side against education reform and civil rights groups on the other.

The heart of the argument was technical and wonky, but it provides some insight into, and a preview of, the fights ahead as Colorado (and other states) decide how to navigate a new federal landscape that allows much more state-level flexibility.

In this particular case, the issue involved the state accountability system – which is used to keep track of how students are doing and then acknowledge or punish schools and districts according to the results.  The old federal law, No Child Left Behind (NCLB), required fairly detailed reporting on different student “sub-groups,” such as race, disability, and language status.

The politics on NCLB always made for strange bedfellows. Republicans liked the testing and accountability provisions, and civil rights groups liked the detailed reporting for minorities and types of students who have traditionally struggled on exams.

The problem, at least according to education professionals (like teachers and school administrators), was that the NCLB system required schools which serve the most diverse and at-risk students to be held to a much higher level of accountability than those whose student bodies are less diverse, and that a single student who failed to meet the “proficiency” designation (or failed to make growth) on the test could be counted multiple times against a school.

For example, let’s say a school has a student who is Hispanic, has a disability, is learning English, and qualifies for Free/Reduced Lunch (a measure of student poverty) and this student failed to reach proficiency and growth targets.  Rather than just being counted against the school once, this student would be counted against the school four times – once for each of the subgroups they fell into.
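The arithmetic behind this multiple-counting complaint is easy to sketch in code. The snippet below is purely illustrative – the field names and counting functions are invented for the example, not drawn from any actual accountability formula:

```python
# Illustrative sketch (not an actual accountability formula): one
# non-proficient student can count against a school once per subgroup
# under an NCLB-style tally, versus once under an unduplicated count.

# Hypothetical student record; field names are invented for illustration.
student = {
    "proficient": False,
    "subgroups": ["Hispanic", "Disability", "English Learner", "Free/Reduced Lunch"],
}

def nclb_style_count(students):
    """Count each non-proficient student once per subgroup they belong to."""
    return sum(len(s["subgroups"]) for s in students if not s["proficient"])

def unduplicated_count(students):
    """Count each non-proficient student exactly once."""
    return sum(1 for s in students if not s["proficient"])

print(nclb_style_count([student]))    # 4: counted once for each subgroup
print(unduplicated_count([student]))  # 1: counted once overall
```

Under the workgroup's proposal, the school would carry the single count while the per-subgroup breakdowns would still be published for transparency.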

Former Colorado Education Commissioner Robert Hammond convened a statewide workgroup to study the state’s accountability system and this group recommended changes where the hypothetical student described above would only be counted against a school once, though data on all the different subgroupings would still be made publicly available.

Education professionals have long cried foul about the state’s accountability system and how it unfairly targets and shames schools serving the most at-risk student populations; this multiple-counting issue is part of that problem.

The coalition of education reform and civil rights groups protested strongly against the proposal to only count these students once and successfully lobbied the state board of education into backing away from it.

The heart of the disagreement stems from how strongly these reform and civil rights groups feel about test-based accountability.  Their argument might be summarized as follows: If we test all students against high academic expectations, publicly report those results, and then establish firm consequences for schools failing to succeed – then our education system will improve and all students will get the supports they need.

This theory underlies the entire testing and ranking approach that was baked into NCLB and that the country has been following for almost 15 years.

Education professionals have long pushed back against the NCLB accountability-driven approach, countering with a different theory.  To summarize that thinking:  If we provide high quality instruction, engage the learner, support the educator, and mitigate the damaging effects of poverty – then our education system will improve and all students will get the supports they need.

While I’m admittedly oversimplifying, note that the desired ends between these groups are (basically) aligned, but the approaches to achieving this result differ dramatically.

How this argument has played out is important because it portends an even greater conflict looming for the state.  In late 2015, NCLB was replaced with a new federal education law called the “Every Student Succeeds Act,” or ESSA.

ESSA provides states much wider latitude to determine things like testing, accountability, and what punishments would be handed down to struggling schools.  The question now is whether or not our state will actually use any of that latitude.

Looking ahead, I expect we’ll see a strong push from education professionals to significantly revise the NCLB-era accountability system under which the state currently operates.  I expect we’ll also see a similar strong push from education reform and civil rights groups to make sure nothing changes.

Of course, what is needed is a reasonable and fair compromise.  We do need to make adjustments to the state accountability system which unfairly blames and shames schools serving high concentrations of diverse and impoverished students.  We also should maintain a transparent system of accountability that both pressures and supports underperforming school systems to get better.

I’d like to say I’m optimistic – but I’m not.  In an all-too-familiar refrain, years of bitter argument on this issue have divided and polarized both sides, making a compromise path difficult to find.  In addition, the state agency naturally poised to lead this discussion (the Colorado Department of Education) is a wounded and understaffed bureaucracy, now on its fourth Commissioner in a year and still in the wake of several high-level resignations.

At this point, no one is quite sure what will happen.  However, everyone is certain we’ve got our work cut out for us as a state.

Note: A version of this article appeared in the Vail Daily on June 15, 2016.

Tightrope Walker

Photo courtesy of Natalie Curtiss via Flickr

There are a number of testing bills being considered by the Colorado Legislature this year.  Some take significant steps to roll back the testing system in the state, while others exist merely to create the appearance of doing so.

At the same time, another bill (SB 223) clarifies that parents have the right to refuse to have their children take the test, commonly referred to as “opting out.”

Anti-testing advocates and groups argue that testing in Colorado has gone far beyond reasonable levels and that parents need legislation to both roll back the tests and to protect families who refuse to take the exams.  This side is made up of a strange mix of parent advocates, teachers’ unions, and individuals on the far right who are opposed to government over-reach.

The other side of the chessboard lines up testing proponents and a slew of well-funded “ed reform” groups.  Supporters of the tests argue that evaluating teachers based on test scores and ranking schools using these results are “innovations.” They claim that without these measures, the accountability and choice reforms the state has worked to put in place over the past few years will come abruptly undone.

It’s amazing how quickly the rhetoric changes.  Just a few years ago, many of those on the anti-testing side of this debate were labeled “defenders of the status quo” by education reformers.  Now the shoe is on the other foot, with the ed reform camp scrambling to protect the laws and tests they have put in place since 2010.

Without making any judgments, the arguments advanced by both sides are essentially correct.  Colorado testing has gone off the deep end in terms of the number of tests students are required to take, and there does need to be some kind of mechanism for legally handling the exponential growth in the “opt out” movement we are seeing in some schools this year.  On the other side, removing the assessments would mean a rollback, and a sort of repudiation, of the teacher and school ranking systems many of our current ed reform laws were designed to create.  Additionally, a fundamental theory of the school choice movement is the creation of a school “marketplace” where parents can make educational decisions informed by data – test data specifically.  Without that data to drive it, this reform loses some steam.

In my professional opinion, the right policy (at least at this point) is to move testing levels back as close as possible to the “federal minimum” requirements under No Child Left Behind.  This is really as far as the state can go without putting federal education dollars in jeopardy – or, at minimum, without forcing the state into a gigantic game of chicken with Secretary Duncan.  Changing those federal minimums is something we, as a country, need to take a critical look at as well – but that’s a whole other subject!

The “opt out” movement is merely a symptom of a larger root cause: over-testing.  Putting in place some kind of legalized opt-out mechanism just puts a Band-Aid on the larger problem and will not allow the state to move past this issue.  Unless the legislature reduces the number of tests in a meaningful way, the “opt out” movement is going to persist and ultimately undermine the usefulness of all state testing data.

If we put aside the table-pounding voices from the anti-testing side, as well as the big-money, slickly coordinated public relations campaigns from testing proponents, the challenge remaining for the legislature is finding a tolerable equilibrium in testing implementation.  Given the “all or nothing” stances the individuals and groups involved seem to be taking, this is no easy task.

Fundamentally, the legislature has to reduce the number of tests to a point where “opt out” numbers fall back to their historically low levels.  But they can’t go too far in that direction, or they risk the education reform groups continuing to push for more testing and measurement.

At the end of the legislative session, I expect the legislature to find that equilibrium position that most people in the state will accept . . . but that neither the anti-testing nor education reform groups will find completely satisfying.  While that is likely to be the ultimate outcome, don’t expect the process to be anything less than a spectacle.  Whatever happens, this is going to be fun to watch.

*A version of this article appeared in the Vail Daily on April 15, 2015.


In a previous post, I mentioned that I had the opportunity to visit with the HB 1202 committee to discuss assessment.  I followed Grant Guyer, Denver’s Executive Director of Assessment, as well as representatives from Harrison District 2.

So I could get a feel for the HB 1202 group, I arrived early to listen in on the conversation.  I was impressed with the learning and reflective stance I heard the committee members take.  Rather than asserting or defending positions, the committee members were (for the most part) asking really good questions and thinking together.

The contrast between the thoughtful and open approach of the committee and the advocacy-oriented approach Denver took was jarring, at least to me.  DPS came in with a clear agenda: influence the committee to (basically) preserve the status quo when it came to state accountability testing.

Because DPS chose to take such a forceful position, I feel it is appropriate that the position be critiqued in a public format so that their thinking can be fully vetted.  Clearly, DPS’s intent was to influence public policy in a strong way.  As this policy impacts every public and charter school in Colorado, examining their claims and thinking is important.

The overarching DPS position is that they (the administration at least) do not support “specific aspects of the shift to minimum federal (assessment) requirements, primarily due to the impact on high schools.”

I’ve attached the report that Grant gave here (DPS Assessment) so readers can review it for themselves (apologies for my scribbles on the scan).  However, here are some of their claims and my critique:

Claim #1 – “Standards implementation could be jeopardized as there would not be a consistent, well-constructed assessment to measure of (sic) student performance at the end of a given grade/course.”

What evidence exists to support the claim that standards implementation would be jeopardized if there were no standardized, summative assessments at the end of each grade?  Some of the best performing education systems in the world do not test core subjects at the end of each grade, yet they seem to be able to consistently teach to high standards.  Further, what evidence or assurances do we have that a machine scored, large-scale, summative assessment is necessary in order for a classroom teacher to teach to high standards?  If we are to subject literally hundreds of thousands of Colorado students to an assessment (spending millions in taxpayer dollars to do so), should not the purpose and impact of that assessment be well understood and proven?

Claim #2 – “This would reduce the amount of formal data available to accurately identify where shifts in instruction are needed.” 

Large-scale, machine-scored summative tests are woefully inadequate for the purpose of “shifting” instruction.  Primarily, these tests are for accountability purposes, not for guiding formative instructional practices.  This is not to criticize the tests themselves – they were not primarily designed for this purpose.  The thinking that summative TCAP, CMAS, or PARCC test results will drive effective and responsive classroom-level shifts in instruction is hopeful theory with a vacuous evidence base.

Claims #3 & #4 – “Less information available to track student progress toward college and career readiness,” & “Less information available for families to make informed decisions about which high schools are the best options for their children.”

The DPS position assumes (wrongly) that an assessment system at federal minimums (or even fewer assessments) would be devoid of student assessment information in those areas where there is no mandated accountability exam.  Clearly, DPS’s approach to improvement is founded on test-based accountability and school choice.  In theory, for those two approaches to work you need assessments to shame and punish and big data to create a more perfect school choice “market.”  Nothing would preclude DPS from heaping all the assessments they want on students to feed their theory of change.  However, if we did not mandate such measures we would not be forcing every other school district in the state to follow DPS’s logic model.

Claim #5 – “Eliminating these data points at the high school level could shift the accountability system to focus too much on status.  This distinctly disadvantages urban districts that have students with low levels of preparedness.”

DPS assumes (wrongly) that whatever growth, accountability, and accreditation system we currently have in place would simply continue, but without some high school assessments.  The current accountability framework was designed with one set of assumptions about available test data.  In a world with fewer accountability tests, a different model would need to be designed.  This different model could conceptualize growth in a number of different ways and could also recognize student poverty demographics and “preparedness” in different ways – and it should.  Here, DPS just wrongly assumes we would continue the same system we’ve been operating under.  Further, the report states that “DPS strongly values growth data.”  That’s great!  But if this is indeed true, there is no basis to believe DPS could not continue to assess and measure growth without having a mandated state test in place.  In fact, dollars currently used for large-scale assessments could be provided directly to districts for the very purpose of locally determined measures and analysis.

Claim #6 – “Less external data available to assess student growth for teacher evaluation.”

Besides the fact that there is no credible, peer-reviewed evidence that using student testing data to evaluate teachers actually improves instruction, and that no high performing system on earth uses this approach, the DPS claim is also flawed.  As previously discussed, if DPS wishes to have machine-scored, large-scale assessment data to evaluate its teachers, there is no prohibition on them doing just that.  The DPS claim seems to imply that without this standardized testing data, our statewide effort to evaluate teachers using assessment data is in peril – but we already have some 70% of teachers in untested subjects and grades.  It is not clear (at least to me) that the presence or absence of summative statewide assessment data does much to help us solve the significant technical questions related to using testing data to evaluate teachers.

Claim #7 – “…districts would have to take on the additional burden of creating/purchasing products to ensure that schools are meeting student learning expectations (and) the development of local growth measures to assess the performance of schools and teachers.”

As previously discussed, dollars currently appropriated for state-level accountability assessments could, at some level, be re-purposed to districts for locally determined and more formative measures, so it’s not clear that there would be an additional burden.  Further, there are a number of growth measures already available for districts to use (student growth percentiles, value-added measures, catch-up/keep-up systems), so it is also not clear that a district would need to “develop” these measures.
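To illustrate one of those off-the-shelf options, here is a toy sketch of a student growth percentile: rank a student’s current score among peers whose prior scores were similar. Real SGP implementations use quantile regression rather than the crude 10-point banding used here, and all scores below are made up:

```python
from bisect import bisect_left

def toy_growth_percentile(prior, current, cohort):
    """Toy student growth percentile: percentile rank of `current` among
    students whose prior score fell in the same 10-point band.
    Real SGP models use quantile regression; this banding is a simplification."""
    band = prior // 10
    peers = sorted(c for p, c in cohort if p // 10 == band)
    # Percentile rank: share of band peers scoring below this student's current score.
    below = bisect_left(peers, current)
    return round(100 * below / len(peers))

# Made-up cohort of (prior, current) score pairs, including the student of interest.
cohort = [(42, 50), (45, 55), (41, 48), (48, 60), (44, 52)]
print(toy_growth_percentile(45, 55, cohort))  # 60
```

The point is only that growth metrics like this are well-documented and computable from local data; a district would be selecting and calibrating a measure, not inventing one.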

Conclusions

Again, DPS is following a theory of change for improving their organization built on test-based accountability and school choice.  While refraining from a critique of these two approaches to school improvement, I will just say that these are not the only two methods by which a system might build great schools.  In fact, the best performing school systems (based on PISA results or equating studies) were not built using these models.

Regardless, it is up to the community of Denver to decide which model is most appropriate for their community and then hold their school leaders accountable for the results.

The larger problem with DPS’s jarring advocacy stance with the HB 1202 committee is that it effectively forces that theory of change on every other school organization in the state – whether or not we want it, and regardless of whether there is any evidence to support it.

Of note, in the course of these discussions I have heard no one arguing for the complete abolition of testing and accountability.  The better question is how we can have an accountability system that is as efficient and balanced as possible, without over-burdening students and schools with testing.  A review of the testing approaches in high performing global systems reveals that such a system can be effectively implemented with far fewer tests than we currently use in Colorado.

I encourage further dialogue and discussion on this issue and welcome a response from Grant Guyer (a very nice person, based on my brief interaction with him) or others from DPS. For convenience, I have also posted my presentation materials to the HB 1202 committee for a similar critique, if anyone feels so inclined.

I was recently honored with the brief opportunity to speak to Colorado’s HB1202 Task Force, which is studying the state’s assessment system and responsible for suggesting changes to the Colorado Legislature for consideration in the upcoming legislative session.

I focused my remarks on the importance, process, and evidence on formative measures.  I also spoke to the differences between accountability assessments in the United States (and Colorado) versus other high performing nations or municipalities.

The memo I prepared for the group can be accessed here: HB1202 ECS Flyover

The entire text is also provided below.  I welcome observations, comments, questions, or critique.

Memorandum

From:  Jason E. Glass, Superintendent & Chief Learner

To:      HB 14-1202 Task Force

Re:      Formative Assessment & a Flyover of Assessment in Eagle County

Date:   9.15.2014

Purpose

The purpose of this memorandum is to briefly orient the members of the HB14-1202 Task Force to the large-scale theory of change, an instructionally focused approach to assessment, and some of the formative measures employed in Eagle County Schools.  For clarity, this memo will focus on measures whose chief purpose is for improving instruction, as opposed to measures whose chief purpose is accountability.

The Instructional Core

Eagle County Schools uses an “international benchmarking” approach to school improvement.  That is, practices are drawn from comparative studies of high performing education systems, both within the United States and abroad.  In addition, the organization focuses on practices which have the support of a peer-reviewed body of evidence.

As such, the “in-school” theory of change rests on three major and interrelated tenets which feature prominently in every high-performing educational system.  Liz City and Richard Elmore (2009) capture these three elements in their discussions of the “instructional core,” or the relationship between the teacher and student in the presence of content.

Instructional Core

Important to City and Elmore’s framework, there is an emphasis on the relationship between the three components.  One element cannot change without impacting the other two.  For example, we cannot effectively raise the quality or “rigor” of the content (or standards) without also adapting the instructional approach of the teacher and the engagement level of the student.

Assessment through the Lens of Instruction

Formative measurement is an essential part of bringing the instructional core to life.  For the teacher to effectively reach and engage every student in learning, that teacher must understand the level of current content performance or knowledge of their students.  The teacher must deliver high quality instruction and then determine if that instruction had the desired impact on students (i.e. improved content knowledge or skills).  Almost invariably, some students will require additional supports or a differentiated approach to reach the content or skill standard.  So, the teacher must apply some intervention, customized to the student, and then check again to see if that intervention had the effect of raising the student to the performance standard.

The “response to intervention” or “response to instruction” (RtI) model provides a useful framework for understanding this process.

RtI

Well designed and employed formative assessments are ‘part and parcel’ of the RtI process.  All students should receive a universal screen or benchmark assessment as part of the general education curriculum.  As there may be some time (days, weeks, or months) between the administrations of these assessments, they can be referred to as long cycle.

These long cycle results will reveal some students who struggle to meet the standard in the general education environment, and who should then receive an intervention customized to their needs.  Determining the appropriate intervention often requires the use of a diagnostic test to pinpoint the precise area where the student is struggling (e.g., phonics vs. phonemic awareness).  Then, once an intervention is applied, the determination as to whether the intervention is working should be made through a progress monitoring assessment.  As the time between these assessments is less than at the universal level, they are sometimes called medium cycle assessments and may be administered every few learning sessions or weeks (or longer, as the team of practitioners determines).

Even after a targeted intervention, some students will require intensive support.  These students will receive diagnostic and progress monitoring even more frequently – perhaps multiple times over the course of a lesson as the teacher iterates to determine what the barrier to learning is and whether it is being mitigated through supports or other interventions.

The RtI approach is based on the principles of a “high reliability system” (see Eck et al., 2011), meaning generally that as the probability of failure increases, supports/interventions and monitoring also increase.  The goal is to determine which students are struggling, and why, as quickly as possible and to intervene so that the student meets the performance standard.
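A minimal sketch of that high-reliability logic – as measured risk rises, the monitoring cadence tightens – might look like the following. The tier names, cutoffs, and cadences are invented for illustration and are not actual district or state policy:

```python
def monitoring_plan(benchmark_score, proficiency_cutoff, intensive_cutoff):
    """Map a universal-screen result to an RtI tier and a monitoring cadence.
    All cutoffs and cadences are illustrative, not real policy values."""
    if benchmark_score >= proficiency_cutoff:
        # Meeting standard: long cycle only.
        return ("universal", "benchmark assessments a few times per year")
    if benchmark_score >= intensive_cutoff:
        # Below standard: targeted intervention, medium cycle monitoring.
        return ("targeted", "progress monitoring every few weeks")
    # Well below standard: intensive support, short cycle monitoring.
    return ("intensive", "progress monitoring every few learning sessions")

tier, cadence = monitoring_plan(benchmark_score=62,
                                proficiency_cutoff=70,
                                intensive_cutoff=50)
print(tier, "-", cadence)  # targeted - progress monitoring every few weeks
```

In practice the “cutoffs” are set by practitioner teams using diagnostic data, and the cadence is adjusted as interventions show (or fail to show) effect.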

Notably, formative assessments may be more standardized and formal or they may be individualized and informal.  A powerful mode of formative assessment is a teacher walking through a room as students work, asking questions and checking for understanding.  Alternatively, formative assessment may involve sophisticated and computer-based standardized measures.  Variations in formative assessments may stem from variations in the elements of the instructional core (different teachers, different students, and different content) or from constraints related to things like time and technology.  This entire process may happen in a very structured and mechanical way, or it may happen much more naturally and intuitively.  What is most important is that it is, in fact, happening.

It should also be noted that the formative assessment process is not exclusive to the teacher.  Perhaps the most powerful mode of formative assessment is for the student to self-monitor and assess their own progress.

Evidence and Formative Assessments

The body of both comparative and peer-reviewed scientific evidence for the effectiveness of formative assessment is (in my professional opinion) strong.

Black and Wiliam (1998), in a meta-analysis, found that student achievement gains associated with formative instructional practices were “among the largest ever reported for educational interventions.”

Similarly, Hattie (2011), in a synthesis of over 50,000 studies, identified strategies related to formative assessment and RtI as having among the largest effect sizes calculated.

From a comparative system perspective, formative assessment and responsive teaching form the instructional basis of practically every high performing education system.  Finland, a system perhaps more averse to summative accountability testing than any other in the world, uses formative assessment extensively.  In Schwartz & Mehta’s chapter on Finland in Tucker’s comparative study Surpassing Shanghai, it is noted that “While the Finns do not assess for accountability purposes, they do an enormous amount of diagnostic or formative assessment at the classroom level.”

Notably, when a Finnish principal was asked (in Schwartz & Mehta) how well she knew students were performing, she answered that there was so much formative assessment data at her disposal it was impossible not to know.

Formative Assessments in Eagle County Schools

Eagle County Schools relies on a number of formative measures to guide instruction.  Choice over the appropriate use of these formative measures is left to the building practitioners, including the building principal, teacher leaders, and classroom teachers.

Depending on grade/developmental level, student characteristics, staff preferences, content area, or specific purpose – the following is an incomplete list of formative assessments used in Eagle County.

  • Early Childhood & Elementary
    • GOLD Assessment
    • mCLASS (DIBELS Next/IDEL)
    • AIMS Web
    • Core Knowledge Language Arts
    • Engage New York, Literacy & Math (Achieve)
    • District Formative Measures (ECS Teacher Developed)
    • Classroom grades (standards based)
  • Middle School
    • mCLASS (DIBELS Next/IDEL)
    • Renaissance STAR
    • NWEA MAPS
    • Engage New York, Literacy & Math (Achieve)
    • District Formative Measures (ECS Teacher Developed)
    • Classroom grades
  • High School
    • NWEA MAPS
    • District Formative Measures (ECS Teacher Developed)
    • Classroom grades

Conclusion

Eagle County Schools is, admittedly, not yet a globally high performing system.  But we are in our first year of building an instructionally focused assessment system patterned after global high performers.  As such, formative assessment is a central part of that effort.

References

Black, P., & Wiliam, D. (1998).  Inside the Black Box: Raising Standards Through Classroom Assessment.  Phi Delta Kappan, 80, 139-148.

City, E., Elmore, R., Fiarman, S., & Teitel, L. (2009).  Instructional Rounds in Education: A Network Approach to Improving Teaching and Learning. Cambridge, MA: Harvard Education Press.

Eck, J., Bellamy, G., Schaffer, E., Stringfield, S., & Reynolds, D. (2011).  High Reliability Organizations in Education.  Noteworthy Perspectives, 1-48.

Hattie, J. (2011).  Visible Learning for Teachers: Maximizing Impact on Learning.  New York, NY: Routledge.

Tucker, M. (2011).  Surpassing Shanghai:  An Agenda for American Education Built on the World’s Leading School Systems.  Cambridge, MA: Harvard Education Press.

Infographic from the National Center on Education & the Economy

Testing

I can’t think of anyone who likes to take tests. The mere mention of acronyms like ACT and SAT conjures up cold sweats and bad memories of hours sitting in auditoriums or school cafeterias feverishly coloring in bubbles in a state of nervous anxiety. Yet, these experiences have become such a foundational element of the American education system that they are almost a ritualistic rite of passage, or perhaps a form of systemic hazing.

While there aren’t many people who like tests, I also can’t think of an educator worth their salt who doesn’t place high value on valid, reliable, and timely assessment data.

A quality educator uses testing data, tightly aligned to the curriculum, to see how students are progressing in their mastery of course content and skills. The quality educator then adapts the instructional technique (differentiation), or lines up additional supports (specialists or assistive technology), to help each student reach the goal.

FORMATIVE AND SUMMATIVE TESTING

Testing for the purpose of adapting instruction and providing support is known as formative assessment. It is a hallmark of all high performing education systems.

Paradoxically, most of the tests mandated through state or federal laws (like No Child Left Behind) are not formative in nature and have almost no instructional value. These tests are summative – they occur at the end of instruction to measure what the learner retained.

These summative tests are given to students in subjects including reading, writing, math, science, social studies, and English language proficiency. They happen near the end of the school year, and it takes months before we get the results. This makes summative tests akin to an autopsy – they give us great information about what happened, but arrive far too late to help the patient.

ASSESSMENT IN THE UNITED STATES & IN HIGH PERFORMING SYSTEMS

Interestingly, there is only one system in the world where every student is tested at the end of every year using a machine-scored, multiple choice format: the United States.

Contrary to popular belief, high performing global systems (including Finland) do have tests, but these tests are formative in nature and are used to direct instructional decisions and provide learning support.

When high performing systems do give end-of-year summative tests, they are very different from the machine-scored, bubble-sheet forms we see in Colorado.

Instead of testing every student every year, high performing systems test at key “gateway” points in a student’s progression. These assessments are given at the transition from elementary to middle school, from middle school to high school, and on exit from high school.

High performing systems also tend to use tests which require students to demonstrate skills like writing, formulating and defending a position, synthesizing complex information, problem solving, and critical thinking. Classroom teachers (instead of machines) frequently score these tests, so that feedback on how instruction might be improved immediately gets to where it can do the most good.

TESTING FOR ACCOUNTABILITY VS. TESTING FOR INSTRUCTION

In Colorado, we test every student every year from grade 3 through high school in a variety of subjects.  One driver behind this approach is to amass data so we can identify, shame, and punish schools and teachers with low test scores – and occasionally reward those with high ones.

No high performing system in the world uses such an approach as a strategy for quality.

Instead, high performing education systems are judicious about their use of testing and insist on clear and immediate connections to teaching and learning.

THE PATH AHEAD FOR TESTING IN COLORADO

Colorado is in the process of redesigning its system of assessments to move away from those scanned bubble sheets covered with #2 pencil lead. It is replacing those tests with computer-based tests, which are intended to measure higher-order thinking skills instead of multiple choice test accuracy.

The tests in English language arts and math are called PARCC (Partnership for Assessment of Readiness for College and Careers). They are aligned to the internationally benchmarked high expectations embedded in Colorado’s Academic Standards and the Common Core.

These efforts to improve the assessments and to align to high expectations are the right work. However, the PARCC test is still a summative, grade-by-grade, every-student-every-year test that is then hitched to the state’s blame-and-shame system of accountability for schools and teachers.

We should applaud efforts to improve the state’s assessment system – but we should know by now that the era of hyper-testing and punishment ushered in under the federal No Child Left Behind law isn’t working for our kids, schools, or communities.

THE LOCAL CONNECTION

Our schools are obligated by law to participate in these big data state-testing schemes. However, we are putting our focus on formative assessments that link closely to our curriculum and serve to improve instruction.

While we have to take part in the big-government solutions imposed on us by Washington, D.C. and our own state legislature, we can choose to put our energies into formative measures that will actually be of value to our students – the by-product of which is improved student learning.

Note – this article originally appeared in the Vail Daily

 


One of the ways we’ve started working to improve outreach to our community at Eagle County Schools is by creating an “Insider’s Academy,” where community members attend a series of courses on how public education works.

At the first meeting, I gave a presentation that outlined the (many) purposes of public education and offered a brief look into its history in the United States.

My PowerPoint from that presentation is linked below.  I hope it is of some value to anyone interested in the topic or in mounting a similar effort in their own community.

PowerPoint: Purpose of Public Education

It almost goes without saying that the leading American reform du jour is to construct educator (particularly teacher) evaluation systems that use student achievement as a significant component of the evaluation.

The exact components of such systems remain a work in progress across the country and always involve significant trade-offs (e.g., efficiency vs. authenticity, complexity vs. understandability, generalizability vs. specificity).  As if designing such systems in schools across the entire country weren’t complex enough, public policy decisions are already driving how these (still largely experimental) systems will be used for things like teacher accountability, dismissal, licensure, and compensation.

Here in our district, Eagle County Schools, we have been working on these systems for nearly a decade.  While I think our system is stronger than many being proposed nationally, it is still very much a work in progress and a journey.

This post is not intended to argue the prudence of this work, nor is it intended to prognosticate about the probability of this sort of reform resulting in dramatic improvements in educator quality.  It is intended to recognize that, like it or not, much of the country is currently engaged in designing these sorts of systems, and we can all benefit from the practical lessons learned along the way.

As our educators wade (yet again) into the work of redesigning an evaluation system, there are some key design elements that we might keep in mind so that the “right drivers” (to steal Michael Fullan’s words) are at the center of these efforts.

As it may be helpful to other educators and school leaders engaged in this work, I present the design principles I’ve asked our educators to hold close as they go about the construction (or re-construction in our case) of a compliant evaluation system.

Please see the link below for the design principles we are using in Eagle County.  As always, I appreciate and look forward to any feedback you might have.

Design Principles for Evaluation Systems

Shane Vander Hart recently wrote a piece for his very entertaining and thought-provoking blog, Caffeinated Thoughts, responding to my remarks at the 2011 SAI Annual Conference.

After gently letting left-leaning Jennifer Hemmingsen have it over her coverage of education policy in Iowa, I would stand to lose my “I don’t give a damn about politics, let’s improve schools” credentials if I didn’t give right-leaning Shane Vander Hart the same treatment.

Let’s first set the record straight about the Iowa Core and the Common Core. I don’t expect that Shane and I will ever see eye to eye on this, and that’s OK – in this country we are free to disagree, and we are better for an open exchange of ideas. As I understand it, Shane’s position is that the Iowa Core/Common Core is some sort of Obama-driven-federal-takeover-plot aimed at indoctrinating your children to love Chairman Mao and slowly transform this country into North Korea. OK, I may have embellished that last statement … slightly (apologies, Shane – just having some fun at your expense!).

Where does this conspiracy-theory drivel come from? The fact is that the Common Core was, and remains, a STATE-led (not a federal-government) initiative. The Common Core represents student expectations in reading and math that are on par with those of the highest performing systems in the world and reflects the kinds of skills our students will need to be competitive in a global context. A common thread among the highest performing school systems in the world is the adoption of clear and rigorous standards for all students (see example after example in Michael Fullan’s latest work and in Marc Tucker’s analysis of high performing school systems).

Shane goes on to (falsely) state that the Iowa Department of Education and the State Board had no authority from the legislature to establish the Iowa Core or merge it with the Common Core. This is just silliness. The fact is that the Iowa legislature directed the Iowa Department of Education and the Iowa State Board to establish the Core (which contains the Common Core as its math and English/language arts elements). As to the claim that this wasn’t an open process, every step the State Board took to include the Common Core in the Iowa Core was a public proceeding, as is every action taken by the Board. Sorry Shane, this is within the lines.

Shane goes on to make the dreadfully predictable case that I am pushing for some sort of hyper-centralized school system. Actually, as I’ve stated many times before and stated in my remarks to the SAI Administrators, I’m calling for a reasonable balance of all the players in the education system. Each part has an important role to play, and Iowa’s schools will be best served if all the parts are working together and in symphony.

Governor Branstad was clear to me about my role in Iowa: Make these schools among the best in the world. That happens by building capacity at ALL levels and focusing the whole system on carefully selected strategies tailored to this context. It will not happen by closing your eyes and hoping all 350 districts in the state of Iowa spontaneously pull off becoming a world-class system on their own through some miraculous convergence.

Improvement to put Iowa on par with the highest performing systems in the world takes an intentional and focused effort. Raising useless and worn-out rhetoric about government takeovers, “indoctrination,” and “educrats” just regurgitates political soundbites and does little to move Iowa forward toward being a great school system.

We do need to build up and support local capacity – but we also need to focus our efforts in a way that makes this fractured patchwork of schools start to move as a system.

Jason Glass
Des Moines, IA

Three Dancing Figures. Sculpture by Keith Haring. Photo by Phil Roeder.

Of all the policy debates going on with the American education system, certainly one of the most intriguing is defining the “appropriate” role of the federal, state, and local actors and agencies. For me, it’s difficult to argue any of these levels are unimportant when it comes to education policy in the United States – the rub comes in trying to define what the best role for each of them should be.

Perhaps rather than engaging in what will be an endless debate over the “appropriate” role of the federal government in education, or at the state level reigniting the cyclical debate over Dillon’s Rule versus Home Rule as the best policy approach, we should consider how we can engage each of these important levels in their areas of strength and find the right balance across them.

The national perspective is critical in establishing high-level goals and connecting ideas. One wonders whether students with disabilities or minority students would have the same kind of access to education they do today if not for the leadership of the federal government – or whether the knowledge base for something like Response to Intervention would have grown at the pace it has without national support. States play an absolutely critical role as well. It is important for states to set high expectations, fairly monitor progress toward those expectations, provide adequate funding and accounting oversight, and continually build educational system capacity within each state. Finally, districts (and I would include everyone working at the local level here, right down to that crucial classroom teacher) have a role in making the critical day-to-day and tactical decisions about how teaching and learning happens.

Rather than spending energy trying to push one group out of the picture (a fantasy, in my opinion), or trying to imagine a world where one group stands completely alone in making all the educational policy decisions (also a fantasy), we should consider the strengths each level has, play to those strengths, and find the right balance among the three to move the whole system forward.

It’s got to be about balance and playing to strengths.
