The previous APPR system was indefensible. And, as we are learning, the present APPR system is indefensible, too. To create another system that is a revision of the present system and expect different results is where the insanity enters. Yet, the proposals that are on the budget table will only serve to perpetuate a flawed system while re-elevating the level of drama. Indeed, the din between the warring parties is exacerbating the insanity.
Although the changes he wants are all the wrong changes, at least the Governor is making a proposal. Rather than just countering the Governor with vitriol and demonstration, what if a better alternative were actually proposed? So far, alternatives aren’t being advanced; opponents of the Governor’s plan seem either to be defending the status quo or to be arguing that evaluation isn’t necessary at all. That is the wrong approach, and it won’t sell well. If you aren’t part of the solution, then you are part of the problem, right? Later in this post an alternative approach is offered. First, though, here’s a quick review of what a system of evaluation doesn’t need, so that we know not to include it in a future system:
Things that don’t help
We don’t need independent observers in the APPR system. There is no evidence that outside observers are warranted (nor does anyone have the money to pay for them). The inflation in the APPR score is due primarily to the conversion scales used to translate rubric scores to a number of points. The Lead Evaluators we now use in schools are completely capable of providing teachers with growth-producing feedback based on the evidence gathered from classroom visits and other artifacts.
We don’t need to increase dependence on state test scores, or growth scores, for teacher accountability. Besides the fact that value-added modeling is statistically dubious, the state is able to provide such scores for only a small fraction of teachers — merely 14%. Because the state can’t provide growth scores for the rest, a locally developed system had to be put into place. That system, unfortunately misnamed Student Learning Objectives, varies widely among districts and is perceived as being subject to artificial manipulation.
We don’t need different sets of standards for teachers and principals (with no standards for the central office). We need to align our systems, which means aligned standards are needed, too. And, as it is now, there are no agreed-upon standards for other educators. Perhaps most importantly, the district is left out of the equation, having no standards or common expectations.
We don’t need all of those teacher and principal rubrics. Right now, 19 different teacher rubrics and 10 different principal rubrics have been approved. The notion of providing local choice has resulted in a cacophony.
We don’t need a variety of scales and conversion charts. This just invites abuse and manipulation of the system. Whether or not abuse has actually occurred is moot: the wide variation opens the APPR system up to the perception that it has been manipulated, which is enough to doom it. Grant Wiggins, in a recent post, compared data from a few of the highest-performing schools in NYC to data from the lowest-performing schools in NYC and found an inverse relationship between student achievement and teacher scores. A similar analysis of districts in Central New York concluded that there is a moderate positive correlation between student achievement and teacher scores.
Choosing the right drivers for a better APPR
What we do need is a different approach to APPR, one that is based on a different paradigm. That paradigm has been successfully described by Michael Fullan. Instead of a system that is based on the “wrong drivers of change,” we need a system based on the “right drivers” of change. Here’s a snapshot:
| Wrong Drivers | Right Drivers |
| --- | --- |
| Accountability: using test results, and teacher appraisal, to reward or punish teachers and schools | Capacity building |
| Individual teacher and leadership quality | Collaborative work |
| Technology (focusing on the technology as the solution) | Pedagogy |
| Fragmented strategies | Systemic strategies |
Fullan clarifies that the items listed in the “wrong drivers” category do have a place in schools but that they don’t drive change and increased student achievement. It is possible to create an APPR system that reflects the “right drivers.”
A New APPR Architecture
Instead of arguing over the old or new APPR systems, and instead of arguing about how much to count test scores, all of the energy that has been sucked away in the drama and vitriol could be refocused on developing an APPR system that reflects the “right drivers” of change. It is possible, if we try, to develop a system that invests in social capital and in capacity building for our educators.
Here’s an example of how an aligned system of evaluation and accountability might look, with just a few of the dimensions listed for illustrative purposes:
| Dimension | Teacher | Principal | District |
| --- | --- | --- | --- |
| Collaboration | Works with colleagues to identify the most important learning standards, assess student learning, and make instructional changes based on the student work. | Includes collaboration time, on a regular basis, in the master schedule. | Provides professional development, protocols, and other resources for teachers to use when collaborating. |
| Data from Stakeholders | Uses survey data from students and families to make instructional and cultural adjustments to the classroom. | Administers the survey tool and makes the results available to teachers in a usable format. The principal uses the school aggregate with the school improvement team to identify and plan for continuous improvement. | Provides the resources to gather such data from students and families, such as the Tripod Survey or other similar tools. |
| Local Achievement Data | Works with colleagues to use the results of common formative assessments to plan subsequent instructional interventions. | Ensures that common formative assessments are developed, administered, calendared, and collaboratively analyzed. | Provides professional development, protocols, and time for the development of common formative assessments that match the curriculum (which includes state standards as well as locally identified standards such as 21st-century skills or the 4Cs). |
| State Accountability Data | Participates in accountability assessment requirements. | Administers the state accountability assessments to the particular grade levels and samples as required. | Publishes the results of the state accountability assessments along with the district’s locally adopted measures of student achievement. |
| Curriculum | Delivers classroom curriculum that is consistent with that in other similar classrooms and is based on the district-wide curriculum. | Ensures that the district’s guaranteed and viable curriculum is being used by all teachers. | Operates an active curriculum review cycle that monitors the effectiveness, alignment, and currency of the guaranteed and viable curricula. |
To develop a complete architecture, there are many resources upon which to call. The Diagnostic Tool for School and District Effectiveness (DTSDE) rubric, for example, is gaining ground in the state as a reflective school-improvement tool. The School Alliance for Continuous Improvement is another system to look at. The Baldrige Criteria, as well as Effective Schools resources, can be helpful, too.
The bottom line is that we need a comprehensive, aligned system of evaluation and accountability that is based on research, best practice, and common sense. We didn’t have it in the old APPR system and we don’t yet have it in the present system. The changes that have been suggested by the Governor won’t help, either. It’s just more of the same. If student learning and continuous improvement are, in fact, our common goals, then another approach is crucial. Let’s stop the insanity and be part of the solution.