The Massachusetts Model Evaluation Tool: The next best thing or …
by Wayne Ogden
Starting in 2010 with an application for “Race To The Top” (RTTT) funds, the Massachusetts Department of Elementary and Secondary Education (DESE) began an ambitious journey toward a near-total redesign of educator evaluation in the Commonwealth. Constituents representing the largest teacher unions (MTA & AFT), as well as organizations representing school superintendents and principals (MASS, MESPA & MASSA), were invited to join the DESE Task Force to develop a new evaluation tool and procedures, a major shift away from the prior ways of evaluating the state’s 50,000-plus educators.
Two years and thousands of hours later, the new system is getting underway. With the help of federal money, we have new model evaluation tools and performance rubrics in place. The DESE has published “Guides” for schools and districts on implementing the teacher evaluation tools, as well as similar documents for the evaluation of principals and superintendents. The DESE has also published “Model Contract Language” for use by school districts and their unions in the mandatory collective bargaining process that must accompany these changes.
Most underperforming school districts and some “early adopters” have a year of practice under their belts, and the remaining school districts are gearing up for full implementation of the new process by the 2013-2014 school year. While school districts and the state teacher unions prepare for these changes, they do so without knowing what the final two critical components of Massachusetts’ “Model System” will look like. The unfinished portions of the system are the two most complicated and controversial: how to include ratings of educator impact on student learning and how to incorporate student and staff feedback into the evaluation system.
Changes to the process by which Massachusetts’ educators have been evaluated are widely believed to be long overdue. In the words of a former school teacher and principal who served on the DESE Task Force, “Current evaluation practices in the state are wobbly, at best. We’re often stuck in place, unable to move beyond simple compliance with procedures. Now, the Task Force and the Board (of Education) have a chance to break the logjam. We can create a more ambitious, focused and growth-oriented framework. I’m hoping for a breakthrough.” (DESE Webinar, January 10, 2012).
This collaboration, with the exception of the AFT, which did not send a representative to the DESE Task Force, has been described as inclusive and professional. After much negotiation and compromise, all elements of the “Model System” seemed to have broad support until recently, when some of the challenges of creating such sweeping changes began to surface as educators and districts tried to take the new system from a “state model” to local implementation.
The first bump in the road to implementation appeared this spring in local school districts, when local teacher union leaders and regional MTA representatives, contrary to a statewide MTA leadership endorsement of the “Model”, tried to substantially alter some key language of the Model Contract in negotiations with local School Committees. The second impediment to successful implementation of the new system came in the form of financial considerations: who is going to fund the costs associated with training both administrators and teachers in the new practices required by the evaluation system? Legislation currently pending on Beacon Hill would answer this question by requiring local districts to absorb the costs of training using federal funds if no local money were available.
The third, and I believe most threatening, challenge facing successful implementation concerns just how educators will be “trained” as part of this undertaking. The DESE Task Force has promoted the new system as one aimed at “collaboration and continuous growth” through a five-step cycle: self-assessment; analysis, goal setting, and plan development; implementation of the plan; formative assessment/evaluation; and summative evaluation. This complex set of goals and cycle of improvement, in addition to a very detailed set of performance rubrics, requires a depth of teacher training previously absent in most school districts. Most, if not all, school districts simply lack the time, let alone the expertise, to conduct sessions of this depth and magnitude, and they must do so on a short timeline. Even the present level of training of principals in their work as evaluators and instructional leaders has been wildly uneven across the Commonwealth, usually dependent on the budgetary wealth of a particular school district. Add to this the many schools and districts where visits to classrooms and meaningful discussions of practice are desultory at best.
What I find unsettling is that, despite these challenges posed by the model system, many school districts are attempting to introduce the new system with outdated and simplistic approaches to training. Routinely, school leaders across the state are being trained apart from their teachers in the specifics of observing and analyzing teaching according to the model system. In some school districts the teachers receive training from their unions; in others, training is provided by their districts. I strongly suggest that any professional development in support of this model system that segregates teachers from their evaluators is a recipe for misunderstanding and failure.
If, as the DESE suggests, the new system is about “collaboration and continuous learning”, then let’s train teachers and their evaluators together on all phases of the evaluation tools and performance expectations. Let’s encourage teachers and principals to have open and candid conversations about what is and is not good instructional practice. Training educators in groups isolated from one another will result in confusion about the criteria for proficient and exemplary practice. That confusion will lead to conflict, grievances, and arbitrations: the symptoms of the old “us versus them” mentality associated with decades of teacher evaluation. Furthermore, it is unlikely to produce the improvement in student performance that everyone seems to be calling for in this major change of practice.