New York City Rates 18,000 Teachers
New York City has released the ratings of roughly 18,000 teachers. The results showed that teachers who were most and least successful in improving their students’ test scores could be found all around — in the poorest corners of the Bronx, and in middle-class neighborhoods of Queens, like Bayside and Forest Hills. The teachers taught in schools in wealthy swaths of Manhattan, but also in immigrant enclaves. They were in similar proportions in successful and struggling schools, and they were just as likely to have taught the most challenging of students and the most accomplished.
The ratings covered three school years ending in 2010, and are intended to show how much value individual teachers add by measuring how much their students’ test scores exceeded or fell short of expectations based on demographics and prior performance. Such “value-added assessments” are increasingly being used in teacher-evaluation systems, but they are an imprecise science. For example, the margin of error is so wide that the average confidence interval around each rating spanned 35 percentiles in math and 53 in English, the city said. Some teachers were judged on as few as 10 students.
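To make the reported margins of error concrete, here is a small arithmetic illustration using the article's figures (the 50th-percentile point estimate is an invented example; only the interval widths of 35 and 53 percentiles come from the article):

```python
# Given a teacher's point-estimate percentile and the average interval
# width the city reported, compute the plausible range of true rankings.
def interval(point, width):
    half = width / 2
    return max(0, point - half), min(100, point + half)

print(interval(50, 35))  # math: (32.5, 67.5)
print(interval(50, 53))  # English: (23.5, 76.5)
```

In other words, a teacher rated exactly average in English could plausibly rank anywhere from roughly the 24th to the 76th percentile.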
The ratings were begun as a pilot program four years ago to improve instruction in 140 city schools.
“I believe the teachers will be right in feeling assaulted and compromised here,” Merryl H. Tisch, the chancellor of the State Board of Regents, said in an interview. “And I just think, from every perspective, it sets the wrong tone moving forward.”
In releasing the reports, New York became only the second city in the country where teachers’ names and ratings have been publicized.
Whether or not they are made public, such ratings have been gaining currency, in part because they are favored by the Obama administration’s Race to the Top initiative. New York City principals have made them a part of tenure decisions. Houston gave bonuses based in part on value-added measures, though that program was reorganized. In Washington, poorly rated teachers have lost their jobs.
The ratings are more than a year old and are based on test results that have been somewhat discredited, since the state later recalibrated the scoring. Still, they offer a peek at the state’s future evaluation system, which will use value-added measures for at least 20 percent of teachers’ evaluations.
In simple terms, value-added models use mathematical formulas to predict how a group of students will do on each year’s tests based on their scores from the previous year, while accounting for factors that include race, gender, income level and other test results. If the students surpass expectations, their teacher is rated well — “above average” or “high” under New York’s models. If they fall short, the teacher is rated “below average” or “low.”
Many teachers point out that the scores cannot account for other factors: distractions on test day, supportive parents or tutors, allergies, even a dog barking near the test site. There are also schools where students are taught by more than one teacher, making it hard to discern individual contributions.
“This data is based on ONE test taken on ONE day when several variables, such as child poverty, quite possibly will affect student performance,” Lea Weinstein, a teacher at Middle School 45 in the Bronx, wrote to The New York Times in response to her rating. “Yes, I administered this test that generated this data to my sixth-graders two years ago. I no longer teach sixth grade, and I no longer teach in the same school, or even the same subject. How is this data relevant today?”
In New York, the ratings cover teachers in fourth through eighth grades, because of when state tests are given. They are distributed on a curve, so that for 2009-10, 50 percent of teachers were ranked “average”; 20 percent each “above average” and “below average”; and 5 percent each “high” and “low.” Teachers received separate reports for math and English, though in the lower grades they generally teach both. The data released on Friday did not include teachers in charter schools or District 75.
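The 5/20/50/20/5 curve described above amounts to mapping a teacher's percentile rank to a label. A minimal sketch of that mapping, using the cutoffs implied by the article's percentages:

```python
# Map a percentile rank (0-100) to a rating label under the
# 5/20/50/20/5 curve New York used for 2009-10.
def label(percentile):
    if percentile < 5:
        return "low"
    if percentile < 25:
        return "below average"
    if percentile < 75:
        return "average"
    if percentile < 95:
        return "above average"
    return "high"

print(label(3), label(50), label(97))  # low average high
```

Combined with the wide confidence intervals reported earlier, a single teacher's plausible range of true rankings can easily straddle two or three of these labels.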
At the Ocean School in Far Rockaway, Queens, where virtually every student is poor enough to qualify for free lunch, none of the 12 teachers in the ratings ranked “low” or “below average.” The city gave the school a “C” on its progress report last year. At P.S. 290 on the Upper East Side in Manhattan, which got an “A” on its progress report, the 16 evaluations included one “low” and four “below average” ratings.
I believe that teachers need to be rated, but is this the correct way of doing so? Professional licensed supervisors already observe teachers in the K-12 system; those on probation are observed three times a term, and principals can walk into any class in their school and observe a teacher teaching. As the article states, some teachers were judged on the basis of as few as 10 students. Teachers of subjects like music and art were not measured at all. External distractions were not taken into account. The average margin of error spans 35 percentiles in math and 53 percentiles in English. The test used has been “somewhat discredited.” The head of the New York State Board of Regents (New York State’s accrediting body) stated, “I believe the teachers will be right in feeling assaulted and compromised here.” Charter schools weren’t even included in the ratings.
If the measurements are so imprecise and so flawed, why use them? It appears that New York City, in pursuit of “Race to the Top” funding, is looking for a way to evaluate teachers. I think the city needs to keep looking; I do not think it has found the correct way.