My district unintentionally made this process even more difficult. They put the American Literature test for juniors before the US History test. Almost all of my students came out of the American Literature test with the impression that it was very easy. It's mostly a test that asks students to analyze readings rather than recall information. Many juniors can do this without the benefit of an American Literature class. This brings me to my main point. We compare teachers in American Literature in the same manner we do in American History. It's all about test scores. How is this fair? They are the same grade level, but one is based on analytical reading and the other is based on recalling a ridiculous amount of information. Students in American Literature can practice the skills needed to pass the test over and over again, while teachers in American History have very little time to reteach, because if we do, we won't finish the content. I tried to preach that the US History test would not be the same as the American Literature test. I had to hope and pray that my kids took me seriously.
The other comparison that makes this ridiculous is comparing test scores from year to year. I am a perfect example of this. Last year, I taught an International Baccalaureate class (if you don't know what that is, it's higher-level coursework similar to AP) that was required to take the US History EOCT. My test scores were awesome. Out of almost 60 kids, I only had 3 below a 70. I had fewer than 7 more who fell under the exceeds category. The three failures either transferred in from another country or had attendance issues. This school year, I teach 5 regular-level US History courses. I have worked my tail off trying to get the kids this year to care and take the test seriously. I honestly have worked harder at it than I did with my IB kids last year. I can almost guarantee the scores this year will be lower. How is it fair to add this into my evaluation? Teaching is a profession where you continually have to get better, and I have worked since last year to do so. Unfortunately, if you judge me on test scores, it does not show that.
Don't get me started on the new system that evaluates based on an impressive list of statistics that are supposed to show how you helped a student improve. How do you accurately show that in US History? Students have not taken a Social Studies test since middle school, and they change a great deal in the almost three full years in between. Some get close to graduation and actually begin to work at school. Their scores improve, but this is all from outside factors. Some go the other way based on changes in their lives, such as drug abuse. Their scores would drop, and it's from nothing I did. I would be evaluated on that.
Where I think these new statistics will be very interesting is in seeing their effect on what are considered the best schools in the state. If the state is truly measuring the school's effect on student improvement, how will these schools be measured? Many of these schools get students who did well on tests in their primary years and continue to do well on tests in their secondary years. The school's effect on these students is small. How do you judge a school if it goes from a 92 percent pass rate to a 93? The improvement is small, but there is not a lot of room for movement.
There has to be a better way to judge teachers and schools. Teachers have to be free to innovate and try new things without the fear of test scores coming down on them. Teachers also have to have room to grow with experience. We want teachers and schools that are constantly improving and innovating. Find a way to get rid of the ones who are just here for the paycheck, but cultivate the ones who want to improve and be good. That's what we really need. Oh, and a society where parents are responsible and involved in their child's education...