On November 16, 1801, a group of New York politicians led by Alexander Hamilton founded a political broadsheet that would eventually become one of the most influential publications in the metro area.
Recently, it decided to cease being a newspaper…and become a tool of propaganda instead.
On Friday, February 24, after a lengthy court battle, the New York City Department of Education was forced to comply with a Freedom of Information Law (FOIL) request filed by the New York Post, the aforementioned tabloid founded over 210 years ago. The DOE released the infamous Teacher Data Reports (TDRs)—the rankings of supposed teacher effectiveness based on standardized test scores in English Language Arts and mathematics.
In the days that followed, each of the city’s major media outlets released the teacher scores (with names attached) in varying formats. Some ranked teachers from highest to lowest percentile. Others released searchable databases by district, borough and school. Still others, such as the New York Times, published the data with lengthy addenda explaining that the scores shouldn’t be used to rate or rank teachers, since they represented a single indicator based on outdated, faulty data with a ridiculously wide margin of error.
(These explanations, by the way, were provided by the DOE itself, along with a recommendation that the media treat the data fairly as it was intended.)
However, the New York Post, the paper that initiated the FOIL request, didn’t stop at a mere spreadsheet of names and numbers.
After releasing its own version of the teacher data—with language so editorialized it hardly passed as hard news—the Post released a story about the alleged parent uproar over a Queens teacher who received the lowest scores in the city.
The story’s lead paragraph read: “The city’s worst teacher has parents at her Queens school looking for a different classroom for their children.”
In that one sentence, the Post lost the last vestige of journalistic integrity.
The controversy over the TDRs embroils teachers, administrators, parents and political leaders. The arguments range from the valid to the ludicrous.
The data was flawed.
It’s impossible to rate teachers based on only one indicator in each subject.
The data doesn’t take into account the myriad of extenuating circumstances.
The DOE secretly wanted the scores released.
The DOE supposedly encouraged media outlets in their FOIL requests and even expedited the process.
The DOE got into a devil’s compact with the UFT leadership, the mayor, Fox News, the Republican Party, the Tea Party, the Freemasons, Jesuits, the Vatican, the Trilateral Commission and the Bilderberg Group to publicly tear out the entrails of “ineffective” teachers…
(Okay, that last one was far-fetched—but you get the point.)
The actual release of the data is a moot point. Until a new law or federal court ruling decides otherwise, the scores are out, and will probably be released again in the future (even if the DOE itself stops collecting such scores).
The real issue, one with implications reaching far beyond the classroom, is how media outlets use that data. While it is true that the First Amendment gives newspapers quite a bit of leeway, there are definite boundaries that journalists cannot cross.
When a newspaper publishes a story based on a flawed, incorrect and unsubstantiated source, it crosses that boundary.
When a newspaper uses false data to publicly shame an individual, it is not only unethical. It is libelous.
The inaccuracy of the TDRs was acknowledged by teachers, administrators, and even the DOE itself. All parties agreed that the data was imperfect. What’s more, the data has such a wide margin of error that any percentile derived from it is akin to throwing a dart at a dartboard blindfolded.
Thus, the TDRs are a flawed, inaccurate, and therefore non-credible source—by open admission from the powers that be.
The papers can print the data, as long as their stories include multiple sources discussing it. So far, all the newspapers have covered this base (in the Post’s case, just barely).
Yet labeling teachers in superlatives, as “best” or “worst,” based on TDR data does not pass the journalistic smell test. In the same vein as the article about the Queens teacher, the Post also published a piece about teachers with the highest percentiles. The following was the lead to the story:
“The city’s top-performing teachers have one thing in common: They’re almost all women.”
Not only does this statement say absolutely nothing (considering the vast majority of teachers in the city are women anyway), but it makes a dangerous classification—the same kind of classifying that delivered that Queens teacher to a virtual lynch mob of ill-informed parents.
When news stories throw around a value judgment based on one singular measure—a measure so ridiculously flawed even its authors disavow it—the journalists behind those stories are trading in what amounts to false, unsubstantiated information.
It is, in effect, mocking (or exalting) people based on a probable lie. That, ladies and gentlemen, is the textbook definition of libel.
The New York Post’s editorial pages have attacked teachers’ unions and teachers for years now. Yet this frenzied hatred never hit the news headlines as hard as it did this weekend.
They have used unsubstantiated, inaccurate data to shame teachers, exploiting the unfortunate quotes of ill-informed parents along the way to whip up support for their negativity.
Worst of all, they have the gall to couch this journalistic lynching as hard news.
The New York Post should stop calling itself a newspaper. It is now no better than a common propaganda pamphlet that panders to the lowest common denominator. At times I even agreed with the Post politically—but their tactics disgust me.
Finally, for those whose reputations have been ruined by this pseudo-journalism, there is a weapon far more powerful than any ordnance. It usually has a suit, a briefcase, and an avalanche of legal motions.
See you in court, Rupert.
Ready for Inspection! The Problem with “Quality Reviews”
“No matter how nitpicky, how fastidious a reviewer can be, he (she)’ll never, ever come close to what you actually do in your classroom.”
Some time ago, an acquaintance from the Department of Education, a science specialist, told me this when I was complaining about State Quality Reviews (SQRs).
As true as this is (and he should know—he actually does SQRs for the district), it still doesn’t explain how a two-day beauty pageant defines years’ worth of expertise and academic achievement.
In New York State, that’s exactly what an SQR does.
Those of you in the Neighborhood living outside New York may have something similar. They come under various names: reflections, reviews, audits, analyses. Here in the Empire State, these inspections are known as Quality Reviews, with the appropriate air of a Dickensian workhouse.
These official reviews are masked as “learning experiences” meant to provide “reflective feedback” on our practice. After you choke a little bit on your own vomit, you’ll realize their true purpose: to make sure schools do exactly what they’re supposed to do in the manner expected by the state education department—or at least according to the whims of the pack of inspectors sent to your school.
The reviews come in multiple levels. The peer review, a less invasive but no less insidious device, involves groups of teachers and administrators rating each other. The educational equivalent of a gladiatorial contest, the peer review is usually less intense since fellow teachers and admins rarely want to crap on their own brethren.
The State Quality Review, or SQR, involves a pack of reviewers from a mix of different places, from the district to the DOE offices in Tweed to the state offices in Albany. A two-day affair, the SQR is usually triggered when a school suffers a drop in its rating or is designated a School in Need of Improvement under No Child Left Behind.
Even this level of review comes in different degrees. For example, if your school dropped in ranking due to poor test scores in targeted areas, such as English Language Learners (ELLs) or Special Education students, the review will most likely focus on the school’s work in that area. Otherwise, in the case of a monumental screw-up, the entire school apparatus is put under the microscope.
My school recently had the former: a review based on our supposed lack of progress in ELLs and Special Education. Even so, the entire school was mobilized. Reams of assessment reports, data reports, student diagnostic reports, spreadsheets, graphs, charts, lesson plans, rubrics, student work, teacher evaluations, curriculum maps—all of it gets collected into a series of massive binders. These binders are designed for a dual purpose: to provide adequate evidence that we’re doing our job even without making educational targets; or to overwhelm the reviewer with work to the point that they just assume the school’s doing a thorough job without cracking open these three-ring behemoths.
The review rarely stays at the binder stage, however.
After a day of sifting through numbers and charts, day two features the classroom visits. In theory, the visits are supposed to be “random.” Therefore, every class is spruced up, cleaned up, papered with new charts and new student work (with appropriate rubrics and task cards). In practice, however, since the visits target certain populations, it is often the classes with said populations that get visited—and are often prepped ahead of time.
The result is a series of visits into model classrooms in the vein of Disney World’s World of Tomorrow rides. Bulletin boards stand as monuments, replete with student work, carefully labeled with comments, a rubric and task card (never mind the mind-numbing hours spent preparing these works ahead of time). The charts around the room carefully detail every minute movement in the academic process (usually after re-doing and sprucing up charts the teacher has used for years).
Even the procedures need procedures—such is apparently a “well developed” classroom. I’m surprised there are no charts detailing how to effectively utilize the lavatory (Lord knows they can use it).
The children sit in their seats (the more impossible ones are either conveniently absent or not-so-subtly convinced/cajoled/threatened to behave) and stage a performance worthy of Broadway. While they are listless, lethargic or outright defiant most of the year, the SQR somehow summons articulate, well-mannered, enthused children gleefully engaging in one of your “A” lessons (a little coaching certainly helps).
All the while, the reviewers (some blasé, some meticulous, and even a few true believers of brutal zeal) ask the teachers and children questions about their learning, mostly to figure out if the little whelps are actually paying attention. It’s a scream when they go off-script. One year, a boy was asked his favorite subject. He replied, “Home.”
Some of the questions teachers get can be downright insulting. One teacher was asked to show her lesson for that day, then to point out the lesson’s objective (which is clearly marked in most lesson plan books, a fact that apparently went over this reviewer’s head). After pointing to the objective in her plan, she was asked, “Why is that the objective?”
Hmmm…how about because that’s what the phony-baloney curriculum map they had to make (and could barely read) says to do.
Even the tone of that question—and I wasn’t present to hear it—suggests that the reviewer saw not academic professionals but a pack of chimps that still needed Jane Goodall to teach them how to poke at anthills with a stick.
In the end, the review usually comes with a long checklist of positive points and things to work on (NEVER negative points, because the word “negative” doesn’t exist in a well-developed classroom *vomit*). The “things to work on” rarely carry much substance; instead they focus on how to create MORE useless paperwork to maintain the appearance of learning.
Sometimes, they even suggest returning to methods and theories that were discarded during the LAST quality review.
After coming out of the subsequent scotch fog, I had some serious questions about the SQR process. Why the reams of paperwork? Why collect data that often says little and means even less? Why ask questions of children, who are notoriously honest, even in the best schools?
Most importantly…how does a quality review help children learn more?
I’m looking really hard, and I haven’t the foggiest.
The window dressing, the bulletin boards, the charts—they are only as effective as the teacher behind them. Any trained animal can clean up well enough to perform a show.
The “evidence” question doesn’t wash with me. Most of a teacher’s best work is done without a ream of paperwork or forms to complete. Effective professionals know what data works and what data is simply filler for a spreadsheet. More data doesn’t necessarily mean improvement.
Thus, if reviewers are really looking for reams of evidence, are they viewing teachers as professionals? Or are teachers more like Goodall’s chimps, according to the state?
Maybe that’s how the education reform crowd, the NCLB nancies and TFA fops, views all of us who chose education as a calling: as a pack of trained animals that can’t be trusted to make intelligent decisions and need a zookeeper to collect the feces.
Which leads back to the earlier quote. My friend was absolutely right. The quality review can’t scratch the surface of what a teacher does in the classroom. Yet the very existence of such a review undermines the status of professionals whose talents and achievements far exceed any binder of data.
So if the state continues to treat me like a chimp…well, let’s just say chimps are marksmen with their bowel movements.