“No matter how nitpicky, how fastidious a reviewer can be, he (she)’ll never, ever come close to what you actually do in your classroom.”
Some time ago, an acquaintance from the Department of Education, a science specialist, told me this when I was complaining about State Quality Reviews (SQRs).
As true as this is (and he should know—he actually does SQRs for the district), it still doesn’t explain how a two-day beauty pageant defines years’ worth of expertise and academic achievement.
In New York State, that’s exactly what an SQR does.
For those in the Neighborhood living outside New York, you may have something similar. They come under various names: reflections, reviews, audits, analyses. Here in the Empire State, these inspections are known as Quality Reviews, with the appropriate air of a Dickensian workhouse.
These official reviews are masked as “learning experiences” meant to provide “reflective feedback” on our practice. After you choke a little bit on your own vomit, you’ll realize their true purpose: to make sure schools do exactly what they’re supposed to do in the manner expected from the state education department—or at least to the whims of the pack of inspectors sent to your school.
The reviews come in multiple levels. The peer review, a less invasive but no less insidious device, involves groups of teachers and administrators rating each other. The educational equivalent of a gladiatorial contest, the peer review is usually less intense since fellow teachers and admins rarely want to crap on their own brethren.
The State Quality Review, or SQR, involves a pack of reviewers from a mix of different places, from the district to the DOE offices in Tweed to the state offices in Albany. A two-day affair, the SQR is usually triggered if a school suffers a drop in its rating or is rated a School in Need of Improvement according to No Child Left Behind.
Even this level of review comes in different degrees. For example, if your school dropped in ranking due to poor test scores in targeted areas, such as English Language Learners (ELLs) or Special Education Students, the review will most likely focus on the school’s work in that area. In the case of a monumental screw-up, however, the entire school apparatus is put under the microscope.
My school recently had the former: a review based on our supposed lack of progress in ELLs and Special Education. Even so, the entire school was mobilized. Reams of assessment reports, data reports, student diagnostic reports, spreadsheets, graphs, charts, lesson plans, rubrics, student work, teacher evaluations, curriculum maps—all of it gets collected into a series of massive binders. These binders are designed for a dual purpose: to provide adequate evidence that we’re doing our job even without meeting educational targets; or to overwhelm the reviewers with work to the point that they just assume the school’s doing a thorough job without cracking open these three-ring behemoths.
Rarely does the review stall at the binder stage.
After a day of sifting through numbers and charts, day two features the classroom visits. In theory, the visits are supposed to be “random.” Therefore, every class is spruced up, cleaned up, papered with new charts and new student work (with appropriate rubrics and task cards). In practice, however, since the visits target certain populations, it is usually the classes with said populations that get visited—and they are often prepped ahead of time.
The result is a series of visits into model classrooms in the vein of Disney World’s World of Tomorrow rides. Bulletin boards stand as monuments, replete with student work, carefully labeled with comments, a rubric and task card (never mind the mind-numbing hours spent preparing these works ahead of time). The charts around the room carefully detail every minute movement in the academic process (usually after re-doing and sprucing up charts the teacher has used for years).
Even the procedures need procedures—such is apparently a “well developed” classroom. I’m surprised there are no charts detailing how to effectively utilize the lavatory (Lord knows they can use it).
The children sit in their seats (the more impossible ones are either conveniently absent or not-so-subtly convinced/cajoled/threatened to behave) and stage a performance worthy of Broadway. While they are listless, lethargic or outright defiant most of the year, the SQR somehow summons articulate, well-mannered, enthused children gleefully engaging in one of your “A” lessons (a little coaching certainly helps).
All the while, the reviewers (some blasé, some meticulous, and even a few true-believers with Nazi brutality) ask the teachers and children questions about their learning, mostly to figure out if the little whelps are actually paying attention. It’s a scream when they go off-script. One year, a boy was asked his favorite subject. He replied, “Home.”
Some of the questions teachers get can be downright insulting. One teacher was asked to show her lesson for that day, then to show the lesson’s objective (which is clearly marked in most lesson plan books, a fact that apparently went over this reviewer’s head). After pointing to the objective in her plan, she was then asked, “Why is that the objective?”
Hmmm…how about because that’s what the phony-baloney curriculum map they had to make (and could barely read) says to do.
Even the tone of that question—and I wasn’t present to hear it—would suggest that the reviewer believed she was not among academic professionals but rather among a pack of chimps that still needed Jane Goodall to teach them how to poke at anthills with a stick.
In the end, the review usually comes with a long checklist of positive points and things to work on (NEVER negative points, because the word “negative” doesn’t exist in a well-developed classroom *vomit*). These “things to work on” rarely carry much substance; they focus instead on how to create MORE useless paperwork to create the appearance of learning.
Sometimes, they even suggest a return to methods and theories that were discarded during the LAST quality review.
After coming out of the subsequent scotch fog, I had some serious questions about the SQR process. Why the reams of paperwork? Why collect data that often says little and means even less? Why solicit answers from children, who are notoriously honest—even in the best schools?
Most importantly…how does a quality review help children learn more?
I’m looking really hard, and I haven’t the foggiest.
The window dressing, the bulletin boards, the charts—they are only as effective as the teacher behind them. Any trained animal can clean up well enough to perform a show.
The “evidence” question doesn’t wash with me. Most of a teacher’s best work is done without a ream of paperwork or forms to complete. Effective professionals know what data works and what data is simply filler for a spreadsheet. More data doesn’t necessarily mean improvement.
Thus, if reviewers are really looking for reams of evidence, are they viewing teachers as professionals? Or are teachers more like Goodall’s chimps, according to the state?
Maybe that’s how the education reform crowd, the NCLB nancies and TFA fops, views all of us who chose education as a calling: a pack of trained animals that can’t be trusted to make intelligent decisions and need a zookeeper to collect the feces.
Which leads back to the earlier quote. My friend was absolutely right. The quality review can’t scratch the surface of what a teacher does in the classroom. Yet the very existence of such a review undermines the status of professionals whose talents and achievements far exceed any binder of data.
So if the state continues to treat me like a chimp…well, let’s just say chimps are marksmen with their bowel movements.