Instructional Rounds: Not Just a Repackaged Walkthrough
They’re the newest thing out of Harvard: Instructional Rounds. But aren’t they just walkthroughs repackaged?
I think not. After years of working with those responsible for developing the instructional rounds protocol and facilitating a number of iterations across the country, I can say unequivocally that walkthroughs and rounds are not the same, nor should they be. Both procedures are designed to spark improvements in student learning and they can easily supplement one another, but it’s important to be clear on why either is a good use of time.
To start with, walkthroughs require application of one’s expertise. Generally conducted by an administrator, they’re often used as an implementation audit, checking teacher behaviors against a predetermined list of expected best practices. Many schools and districts now use an electronic device (iPads are increasingly popular) with pre-loaded best practice indicators that collect centralized data for trends and aggregated patterns of practice. These help a principal determine whether the professional development underway is having the anticipated impact on practice. Instant feedback to the teacher is often given and becomes an expected part of the process. And although most principals are deliberate about accentuating the positive, in reality the walkthrough is designed to look for what is not happening with the implication that the principal will then take action to make sure that it does. The bottom line is that walkthroughs rely on an expert observer, applying that expertise to assess and guide instructional practice. The teacher is the focus of attention.
In contrast, the instructional rounds process is all about learning, adult learning specifically, to understand something previously unknown. Participants are asked to literally “let go” of their expertise as they learn how to collect descriptive and nonjudgmental data about classroom dynamics: the relationship of the teacher and the student in the presence of content with a deliberate initial focus on the student. Rounds participants are asked to first “look down” to understand what students are actually doing in order to draw conclusions about what they are really learning. The teacher is an important part of the equation, but only in relation to the student, the content, and the task at hand.
Why is this important? I’ve found that it is one thing to learn to assess teaching practice, but that this is often done in isolation from student learning. An example may help. In a recent rounds visit, a principal was visibly upset that the “teacher was doing everything right, but the kids weren’t engaged.” It’s all well and good to “know good instruction when you see it,” but it’s an entirely different ballgame (or can be) to consider the causal effects of teaching on learning. In a recent set of walkthroughs I accompanied the principal and central office supervisors to two classrooms. They were delighted to report that each teacher received the allotted checks on the list of instructional framework indicators, but when we turned the conversation to what students were actually doing, it became an entirely different discussion. In one classroom students were carefully memorizing and replicating algorithms. In the second classroom students were working with an essential question, exploring a complex idea through research, discussion, and literature. Changing the conversation to what students were actually doing (and therefore learning) shifted the group’s thinking about instruction as well.
Another key difference is in the actual collection of data itself. Rounds participants will engage in a collective analysis of what they observed, working with evidence prior to making any claims. Instead of analyzing teacher behaviors in the moment, rounds participants are asked to literally “write what they see” with a level of specificity that will add to the subsequent analysis and that is absent of judgment. This can be an almost excruciating shift for administrators and, in some cases, even more difficult for teachers who engage in rounds networks. The descriptive data, however, exposes differences in perceptions and builds a common vision of what is meant by terms that are often used casually. “Engagement” to one educator may be seen as “on task” to another.
Rounds involve a network of colleagues who collectively make sense of what they observed and draw conclusions collaboratively. The learning in a rounds network is multi-layered and non-linear; it is admittedly messy and cumulative, as there is never a “right answer” to be found at the end of the day. It can be very uncomfortable for participants who may have preconceived ideas flipped upside down, as happened in the earlier example, where similar teacher behaviors led to dramatically different student outcomes. Network members walk away with new personal insight about the learning process itself and, more often than not, new questions about teaching and learning. At the same time, the collaborative nature of the network begins to drive a shared understanding about teaching and learning, a common language to support collaborative learning, and a collegial culture that builds improvement across a system. And the school hosting the visit will itself come away with concrete ideas about the next level of work for a problem of practice that, if resolved, will move one or more of their improvement strategies forward.
The notion of “problems of practice” is the third component that sets rounds apart from walkthroughs. The rounds process is intended to support an existing improvement strategy that may be stalled. Rather than a focus on individual teachers, it informs work that is collectively important to the school, a place where they may be stuck, where there is genuine question about why the anticipated results aren’t as expected. Framed as a problem of practice, it guides the data collection in classrooms and literally provides many pairs of eyes to shed light on something that is genuinely puzzling to the host school. In some cases, the problem of practice is district-driven and the rounds visits inform strategy implementation across schools.
So which process is right for you and your school or district? I have a bias that rounds should only be done if they will inform important work that is already underway. Too often these kinds of activities become ends in themselves and further challenge the time constraints of over-worked teachers and administrators. The three components of rounds (collegial network learning, classroom observation, and a problem of practice tied to an existing improvement strategy) work together in a powerful synergy to support system-wide improvement. While both processes, walkthroughs and instructional rounds, are intended to improve teaching and learning, a walkthrough intends to change the practice of the observed. My experience is that instructional rounds will change the practice of the observer. For those who are ready to invest in collegial learning as a strategy to improve a system at scale, instructional rounds will leave an indelible mark upon your system.
For more information about the instructional rounds process, see Instructional Rounds in Education: A Network Approach to Improving Teaching and Learning by City, Elmore, Fiarman, and Teitel (Harvard Education Press, 2009), or contact me. I'd be happy to help you.