Hi, I hope this finds you well. With the end of the mask mandate in schools in Connecticut, it has been a crazy time, but that pales in comparison with all the other crazy stuff that’s been going on, like the invasion of the second largest country in Europe by its neighbor. I visited Ukraine when it was still part of the Soviet Union, and it is very strange and depressing to see the names of cities that I have visited superimposed on film of tanks in the streets and rockets exploding and a woman giving sunflower seeds to soldiers.
So it has sometimes been a bit of a struggle to stay focused on my day job, but I am fortunate to work with wonderful people on projects that excite me, so that’s what I write about. In particular, this Coaching Letter is about making the shift from thinking about strategic planning as a stage in the work of leadership that comes before implementation, which in turn comes before measuring outcomes, to thinking about strategic planning as a circular process of constant experimentation and iteration. I have a lot to say about this—I’ve been thinking about it for years, but it seems like such a big, gnarly issue that I haven’t actually written about it much in these Coaching Letters. But I’ve been working on several workshops, including the Acceleration NIC and a cross-district gathering in Derby next week, so this is a sort of meta look at making the shift from linear to circular planning. More accurately, it’s a guide to what’s been written that I have found useful for explaining the difference in approaches. I am writing with the Derby workshop participants in mind, but I hope it will inspire others to find out more and start experimenting.
Circular planning is another way of talking about continuous improvement. I tend to use the term “improvement science” because that’s the phrase we’ve been using lately to describe the set of tools we’ve been applying in many of our current projects and contracts. And it is a distinct technology. But in ethos and stance it is not so different from other approaches that acknowledge that the work of improving outcomes for students is incredibly difficult and complex, and that therefore we have to learn our way to improvement—the right thing to do with all educators in every context is unknown and unknowable, and therefore everything is an experiment. (I had stickers made for the participants in the Derby workshop that say that.) So any approach that advocates understanding systems, developing a theory of action, involving the people closest to the mechanism for improvement (in the case of education, teachers and students), designing and trying out changes, measuring results, and trying again falls under this umbrella. And while there are multiple technologies, they are all varieties of continuous improvement: Total Quality Management (TQM), action science, design thinking, Toyota Kata. If you want a historical explanation that goes all the way back to Sir Francis Bacon, you should read The History and Evolution of DevOps by Tom Geraghty.
Another way of thinking about this that may be more compelling to some leaders comes from Meyers and VanGronigen (2021) in Ed Leadership: The Best Laid Plans Can Succeed:
In education, improvement planning has been viewed mostly as something strategic rather than operational—a focus on “big” over “small.” Although strategic planning is important, too much reliance on it can result in bureaucratic compliance, where broad elements of planning, like visions and objectives, are clear, but the specific action steps needed to achieve them are not.
They make the case that focusing on what can be done in 90-day blocks of time creates a plan that is more realistic, more readily operationalized, and more likely to generate data that can be used to inform the next 90-day plan. This is another article I wish I’d written, because we have been using 90-day plans since I first joined the Center—but they have really good research and advice; it’s well worth reading.
There’s a really important point that I want to make up front. I definitely have a strong opinion that some methods of planning are more powerful than others. However, I am also aware that it is not the tools and techniques themselves that make a difference. I know a lot of leaders I respect who have always used a traditional approach to planning, but it works for them because of the way they lead rather than the tools and techniques they use. This is not a recent finding. For example, see this from the abstract of a 1995 study of Total Quality Management (Powell, 1995), the parent of the Improvement Science I’ve been spending so much time on:
The findings suggest that most features generally associated with TQM—such as quality training, process improvement, and benchmarking—do not generally produce advantage, but that certain tacit, behavioral, imperfectly imitable features—such as open culture, employee empowerment, and executive commitment—can produce advantage. The author concludes that these tacit resources, and not TQM tools and techniques, drive TQM success, and that organizations that acquire them can outperform competitors with or without the accompanying TQM ideology.
And then a very similar point is made in the latest issue of Educational Leadership (Mehta, Yurkofsky & Frumin, 2022, Linking Continuous Improvement and Adaptive Leadership). In fact, there are so many useful points in this article that I had a hard time choosing what to quote, but I’m going with:
Successful examples of continuous improvement processes require developing the associated dispositions, rather than following a set of steps. As we interviewed the designers of the methods, one after another expressed that the purpose of the process or steps was as a way to anchor these dispositions—such as disciplined inquiry, imaginative thinking, and reflective practice—not to be an end in themselves… some of the most successful examples we’ve seen of the use of the continuous improvement processes come when people embrace the dispositions rather than the steps. Perhaps the most important disposition is a commitment to disciplined, reflective inquiry, drawing on multiple sources of data and evidence.
Therefore, districts and states that require plans in a particular format to try to shoehorn schools and districts into using continuous improvement practices are barking up the wrong tree. A template is not a great lever for changing practice.
Again, I know people who are using very “traditional” strategic plans and make them work just fine. And I know people who are trapped, and trap others, in very compliance-driven approaches to tools like Data Wise and PLCs that aren’t working for anybody. So keep that in mind as you read on.
1. If your current strategic plan/strategic planning process isn’t working, don’t blame the people for whom it is not working. I wrote an article for the Kappan a few years ago called An Improvement Plan Is Not Enough—You Need a Strategy. This article contains some of the best lines I’ve written:
“Given how often school improvement plans falter and fail, we should at least consider the possibility that our plans themselves—and not the millions of educators struggling gamely to implement them—are to blame”;
“Many districts require that school plans include SMART goals—which can easily send the message to principals that the priority is to make sure their plans include goals that fit the SMART definition, not to make sure that the plan is, in fact, a smart one”;
“Too many schools and districts think of improvement planning as an exercise in compliance because the format used encourages completion at the cost of contemplation, and accountability at the expense of capability.”
I know it’s tacky to brag about one’s own writing, but at least I don’t do it often. And then Jennie Weiner and I wrote The Strategy Playbook to provide an alternative way of thinking about strategic planning. But while it touches on creating routines as the backbone of continuous improvement—as does another Kappan article that Richard and I wrote, Improvement Routines—it doesn’t get deep into learning from implementation to fuel improvement…
2. Don’t assume you already know all you need to know in order to improve. I know that seems obvious, but the premise of most strategic plans is exactly that: if we do all the things that are listed here, then we will meet our goals. In fact, there are some things you cannot know until you implement—which is one of the main points in the Edmondson & Verdin (2017) article I quote from so often, Your Strategy Should Be a Hypothesis You Constantly Adjust. They use the term “strategy as learning”: “An alternative perspective on strategy and execution… conceives of strategy as a hypothesis rather than a plan… We call this approach ‘strategy as learning,’ which contrasts sharply with the view of strategy as a stable, analytically rigorous plan for execution in the market. Strategy as learning is an executive activity characterized by ongoing cycles of testing and adjusting, fueled by data that can only be obtained through execution.” There’s another article about strategy as learning that is directed at philanthropic organizations, Eyes Wide Open, that makes many of the same points—“Much of the knowledge needed to support strategy can arise only during implementation”—but goes into even more detail about the traps that organizations fall into when they over-simplify the link between strategy and outcomes.
3. Re-conceptualize your planning as short-term continuous improvement. Maybe start with the Meyers and VanGronigen article I quoted from earlier: The Best Laid Plans Can Succeed. And if you want help getting started, or need more resources, or want to talk to others in the same boat, let me know and I will do my best to assist.
4. There are tons of tools available for using data acquired through implementation to improve your strategy. Here are a few of them:
- Five ways to create a continuous learning culture
- Going the distance with improvement science in education
- NYCDOE Improvement Science Handbook
- 90-Day Cycle Handbook
- Learning and practicing continuous improvement: Lessons from the CORE Districts
- What’s your theory?
- Driver diagrams
- What we talk about when we talk about ‘root cause’: It’s a lot more nuanced than you might think
- Mike Rother’s homepage—he is the author of Toyota Kata, see below.
And then there are many very useful books:
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard Education Press.
Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: a practical approach to enhancing organizational performance. John Wiley & Sons.
Patton, M. Q. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.
Rother, M. (2009). Toyota Kata: Managing people for improvement, adaptiveness and superior results. McGraw-Hill.
However you currently think about continuous improvement, I hope this gives you some ideas and tools to delve a little deeper. As always, please let me know if there is anything I can do for you. I really appreciate the people who take the time to respond to these Coaching Letters—it’s very helpful to know what’s been useful, and I benefit greatly from the suggestions and feedback. Best, Isobel
Isobel Stevenson, PhD PCC
If someone forwarded you this email, click here to subscribe to The Coaching Letter
Please follow me on Twitter: @IsobelTX
Author with Jennie Weiner of
The Strategy Playbook for Educational Leaders: Principles and Practices