The costs of predictability

In theory, there is no difference between theory and practice. But, in practice, there is. - Jan L. A. van de Snepscheut

Doubt is uncomfortable, certainty is ridiculous. - Voltaire

We demand rigidly defined areas of doubt and uncertainty! - Douglas Adams

Our projects' leadership, sponsors, and customers understandably want predictable performance. Such reliability frees attention for higher-value strategies and tactics, once the underlying means of production can be counted on to deliver. Yet Gary Hamel, author of The Future of Management, suggests that the more we try to achieve such predictability, the more elusive it becomes:

There is another, more general limitation to our shopworn management principles. While ostensibly they serve the goal of operational effectiveness, they minister to a need that is perhaps even dearer to top management's heart: predictability. One can fairly describe the development of modern management as an unending quest to regularize the irregular, starting with errant and disorderly employees. Regularity (achieved through standards, controls, plans, and procedures) makes management's job easier. It helps executives recognize and correct deviations when they occur. It allows business leaders to make predictions and then stick to them. It reduces the chance that middle managers will be caught out by their superiors. In other words, it helps the bureaucratic class maintain its self-comforting illusion of control. In the bible of modern management, "no surprises" is the first commandment.

Increasingly, though, we live in an irregular world, where irregular people use irregular means to produce irregular products that yield irregular profits. For example, while one can imagine a highly disciplined product development process yielding the "son-of-iPod," a line extension within Apple's family of iconic music players, it's unlikely that a rigid, mechanistic process would have ever hatched the iPod itself. In the 21st century, regularity doesn't produce superior performance... Of course, deviations from the norm can destroy value, as when, for example, they impair product quality. Nevertheless, an organization that worships regularity with a single-minded devotion is likely to have trouble distinguishing between value-destroying irregularities and value-creating irregularities. The risk is that management systems designed to promote alignment and consistency end up culling out variations of all sorts - the good and the bad. With exactitude and invariability fast losing their power to generate above-average returns, companies are going to have to learn to love the irregular.

It is popular to believe that increased attention and investment in planning and discipline will pay off in reduced variation in project performance. But in the real world, achieving these outcomes is nearly always more difficult than it appears with the benefit of hindsight. In his book The Checklist Manifesto, Atul Gawande describes the nature of the challenges in complex environments, and how they differ from problems that are merely complicated:

Two professors who study the science of complexity—Brenda Zimmerman of York University and Sholom Glouberman of the University of Toronto—have proposed a distinction among three different kinds of problems in the world: the simple, the complicated, and the complex. Simple problems, they note, are ones like baking a cake from a mix. There is a recipe. Sometimes there are a few basic techniques to learn. But once these are mastered, following the recipe brings a high likelihood of success.

Complicated problems are ones like sending a rocket to the moon. They can sometimes be broken down into a series of simple problems. But there is no straightforward recipe. Success frequently requires multiple people, often multiple teams, and specialized expertise. Unanticipated difficulties are frequent. Timing and coordination become serious concerns.

Complex problems are ones like raising a child. Once you learn how to send a rocket to the moon, you can repeat the process with other rockets and perfect it. One rocket is like another rocket. But not so with raising a child, the professors point out. Every child is unique. Although raising one child may provide experience, it does not guarantee success with the next child. Expertise is valuable but most certainly not sufficient. Indeed, the next child may require an entirely different approach from the previous one.

In managing our projects, as in raising our children, the effects of our decisions are not immediately apparent. As a result, the consequences of poor performance are often difficult to trace back to the underlying decisions and circumstances that produced them. If you are a cook baking the same recipe over and over, the special causes of variation arising from differences in ingredients, waste in processing, and poor execution can be isolated and controlled over time. In such situations, the cook is reproducing meals from an original recipe, not inventing a new meal each time. Similarly, within manufacturing environments, statistical process control is a legitimate strategy, but it relies on consistently applying the same manufacturing recipe. The book Profit Beyond Measure, discussing operations at Ford's Highland Park plant, describes how this plays out in such environments:

Workers in each independent part of the system... cannot receive immediate feedback from workers in the next operation. Hence, many errors and defects either go undetected, or are detected and intentionally ignored. Errors and defects must be remedied through rework at a later time, often at great expense.

As Bob Racynski reports, eliminating unwarranted variation requires discipline (a brief sketch of the underlying control-chart arithmetic follows this list):

  1. Data must be collected from homogeneous sources
  2. The assignable causes of variation must be controllable
  3. Detailed process steps must be consistently applied to acquire data over operations
  4. A reliable means to normalize and aggregate data must be available across sampling
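To make the distinction between routine variation and special-cause variation concrete, here is a minimal sketch of an individuals (XmR) control chart, the basic tool statistical process control relies on. The cycle-time figures are invented for illustration and do not come from any of the sources quoted here; the 2.66 constant is the standard XmR factor for deriving natural process limits from the average moving range.

```python
# Minimal XmR (individuals) control chart sketch; data is illustrative only.

def xmr_limits(samples):
    """Return (center, lower, upper) natural process limits for a series of
    individual measurements, using the average moving range."""
    center = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant that converts the average moving
    # range into three-sigma-equivalent limits.
    spread = 2.66 * avg_mr
    return center, center - spread, center + spread

def special_causes(samples):
    """Flag points outside the natural process limits: candidates for
    special-cause (assignable) variation rather than routine noise."""
    _, lower, upper = xmr_limits(samples)
    return [(i, x) for i, x in enumerate(samples) if x < lower or x > upper]

if __name__ == "__main__":
    # Hypothetical cycle times (in days) for a repeated, recipe-like task.
    cycle_times = [4.1, 3.9, 4.3, 4.0, 4.2, 7.8, 4.1, 3.8, 4.2, 4.0]
    print(xmr_limits(cycle_times))
    print(special_causes(cycle_times))  # the 7.8 is flagged; the rest is noise
```

Run against these hypothetical numbers, the lone 7.8-day outlier falls outside the computed limits and is flagged as a candidate special cause; everything else is treated as routine noise. The arithmetic is trivial, which is precisely why the preconditions above matter: the results mean something only when the samples really do come from one homogeneous, consistently executed process.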

It is unusual for Racynski's preconditions to be satisfied across a large collection of projects in a portfolio without sustained investment, passionate leadership, and focused attention over an extended period of time. This difficulty has many root causes. Individual performance varies widely. Communication problems increase non-linearly as a team's or organization's size expands (with n people there are roughly n(n-1)/2 potential communication paths). Work arrives at varying rates and in varying sizes. Projects must orchestrate complex actions for multiple (often changing) targets, across many individuals and groups, across the entire portfolio, and throughout each project's lifecycle. Coordinating these actions so they converge on multiple objectives is not unlike navigating in four dimensions, a problem space similar to landing on an aircraft carrier. To make things worse, this navigation must be performed with unreliable instrumentation. In his book Why Most Things Fail: Evolution, Extinction and Economics, Paul Ormerod describes it this way:

In order to control a system—any system, whether an economy, a biological system or a machine—we need to be able to do two things: first, make forecasts which are reasonably accurate in a systematic way over time; and second, understand with reasonable accuracy the effect of changes in policy on the system one is trying to control. Unless policy-makers know with reasonable confidence what state the system is likely to be in at some point in the future, it is not possible to say what action is required now in order to bring about a more desirable outcome. And unless the authorities understand the impact of their actions, it is not possible to know what should be done in order to bring about any desired outcome.

The scope for failure abounds. Nevertheless, many people continue to believe that, in order to design effective policies, all we need to do is collect more information and statistics, analyze the data and produce a plan which will solve whatever problem confronts us. But there are deep underlying reasons for the inability to plan and control outcomes successfully... The historical data which we have is dominated by noise rather than by signal and contains very little true information.

And this is on a good day. When we must develop an unprecedented solution for an ambiguous need, as is typical in engineering efforts, we are not just creating a new dish; we may have to invent an entire new cuisine (while our dinner guests aren't even sure what they are hungry for).

While engineering projects provide many opportunities to attack waste, we cannot eliminate variation the way it can be eliminated in manufacturing settings, because the surprises engineering projects produce are the very basis of innovation. In his book What the Dog Saw, Malcolm Gladwell describes two types of situations that must be addressed as we attempt to plan the many projects in such a portfolio:

The national-security expert Gregory Treverton has famously made a distinction between puzzles and mysteries. Osama bin Laden's whereabouts are a puzzle. We can't find him because we don't have enough information... The problem of what would happen in Iraq after the toppling of Saddam Hussein was, by contrast, a mystery. It wasn't a question that had a simple, factual answer. Mysteries require judgments and the assessment of uncertainty, and the hard part is not that we have too little information but that we have too much.

The distinction is not trivial... If things go wrong with a puzzle, identifying the culprit is easy: it's the person who withheld information. Mysteries, though, are a lot murkier: sometimes the information we've been given is inadequate, and sometimes we aren't very smart about making sense of what we've been given, and sometimes the question itself cannot be answered. Puzzles come to satisfying conclusions. Mysteries often don't.

When we lay out plans for our work (and especially when we are planning work we haven't done before), we often don't know whether we're dealing with a puzzle or a mystery. We usually have some information... sometimes even too much of it! But we often don't know how much of that information we should believe or use in developing our plans. We also often don't know how to synthesize it into commitments we can stand behind. Worse, we may not even know how long it will take to come up with better information. It's like trying to estimate how much gas and time it will take to travel from one coast to the other when we haven't yet selected a mode of transportation or a route, or determined how many people and how much cargo will be transported. If we agree to travel by train, according to established schedules (each trip like copying a recipe), we will get better predictability. But even then, delays persist... and every so often, we'll find ourselves on the long tail.
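To see why that long tail matters, here is a minimal sketch in Python of the cross-country estimating problem above. The legs, delay probability, and delay sizes are invented for illustration; the point is only the shape of the resulting distribution.

```python
# Monte Carlo sketch of a multi-leg trip: routine variation on every leg,
# plus the occasional large disruption. All numbers are made up.
import random

def simulate_trip(legs, delay_prob=0.15, mean_delay=3.0):
    """Sum the leg durations; each leg may suffer a rare, sizable delay."""
    total = 0.0
    for low, high in legs:
        duration = random.uniform(low, high)        # routine variation
        if random.random() < delay_prob:            # occasional disruption
            duration += random.expovariate(1.0 / mean_delay)
        total += duration
    return total

if __name__ == "__main__":
    random.seed(1)
    legs = [(2, 3), (4, 6), (1, 2), (5, 8)]         # hypothetical legs, in hours
    totals = sorted(simulate_trip(legs) for _ in range(10_000))
    print("median:         ", round(totals[len(totals) // 2], 1))
    print("95th percentile:", round(totals[int(len(totals) * 0.95)], 1))
    print("worst observed: ", round(totals[-1], 1))  # the long tail
```

The median looks comfortable, while the 95th percentile and the worst observed runs tell a very different story about what can safely be promised; committing to the median is how we end up, every so often, out on that tail.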

As Christopher Alexander notes in his classic, Notes on the Synthesis of Form:

To match the growing complexity of problems, there is a growing body of information and specialist experience. This information is hard to handle; it is widespread, diffuse, unorganized. Moreover, not only is the quantity of information itself by now beyond the reach of single designers, but the various specialists who retail it are narrow and unfamiliar with the form-makers' peculiar problems, so that it is never quite clear how the designer should best consult them. As a result, although ideally a form should reflect all the known facts relevant to its design, in fact the average designer scans whatever information he happens on, consults a consultant now and then when faced by extra-special difficulties, and introduces this randomly selected information into forms otherwise dreamt up in the artist's studio of his mind. The technical difficulties of grasping all the information needed for the construction of such a form are out of hand - and well beyond the fingers of a single individual.

At the same time that the problems increase in quantity, complexity, and difficulty, they also change faster than before. New materials are developed all the time, social patterns alter quickly, the culture itself is changing faster than it has ever changed before. In the past - even after the intellectual upheaval of the Renaissance - the individual designer would stand to some extent upon the shoulders of his predecessors. And although he was expected to make more and more of his own decisions as traditions gradually dissolved, there was always still some body of tradition which made his decisions easier.

Now the last shreds of tradition are being torn from him. Since cultural pressures change so fast, any slow development of form becomes impossible. Bewildered, the form-maker stands alone. He has to make clearly conceived forms without the possibility of trial and error over time. He has to be encouraged now to think his task through from the beginning, to 'create' the form he is concerned with, for what once took many generations of gradual development is now attempted by a single individual. But the burden of a thousand years falls heavily on one man's shoulders, and this burden has not yet materially been lightened. The intuitive resolution of contemporary design problems simply lies beyond a single individual's integrative grasp.

As a real-world, representative example of how such factors interact, disrupting the ability of the participants to realize their ambitions, consider the exchange below, drawn from the book Dreaming in Code, by Scott Rosenberg. It describes Mitch Kapor, founder of Lotus Development Corporation, as he and the members of his team struggle on the Chandler project. Kapor hardly lacked experience in project management: he had previously been the CEO of a highly successful, billion-dollar company, and was the designer of Lotus 1-2-3, one of the best-selling products of its day. The book was published when the Chandler project was 3 years into its development. It seems that books, unlike some software projects, have fixed schedules. When the book went to press, delivery of a product from the Chandler project was still not in sight; development continued for another 4 years. Although the project was finally released in 2008, it had missed any chance to gain traction in the marketplace. Instead, it morphed into an open-source hobby for a small development and user community, and Kapor moved on to other projects. Let's look at an excerpt from the book, in which Kapor and his team struggle to decide how to absorb the recent discovery that one of their team members was behind schedule and would be unable to deliver on their commitment:

Kapor, Lam, and Dusseault are assessing what this latest slippage means for their long-term plans. “Are we feature-driven or schedule-driven?” asks Dusseault. “It has to be a little bit of both,” Lam says. “In the real world, it’s always some of both,” Kapor agrees. “It’s pathological if it’s all one or the other. Our plan is that the feature side is dominant. Of course we’ll trim and adjust in iterative fashion. It’s like a binary star system in which one is bigger than the other. They both influence each other’s orbits.”

“Let me be the devil’s advocate for a moment,” Parlante says. “Always meeting plan is less important than having a high rate of progress and knowing where you’re at. If we water down the plan so that we meet our goals more, that could make us less ambitious, and we’d get less done.”

“People are bad at estimating the percent that’s done,” Dusseault suggests. “They’re better at days remaining.” Kapor says carefully, “Let me underscore that the point of tracking all this is to be able to coordinate, not to praise or blame.”

“More of the current process isn’t going to help Stuart finish,” says Dusseault. “It won’t fix the not-doneness.” “One of the pitfalls you can fall into,” Parlante says, “is the engineers say ‘I’m done!’ And then Aparna and the design team look at it and say, ‘What are you talking about?’”

In other words, the feature may “work,” but there are tons of bugs, and the interface is nothing like the design, and a mountain of labor remains. Ted Leung, who patches into these meetings over a conference room phone pod, speaks up: “The point of writing this stuff down is, we’re trying to steer. You can’t do that without some information. The question is, how much information do we need?”

If this were an isolated circumstance, we could just write it off to experience. But it's not that isolated - here's another project, described in the book "Barbarians Led by Bill Gates", co-authored by Marlin Eller. Eller was Microsoft's lead developer for the graphics engine used in Microsoft Windows from 1992 to 1995. The book depicts the disconnects between the grass roots and senior leadership that many other project participants will find hauntingly familiar:

There was a great disconnect between the view from the inside that my compatriots and I were experiencing down in the trenches, and the outside view . . . in their quest for causality [outsiders] tend to attribute any success to a Machiavellian brilliance rather than to merely good fortune.  They lend the impression that the captains of industry chart strategic courses, steering their tanker carefully and gracefully through the straits. The view from the inside more closely resembles white-water rafting. “Oh my God! Huge rock dead ahead! Everyone to the left! NO, NO, the other left!" ...Reality is rarely a simple story and is probably more like a Dilbert cartoon.

Why are our project experiences so easily turned into these caricatures? Friedrich Hayek, in his Nobel Prize lecture, "The Pretence of Knowledge", summarizes things this way:

Unlike the position that exists in the physical sciences, in economics and other disciplines that deal with essentially complex phenomena, the aspects of the events to be accounted for about which we can get quantitative data are necessarily limited and may not include the important ones. While in the physical sciences it is generally assumed, probably with good reason, that any important factor which determines the observed events will itself be directly observable and measurable, in the study of such complex phenomena... which depend on the actions of many individuals, all the circumstances which will determine the outcome of a process, for reasons which I shall explain later, will hardly ever be fully known or measurable. And while in the physical sciences the investigator will be able to measure what, on the basis of a prima facie theory, he thinks important, in the social sciences often that is treated as important which happens to be accessible to measurement. This is sometimes carried to the point where it is demanded that our theories must be formulated in such terms that they refer only to measurable magnitudes. It can hardly be denied that such a demand quite arbitrarily limits the facts which are to be admitted as possible causes of the events which occur in the real world.

Our minds thus fool us (and our customers and leaders) into believing that we can understand things by looking backwards in time and draw meaningful insights from experiences we think are similar to what lies ahead. We then attempt to use those experiences to shape how we believe the world should work as we move forward. But our ability to accurately recall past events is quite limited. This lack of information unfortunately doesn't slow down our rewriting of history; we remain confident in our beliefs and ignorant of our biases. Unconstrained, our minds fill in the gaps with extrapolations consistent with the legends we've unconsciously chosen to weave for ourselves. And so, even when we recognize our limitations, we cannot buy much insurance against these hazards - the pressures of our businesses drive us to accept more risk than we are comfortable with, swallow hard, and hope for the best.

Discovering and focusing on the right information helps to strike the needed balance across desired features, partially implemented abstractions, dynamically changing resource capacity and capability, and perceptions of stakeholder interests, all in the face of fuzzy data, risks, and unknowns. This is a very difficult problem, made even more challenging when additional constraints, such as fixed schedules or budgets, are introduced. And then, when we fail, we revise our judgments of events after the fact. As a result, we forget last year's failures and become confident that next time we will do even better than we think we did last time. Gladwell describes the psychologist Baruch Fischhoff's research on such behavior:

Fischhoff calls this phenomenon "creeping determinism"--the sense that grows on us, in retrospect, that what has happened was actually inevitable--and the chief effect of creeping determinism, he points out, is that it turns unexpected events into expected events. As he writes, "The occurrence of an event increases its reconstructed probability and makes it less surprising than it would have been had the original probability been remembered."

Here is the predictability that truly can be realized: projects will take longer, consume more resources, and fail to deliver on their original intentions. The only question for every project team is what to do about this inevitability. Ironically, predictability is achieved by constantly redefining what we originally intended, adjusting the experiment to match our results... all a far easier task when our histories are oral rather than written!

Some say the answer is to embrace incremental approaches like agile. Some project environments are indeed amenable to such approaches. But other development efforts (for example, creating a new, complex product within a fixed schedule and budget) are far more like making a difficult mountain climb over unfamiliar terrain, under demanding conditions, than like taking a train across the country. The fixed resources, increased pressure on relationships, and hidden technical debt out of the gate mean you had better plan your provisions for a longer climb than you hope to make, and stay flexible as you proceed, because you will inevitably encounter Murphy's Law along the way.
