Thick as capability bricks


Figure 1

They constructed ladders to reach to the top of the enemy's wall, and they did this by calculating the height of the wall from the number of layers of bricks at a point which was facing in their direction and had not been plastered. The layers were counted by a lot of people at the same time, and though some were likely to get the figure wrong  the majority would get it right .... Thus, guessing what the thickness of a single brick was, they calculated how long their ladder would have to be. - Thucydides, History of The Peloponnesian War.

Successful engineering endeavors discover worthwhile opportunities, select the precious few from among them, and adequately plan their realization and sustainment. As these endeavors look into the future, their crystal balls become ever more hazy. The underlying circumstances on which the business forecasts in those time horizons are based are filled with fluffy clouds of uncertainty, rather than executable commitments to worthwhile investments.

Within each planning horizon, businesses must reconcile their resource allocations and performance commitments and respond to emerging situations, much as an operating system allocates processing, I/O channel capacity, and memory resources within time slices and in response to priority interrupts. Once resource utilization passes 90%, businesses and operating systems alike become susceptible to thrashing, which leads to a state approaching saturation.
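The queueing-theory intuition behind that 90% threshold can be sketched with a simple model. This is an illustrative assumption (a single-server M/M/1 queue standing in for any constrained resource), not a model taken from the text:

```python
# A queueing-theory sketch of why ~90% utilization is a danger zone.
# Assumes an M/M/1 model (one server, random arrivals, exponential service);
# this specific model is an illustrative assumption.

def mm1_wait_factor(utilization: float) -> float:
    """Average time a job spends in the system, in multiples of one
    service time. For M/M/1: W = 1 / (1 - rho), where rho is utilization."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.0%}: work takes {mm1_wait_factor(rho):.0f}x "
          "its service time on average")
```

Under these assumptions, delay grows roughly tenfold between 50% and 95% utilization, which is why both businesses and operating systems feel saturated well before reaching 100%.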

Within a business's decision cycles, recognizable stages of product development can be used as granular proxies for planning across multiple product lifecycles. Once patterns of resource profiles have been recognized among the wall of constraints imposed by each environment, decision-makers can use these patterns to size the prize. Such an approach mirrors Thucydides' use of bricks as a basis for strategic planning during the Peloponnesian War.

Software-intensive products have always presented a dilemma for lifecycle planners. Since software's replication costs approach zero, code bases can be repurposed to apply to more and more situations over time. The limit of this replication is user acceptance, since new applications require user behaviors to fit into the structure and features offered by the system. Yet the flexibility that enables this replication also has a dark side: it can lead to endless improvement cycles which continue long after investment returns have diminished.

The evolution of each code base conforms to one of several patterns once it has been initially released. These patterns were first described by Meir Lehman in his seminal 1980 paper, Programs, Life Cycles, and Laws of Software Evolution. In it, he differentiated software according to whether it was being developed to:

  1. fulfill a specific functional specification (typically implementing proven algorithms)
  2. interact with hardware to deliver features of an embedded system
  3. adapt to the changing preferences of stakeholders working together in support of a collective mission

Each of these types imparts a distinctive demand on resources as a product is defined, built, and enters service. While each type must conform to its environment, the third type (which he named 'E-type', for evolutionary) is most susceptible to feature creep. This change is characterized by what we now know as Lehman's laws of software evolution. The dynamics which unfold in E-type software products are driven by the need to accomplish corrective, adaptive, perfective, and preventive maintenance, and to do so concurrently as enhanced features are incorporated.

Organizations often inherit responsibility for products with similar features. These products are used by different parts of the business, where they are expected to provide more efficient capabilities over time. These separate products require investments throughout their lifecycles. Over this period, the continuing obligations of these separate code bases can become both a financial burden and a constraint which obstructs other strategies the organization may wish to pursue.

Portfolios are collections of obligations with different value propositions and often unquenchable appetites for resources. These obligations consume resources that leaders may prefer to direct toward other purposes. To further complicate things, an organization's business models for managing its portfolio may take many forms, such as build-to-order, build-to-stock, or value-added-reseller strategies, each seeking to position the business to face into an uncertain future. Unfortunately, if a portfolio tries to manage this diversity based upon debate rather than principles, hard decisions often end up never being made, and nothing gets sunset. The lineage of the product lifecycles used in a product's evolution determines the frequency and thoroughness that can be expected from its future modernization.


Figure 2

Each stage of a lifecycle model has a natural staffing profile. The appetite for resources varies with time in each of these stages:

  • In the early stages of a project, requirements and architectures are shaped by the flow time it takes to discover and innovate, not by how much labor can be thrown at the problem.
  • In the middle stages of a project, there are often more tasks than people available to do them.
  • In a project's late stages, adding extra people is not likely to bring the project to completion any sooner. Triage and rework typically drive the schedule, especially when only a few people understand the system well enough to perform effective troubleshooting.

These interactions mean the ideal staffing profile for development projects takes the form of a Putnam–Norden–Rayleigh (PNR) curve. The mathematical characterization of this staffing profile has formed the basis of most estimating tools systematically used on software-intensive projects today. The PNR curve is especially important to consider in planning major initiatives, since the number of people is likely to correlate well with the code production rate and the number of defects which have been discovered and require resolution (see Figure 3).

Each phase (Pi) in a given profile can represent a different part of a product's lifecycle. For example, a software endeavor might invest in inception (P1) and elaboration (P2) in an initial project, separate iterations of construction (P3) and integration (P4) in two subsequent projects, and transition (P5) and sustainment (P6) as a recurring, annual investment. A capital-intensive endeavor might consist of authorization (P1) and preliminary design (P2) in an initial project, detailed design (P3) and build-out (P4) in a second project, and manufacturing (P5), depreciation, and support (P6) as a recurring, annual investment.

The size of these 'capability bricks' in effort and time can be estimated early on, and estimation should begin by identifying which business capabilities will be enabled by proposed features, when these capabilities will next be needed, how much benefit should be expected from having them, and how that value will be captured rather than squandered. As Figure 4 demonstrates, the relative size of these phases will vary with the type of product and the environment in which it will be used.
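The shape of the PNR curve can be sketched numerically. In the Norden/Rayleigh formulation, the staffing rate is m(t) = 2Kat·exp(−at²), where K is total effort and a = 1/(2t_d²), with t_d the time of peak staffing. The figures below (100 person-years of total effort, peak in year 2) are illustrative assumptions, not values from the text:

```python
import math

# A minimal sketch of the Putnam-Norden-Rayleigh staffing profile.
# K (total effort) and t_d (time of peak staffing) are assumed values.

def pnr_staffing(t: float, total_effort: float, peak_time: float) -> float:
    """Staffing rate m(t) = 2*K*a*t*exp(-a*t^2), with a = 1/(2*t_d^2)."""
    a = 1.0 / (2.0 * peak_time ** 2)
    return 2.0 * total_effort * a * t * math.exp(-a * t ** 2)

def cumulative_effort(t: float, total_effort: float, peak_time: float) -> float:
    """Effort expended by time t: E(t) = K * (1 - exp(-a*t^2))."""
    a = 1.0 / (2.0 * peak_time ** 2)
    return total_effort * (1.0 - math.exp(-a * t ** 2))

K, t_d = 100.0, 2.0   # 100 person-years of total effort, staffing peaks in year 2
for year in range(0, 7):
    print(f"year {year}: {pnr_staffing(year, K, t_d):5.1f} people, "
          f"{cumulative_effort(year, K, t_d):5.1f} person-years expended")
```

The table this prints shows the characteristic rise to a single peak followed by a long tail, which is why partitioning the curve into phases (P1 through P6) yields such different resource demands per phase.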


Figure 3

Since levels of staffing vary across these stages, forcing a flat staffing profile onto projects has consequences. Flat staffing profiles cause waste because some stages will be understaffed while others are overstaffed. When an organization adopts flat staffing profiles across many projects in its portfolio, its ability to track the performance of individual projects is seriously eroded, and it becomes challenging to drive team members to focus and finish.
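That waste can be quantified with a back-of-the-envelope calculation: size a flat team to the average of a Rayleigh-shaped demand curve and tally the idle time and the shortfall. The demand parameters and six-year horizon below are illustrative assumptions:

```python
import math

# A sketch quantifying the waste a flat staffing profile creates against a
# Rayleigh-shaped demand curve. The demand model, its parameters, and the
# six-year horizon are illustrative assumptions.

def demand(t: float, total_effort: float = 100.0, peak_time: float = 2.0) -> float:
    """PNR staffing demand m(t) = 2*K*a*t*exp(-a*t^2), a = 1/(2*t_d^2)."""
    a = 1.0 / (2.0 * peak_time ** 2)
    return 2.0 * total_effort * a * t * math.exp(-a * t ** 2)

# Sample demand monthly over six years.
times = [month / 12.0 for month in range(0, 6 * 12 + 1)]
demands = [demand(t) for t in times]
flat = sum(demands) / len(demands)   # flat team sized to average demand

# Monthly samples / 12 approximates person-years.
idle = sum(max(flat - d, 0.0) for d in demands) / 12.0       # overstaffed waste
shortfall = sum(max(d - flat, 0.0) for d in demands) / 12.0  # understaffed gap

print(f"flat staff of {flat:.1f} people idles ~{idle:.0f} person-years")
print(f"and leaves ~{shortfall:.0f} person-years of peak work uncovered")
```

Because the flat level is the average, the idle effort and the uncovered peak effort are equal in size, so the flat profile wastes on both sides at once: paying for idle hands in the early and late stages while starving the middle.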

This behavior is especially evident in the E-type legacy products in a portfolio. Let's look at how these dynamics play out. Each product is produced by an initial investment which was not adequate to satisfy all of the features expected of it or known to exist. This event is depicted by a dark green milestone labeled release 1. As with any code base, this new software also had quite a few bugs, and it takes a minor release or two to fix them; these are shown as the 1.1 and 1.2 releases. At that point, follow-on maintenance releases are performed to periodically incorporate enhancements, though each release must also address work that has traveled from prior releases. While the diagram only shows 1.3 and 1.4 minor releases, a series of major and minor releases likely follows this pattern.

During this period, the feature gap continues to grow, as the velocity of these 'dot' releases is lower than that achieved during initial development. Studies indicate these maintenance cycles make up between 50% and 90% of the total lifecycle cost of each product over time. An added effect emerges from the inevitable, self-reinforcing tendency toward over-commitment: without continuing triage of active code bases, organizations accept the consequences of code deterioration brought on by shortcuts, inattention, and fire-fighting, and an ever-diminishing budget delivers ever less business value. Such changes are likely to impact how many activities need to be done, and which technologies and development patterns can be employed in doing them.
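A quick arithmetic check shows how easily maintenance reaches the 50–90% range cited above. The annual maintenance ratios and service lives below are illustrative assumptions (costs are normalized so the initial build equals 1):

```python
# Back-of-the-envelope check of the 50-90% maintenance-share range cited in
# the text. Annual maintenance ratios and service lives are assumed values;
# the initial build cost is normalized to 1.

def maintenance_share(initial_cost: float, annual_maintenance: float,
                      service_years: int) -> float:
    """Fraction of total lifecycle cost spent on maintenance releases."""
    maintenance = annual_maintenance * service_years
    return maintenance / (initial_cost + maintenance)

for annual_ratio, years in ((0.15, 7), (0.20, 10), (0.40, 15)):
    share = maintenance_share(1.0, annual_ratio, years)
    print(f"{annual_ratio:.0%}/yr for {years} years -> "
          f"maintenance is {share:.0%} of lifecycle cost")
```

Even modest maintenance spending (15% of the build cost per year over seven years) already puts maintenance above half of lifecycle cost; heavier spending over a longer service life approaches 90%.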


Figure 4

When an organization has multiple code bases in active use with similar functionality, each of these code bases goes through a cycle of birth, maturation, aging, and decay over the course of its lifecycle. As this pattern plays out, most organizations find themselves at a major decision point, depicted on the timeline at T1. When products reach this point, their development organization and sponsors must decide whether or not to continue down the path of diminishing returns (and ever-widening unfilled needs). Making a pivot typically requires a more significant investment than a single modernization effort for the code base in question, so the time horizon at which changes become worthwhile moves further to the right than a mere delay would suggest; decay occurs quickly, and brittleness worsens rather than healing. Pursuing such modernization usually necessitates agreeing to reduced levels of support to free up resources. These modernization efforts are generally successful in returning measurable but localized benefits.

Modernization is usually used to update a product's architecture and enable a batch of new features. This investment may delay likely obsolescence, but it does little to stem the long-run appetite for feature incorporation in E-type products. When a portfolio has many of these products, the resource demand from these continuing obligations can eventually saturate a business's affordable capacity to keep its legacy software relevant, usually requiring the business to reallocate resources for modernization from other pursuits. It takes time for everyone to understand the need for the modernization and believe in its value proposition. Delivering on that promise requires accelerating the cadence of development builds and improving the visibility of the rate of progress, so that the underlying constraints on performance can be addressed in a deliberate fashion.

Once organizations realize that parallel maintenance activities are an unsustainable drain on available resources, they have reached the decision point shown in figure 1 at time T2. Consolidation investments are usually too large to be accomplished in one major thrust. This may be an opportunity to pool the resources working on separate but similar code bases and, by exploiting the benefits available from more modern technology, enhance development and maintenance velocity so that the capability gap can finally be closed. There is definitely a sweet spot to be had here, but hitting it will likely depend upon the robustness of the architecture to support both current and future development activities.

Attachment: FLM Lifecycle Patterns.pdf (41.02 KB)