Averting The Quiet Disaster
For all the lofty goals and complex requirements of an enterprise program, the battle will be won or lost in our daily management of throughput. Mistakes at that granular level can cascade to doom the entire initiative. Indeed, those are the mistakes that always do.
Assembling the Conceptual Model
The concepts we will be discussing in this post are complex and particular, with lots of theoretical mappings. I’ll be utilizing a library of terms and diagrams to help keep us on the same page along the way. For an overview of the broader concepts, however, please give a quick skim to: https://drewharteveld.medium.com/progressive-elaboration-or-a-box-full-of-bees-ec4cdead2b0e
Two Tiers of Scope Description
I manage scope in at least two tiers: Business Functions and Functionality Attributes.
Business Functions are requirements defined within the context of the business itself. They are the Big Ideas, not the details that enable those ideas in the real world.
A few examples:
- “Users will convert via a standard shopping cart process”
- “There will be a workflow that circulates details about the change in scope for leadership approval”
- “Placement Managers will query the database to find candidates who are a perfect match for each opportunity”
Business Functions are purposely lofty and concise — focused on motivation over detail. For those operating in an Agile environment, these may map to Epics, or they may be at a different hierarchical level.
Stakeholders not directly engaged in the development process will likely never penetrate below the Business Function level of detail, and that’s just fine.
Functionality Attributes represent the decomposition of Business Functions to an actionable level. Their mission is to bridge the gap between the lofty Business Functions and the more detailed attributes that will be necessary for actually planning product development work.
A few examples:
- “Estimated shipping charges will update automatically when the user provides their zip code”
- “Users can click any product in the Cart and be taken to that product’s Product Detail page”
- “Determination of which leader must approve the Change in Scope will be adjudicated according to a mapping of business unit, client, and total deal value”
- “Key financials of the Change in Scope will be bundled into a table view, a link for which will be embedded in the Change in Scope Record”
- “Candidate profiles will be mapped to the Relevant Skills taxonomy for easy searching within the database”
You get the idea — we’re down to the level of buildable/testable functions.
For those operating in an Agile environment, Functionality Attributes may equate to your User Stories. Or maybe not, again depending on the hierarchical structure you have put in place.
Team members embedded directly within the product development process will be focused on the Functionality Attributes every day.
It is critical to understand that the Business Functions and the Functionality Attributes do not represent different scope. They are just different perspectives — based mostly on granularity — for describing the same scope.
Practically speaking, these concepts are dependent upon each other. Without strong operations around Functionality Attributes, the Business Functions can never move beyond being cool ideas. Similarly, if we don’t have robust thinking around the Business Functions, we are liable to build Functionality Attributes in circles that gain no traction in the marketplace.
In environments where the linkage between Business Function and Functionality Attribute gets a lot of attention, it makes sense to carefully manage the lineage between these two hierarchical levels of scope.
Back to the ‘box of bees’ treatise — there is a distinct and involved thought process associated with decomposing Business Functions to the level of Functionality Attributes. For the purposes of this post, we’ll just call that “Grooming” [for a deeper examination of the Grooming process, see: https://drewharteveld.medium.com/adequately-groomed-db9a8f28bb1b ].
Just because you can clearly understand and articulate a Business Function does not mean that you can easily infer the associated array of Functionality Attributes it encompasses. Yes, even you, brilliant industry veteran. It is a failing of the human mind that we assume that because we understand something at a high level we also have mastery over its details. We do not. Not until we thrash through the analysis involved in the Grooming process. Indeed, artful execution of Grooming is among the most pivotal activities the team will engage in during the program.
No Holy War, Here
Among the main points of differentiation between the Waterfall and Agile product development methodologies is their approach to this scope decomposition process. Waterfall would have us complete all Grooming analysis up-front, in a Detailed Design phase. Agile recommends that we conduct this analysis along the way, constantly folding-in learnings from previous builds/launches. Regardless of which methodology your own enterprise embraces, the underlying need to decompose the scope in this way remains constant.
Effort and Velocity
Two even more granular constructs are Effort and Velocity Units.
Once Functionality Attributes have been defined, the team determines how it will express the estimated level of effort necessary to complete each one, via some type of Effort Units.
There are many ways to conveniently enumerate Effort Units:
- Working Hours: Some teams define effort in terms of working hours, to aid in time-based planning
- Story Points: Agile recommends a less concrete but more experientially robust concept called ‘Story Points’ to represent effort
Assuming everyone understands the interplay of these concepts and agrees to utilize them in good faith, it really doesn’t matter what proxy you choose. Pick one and socialize it for universal understanding, acceptance, and utilization.
Let’s set those scope decomposition concepts aside, briefly, to talk about how much work the team can actually handle. Agile provides a really useful concept for this, called “Velocity”. For the purposes of this dialogue, I’d like to geek that up a notch by referring to it as Velocity Units.
Velocity Units represent the volume of work that the team can successfully complete during a fixed duration. The unit of measure for Velocity Units must match that defined for Effort Units. If you manage with hours, do so on both sides. If you are a Story Points shop, use Story Points for both Effort and Velocity. Again, the specific units matter less than that they are universally understood, accepted, and utilized.
Traditionally, we gain our understanding of the Velocity Units a team can metabolize based on running analysis of past sprints. Twenty years of Agile practice has taught us that as long as the team remains stable, its velocity remains remarkably stable, as well.
This dialogue takes as a given that the enterprise respects and appreciates the concept of Velocity Units, and understands the associated arithmetic as a physical limit to the amount of work the team can metabolize in a scalable fashion during the defined time box. If the enterprise can’t respect that basic architecture, you’ve lost before the starting gun even fires.
The meat of our bottom-up planning process consists of:
- Conducting analysis around each decomposed Functionality Attribute
- Determining the number and make-up of the Effort Units estimated to be necessary to successfully complete it
- Applying our available Velocity Units against those Effort Units to plan the work
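The arithmetic behind those steps can be sketched in a few lines. This is a minimal illustration, not a prescribed tool; the attribute names, point values, and velocity figure are all hypothetical:

```python
# Effort Units estimated per decomposed Functionality Attribute
# (here expressed as Story Points; hours would work the same way)
effort_units = {
    "auto-update shipping estimate": 5,
    "cart item links to product page": 3,
    "approval routing by deal value": 8,
}

# Velocity Units: points the team has historically completed per sprint
velocity_units_per_sprint = 10

total_effort = sum(effort_units.values())

# Ceiling division: a partial sprint of remaining work still costs a sprint
sprints_needed = -(-total_effort // velocity_units_per_sprint)

print(f"{total_effort} Effort Units at {velocity_units_per_sprint} per sprint "
      f"-> {sprints_needed} sprints")
```

The only requirement, as noted above, is that Effort and Velocity share the same unit of measure, so the division is meaningful.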
Whew! That’s a load of exposition. Thanks for slogging through it. I promise all that theory will reveal itself in practice in just a few moments.
Pure-form Agile espouses that the basic tools described above, steered by competent Product Owners within an effectively calibrated enterprise, should be adequate for product development to fly like the wind. My experience is that this works pretty much as advertised where new product development is concerned — particularly when a team is dedicated to remaining flexible and charting the course of their product based on feedback from the market. When all the pieces fall into place for this kind of operating model, it is truly something to behold.
In recent years, however, Agile has grown beyond new product development. Programs in the realms of platform integration, maintenance, and process optimization are all being attacked in an Agile fashion [Hallelujah!] and bring with them different demands.
One challenge that is a constant companion on these programs is the demand to forecast progress across the entire program lifecycle, as a key input to budget planning and constraints for interdependent systems in the tech/process ecosystem. This is a tricky area for Agile, but we square our shoulders and plow through it every day.
Because the Functionality Attributes aren’t yet clearly defined at this juncture, the team and leadership normally engage in this planning at the Business Function level. For the purposes of this example, I’ll assume an Agile approach, with a scheduled set of sprints. Based on a top-down estimate about the amount of time/money available — and the thinnest conception of the effort required to tackle the work — a succession of Business Functions are laid across a timeline from the start of the project to its expected completion.
Why are the Business Functions used to draw this timeline, and not the more detailed Functionality Attributes? In my experience it is because:
- It is the senior business leaders who are empowered to complete this analysis. It’s their money, and they demand direct involvement in planning how it will be spent. If they are smart, they allow us to participate in the process. If they’re brilliant, they release control of it to the team, since that’s where the real expertise lives. Mileage in your own organization may vary.
- In all but the most Waterfall-ish environments (which bring their own set of risks and issues), we simply haven’t completed our analysis down to the Functionality Attribute level as of this point in the process.
During the Sprint Planning Process [or in Detailed Design within a Waterfall framework]:
- Each Business Function is decomposed into Functionality Attributes
- Each Functionality Attribute is estimated with Effort Units
- Velocity Units are mapped against those Effort Units to plan the work
I will carry this concept forward to include one additional dimension: proximity to the core of the Business Function. All Functionality Attributes live on a continuum, with some being closer to the core of the Business Function, itself, and others bringing ancillary value such as convenience, usability, and style.
Proximity to the core of the Business Function might seem like an obvious concept, and associated adjudication straightforward. “Of course, we do the most important stuff first.” Nothing could be further from the truth. These conversations get political in a heartbeat. Different Functionality Attributes serve different stakeholders, leaders may have their own favorite features, and humans become quickly attached to their own bright ideas.
Despite the difficulty, however, this adjudication is crucial. In the real world, we never have enough Velocity Units to fulfill all the demand. We absolutely must make choices, and in order to be successful those choices must be driven by proximity to the core of the Business Function.
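When Velocity Units run short of demand, the choice of what to keep can be sketched as a greedy fill ordered by proximity to the core. The attribute names, effort values, and proximity scores below are hypothetical illustrations, and real adjudication is a conversation rather than a sort key, but the underlying selection logic looks like this:

```python
# (name, effort_units, proximity_to_core: 1 = core, 3 = ancillary)
attributes = [
    ("standard checkout flow",   8, 1),
    ("zip-based shipping quote", 5, 2),
    ("animated cart icon",       3, 3),
]

available_velocity = 10  # Velocity Units left for this Business Function

planned, ejected = [], []
remaining = available_velocity

# Fill the sprint core-first; whatever doesn't fit becomes a candidate
# for ejection from the build
for name, effort, proximity in sorted(attributes, key=lambda a: a[2]):
    if effort <= remaining:
        planned.append(name)
        remaining -= effort
    else:
        ejected.append(name)

print("planned:", planned)
print("ejected:", ejected)
```

The point of the sketch is the ordering: convenience and style features drop off first, so the core of the Business Function survives the cut.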
Countdown to Armageddon
So there you are, engaged in sprint planning or execution, and you find that you have run short of Velocity Units before all Effort Units have been planned or addressed…
Let’s review the current state of the chessboard:
- Leadership has set its own expectations about total duration of the build, and spread Business Functions across that duration
- The team has decomposed Business Functions to Functionality Attributes and estimated Effort Units for each
- Velocity has been established via ground truth, and is represented by Velocity Units
- The current sprint is the last one scheduled to address the Business Function in question
- …and the team has run short of Velocity Units, in planning or during execution, to completely address all identified Functionality Attributes
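The state of the chessboard reduces to one piece of arithmetic. The numbers below are hypothetical, but the subtraction is the whole story:

```python
# Effort Units still unaddressed for this Business Function
remaining_effort_units = 14

# Velocity Units left in the final sprint scheduled against it
remaining_velocity_units = 8

overhang = remaining_effort_units - remaining_velocity_units

if overhang > 0:
    # The overhang must be resolved explicitly: add sprints, de-scope a
    # downstream Business Function, or eject Functionality Attributes.
    print(f"Overhang of {overhang} Effort Units must be adjudicated")
```

A positive overhang cannot be wished away; it has to land on one of the three options that follow.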
Whether you want to admit it or not, the fate of your entire program hangs in the balance of the decision the team makes at this juncture.
Assuming the team size/structure cannot be adjusted, there are only three options available:
OPTION 1: Adjust the schedule to add additional sprints adequate to address the overhanging Functionality Attribute[s]. If this is the choice, the program baseline should be updated immediately to reflect it, and the associated Functionality Attribute should be noted so we can set expectations for likely scope of the newly added sprint.
If you’ve already scheduled a buffer to meet this sort of contingency, somebody owes you an ice cream cone.
OPTION 2: One or more Business Functions scheduled to be addressed downstream should be de-scoped from the program to make space for this overhang to be addressed. Remember that Business Functions are high-level, so we’re talking about de-scoping some complete block of functionality.
The overhang, and its parent Business Function, can then be safely pushed into a subsequent sprint on the calendar.
OPTION 3: Eject Functionality Attributes from the build during this phase. To be super clear: you still get all your boxes by the end of the project — but this one box will include fewer wrenches than you had originally envisioned.
I’m not talking about, “we’ll do it later,” or even, “We’ll do it later if we have time.” I’m talking about, “We won’t do this until the entire build is complete, and delivered to our users.” After that, the enterprise may choose to reset the clock and build a whole additional layer atop what has already been delivered.
This kind of excruciating give-and-take is at the very root of the iterative product development philosophy.
Hopefully, your Agile kung-fu is strong enough that this overhang is discovered during planning instead of execution. If so, the team can make some smart, sober choices — based on ‘proximity to the core’ — about which Functionality Attributes should be embraced and which ones should be ejected. Regardless, the only way to stay on-timeline is to eject some scope. Because: physics.
Listen, we can go wide or we can go far, but we can’t do both. At least not at the same time. When the priority is to keep the program marching forward toward a defined finish date, the correct answer is ALWAYS to skinny the scope in the service of forward motion. Team members and stakeholders alike will be disappointed when their favorite attributes fall off the list. But if Options 1 and 2, above, are not available, Option 3 is all we’ve got to work with. Gather the team, close the door, make the call, and keep your program moving forward.
* * *
It is so easy to let this moment sail by. Most of the team and leadership won’t even have the sensitivity or expertise to recognize its importance. Development teams shoulder this burden themselves every day, because of ignorance, or bravado, or a lack of leadership support, with statements like, “We’ll just work harder” or, “We’ll make up the time downstream”.
But there is no downstream. The entire stream has been planned at the Business Function level, and every cycle within every sprint has been allocated. So where are you thinking those downstream cycles are going to come from? They’re coming from nowhere. You’re putting your team in a pressure cooker, locking the lid, and cranking-up the fire.
Let’s not do that anymore. Let’s have the adult conversation with leadership, based on the underlying physics of the model laid out above. Arithmetic doesn’t lie. We avert or embrace the quiet disaster right here, together, by staring it squarely in the eye and making some binding decisions.
Many years ago, I accepted a job with a Financial Services company based primarily on a conversation with the CIO of the business unit for which I would be working. It all came down to one statement that he made during the meeting: “Drew, we all know that a majority of technology projects fail. To the outside observer, these appear to fail at the end. But we know the truth. The truth is that they failed in the beginning; we just weren’t smart enough to recognize and mitigate until it was too late.” With that statement, I knew he got it. And his statement still holds true, today.
This post is a novella, with more than its share of exposition and swooping arrow diagrams, but the underlying concept is pretty simple. In our hearts, those of us who are living in the crucible of the program know when our progress hits a barrier of available bandwidth. In that moment, we have the option to confront it directly, in terms that are clearly stated and transparent all the way up the chain of command, or choose to let the problem slide quietly by to accumulate amid the froth of our operational debt. Let’s do our best to choose the former, keeping our programs healthy today and into the future.
Originally published at https://www.linkedin.com.