A conversation with Murray Cantor, co-founder and CTO of Aptage


This is part of my series on thought leaders in the innovation space. Check out the other articles here.

Murray Cantor is the co-founder and CTO of Aptage, which builds forecasting tools for agile teams. He’s known in the innovation space for his agile management techniques and for exploring the economics of investing in and managing innovative endeavors. What many people don’t know is that during his time at UC Berkeley he hosted a weekly music show called The Monday Blues on KALX. The two pursuits may seem to sit at opposite ends of the spectrum, but on Murray’s path they complement each other in intriguing ways.

Murray Cantor is the co-founder and CTO of Aptage

Blues is a genre that is difficult to define. The lack of common characteristics among the various types of blues is often attributed to the fact that the genre took its shape from the idiosyncrasies of individual performers. That same variability is the reality for many innovation projects. Goals and outcomes differ across teams, organizations, and projects, and are shaped by the contributions and skill sets of individuals across departments. Murray shared with me how he found a way to measure and manage the variability of these unique projects that all fall under the umbrella of innovation.

The Pitfalls of the ‘Software Factory’

It’s not uncommon to attempt to understand a new paradigm through an existing, familiar metaphor. Many of the techniques employed to manage software projects to date treat the practice as what Murray calls “the software factory.” They assume that what works for manufacturing will translate to software. So why don’t they work?

According to Murray, software projects lack a key quality required for techniques like Six Sigma to work — consistency of artifacts. “You need to be building the same things over and over again. And then you can decide from that whether you have a controllable process. That means you really understand all the variables involved in delivering what you’re delivering.”

Murray with Brad Holtz, CEO of Cyon Research, discussing “The Simplicity Behind Complexity: Do Flow and Coupling Explain Everything?”

He offers the example of food manufacturing. In order for the manufacturing process to be controlled, manufacturers need to be aware of any variables that could impact food quality. The Six Sigma technique can determine whether there’s an unknown variable affecting a repeated process. “Software is nothing like that. Every work item, every task is something different than the last one.” Murray argues that even when you get to the most granular level of a user story or requirement, for example, each varies so greatly from the next that traditional control mechanisms are ineffective.


In the face of a perceived high rate of failure for software projects, many in the software world have sought to apply standard civil engineering techniques to correct what they believed was a crisis in software management. “The premise was that the other disciplines were better than software. The more mature disciplines had techniques.”

Novel Software Projects

Knowing there were numerous examples of failed construction and civil engineering projects, Murray was unconvinced. He conducted a small study of his own and reached conclusions that ran contrary to the failure statistics plaguing software projects.

“The more novel the projects got, the more unlikely it was that it was going to complete on time and meet the standard criteria — on time, on budget, original scope. That is almost impossible for a novel project. And even that criteria is probably the wrong criteria anyhow.”

Instead, he found that, if you normalized the data to account for the amount of uncertainty in software projects, “it turns out the software was just about as good as everybody else. And the reason software seemed to fail more is that we were doing more innovative stuff than other kinds of projects.”

Techniques for Software Project Types

Murray finds that identifying the appropriate technique depends on first acknowledging that software projects come in three types.

  1. Change requests or bug fixes
  2. Adding new features to an existing platform
  3. Building a brand new platform

For change requests and bug fixes, techniques like Lean make sense due to the minimal amount of uncertainty. “When someone hands you a change request, it’s usually pretty detailed. And the code that needs to be changed is pretty well understood.” When adding new features, “spiral development techniques where you continue to interact with the stakeholders to be sure you’re building the right thing” make sense. And when the occasional opportunity to build a brand new platform arises, modeling and prototyping techniques can be helpful. “There are a lot of organizations that don’t know how to do that, so that’s where consultants are really helpful.”

Adding new features to an existing platform is where Murray finds people run into trouble with over-planning.

“If the goal is to plan your work and work your plan, you’re going to fail. One of the insights of agile development is that you’ve got to work out granularity to the level at which you understand things. Your inability to do that is consistent with the amount of uncertainty you have.”

A good agile development practice is to first plan the epic, or high-level user story, for which you have the most knowledge. Design sprints can also be a useful tool at this stage to explore the space or run experiments to reduce uncertainty. With uncertainty reduced, work can be broken down more granularly. But even at this stage, Murray cautions against over-planning: “The idea that a story has to be framed in one sprint is not as important as the understanding that you plan at the level of your knowledge. If you can’t plan at the level you need to, you need to get more knowledge in order to do that planning.”


Thoughts on Earned Value Management

Another stumbling block in software management is the practice of earned value management (EVM), the concept that value delivered is proportional to effort expended.

“If I have ten people working on requirements analysis, I don’t get ten times the amount of requirements analyzed. You’re actually only measuring whether you spent the money on the thing. You haven’t really measured the value you’ve gotten.”

Instead, Murray recommends an approach where earned value is assigned to a certain milestone. “So maybe measuring requirements analysis isn’t so much whether they’re complete, but whether they’re stable.”
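To see the tautology in effort-based crediting, here is a small illustrative calculation (the figures are hypothetical, not from the interview) using the standard EVM quantities of planned value (PV), earned value (EV), and actual cost (AC):

```python
# Illustrative EVM arithmetic; all figures are hypothetical.
budget = 100_000          # budget at completion for the work package
planned_value = 40_000    # PV: value scheduled to be earned by today
actual_cost = 50_000      # AC: money actually spent by today

# If earned value is credited in proportion to effort expended,
# EV simply mirrors AC and the metrics degenerate:
earned_value = actual_cost
cpi = earned_value / actual_cost    # cost performance index: always 1.0
spi = earned_value / planned_value  # looks "ahead of schedule" just by spending more
print(f"CPI={cpi:.2f}, SPI={spi:.2f}")  # CPI=1.00, SPI=1.25

# Crediting value only at a verifiable milestone (e.g., "requirements stable")
# breaks the tautology:
milestone_reached = False
earned_value = 0.4 * budget if milestone_reached else 0.0
cpi = earned_value / actual_cost    # 0.0: money spent, no value demonstrated yet
```

Under the milestone version, spending money no longer earns value by definition, which is the distinction Murray is drawing.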

The concept of EVM can also incentivize people to complete the simpler tasks first: “We pull the low hanging fruit because, if the goal is to claim earned value rather than to increase the likeliness of project success, you do different things. If you want to increase the likelihood of project success, what you do is work on reducing the uncertainty.”

The EVM approach makes sense for projects like laying concrete or paving a road where “once those things are done, the building you have is that much more valuable because those things have been done.” But it doesn’t work for innovation projects because it pushes the most risk-laden activities to the end of the project where surprises often result in late delivery.

Ongoing Risk Analysis

Murray also sees the standard approach to risk management, rather than ongoing risk analysis, as a source of false security for software projects. “Risk management is when people sit around and imagine all the bad things that might go wrong, what the probability and impact are, and develop a risk mitigation plan. And then they put that on a shelf.” Alternatively, risk analysis is an ongoing process of leveraging new information to reduce the uncertainty of meeting a goal or measurement for a project.

“The idea of working with uncertainty, and reducing uncertainty by looking at the information that we have gained, is the big picture in all of this.”

Murray looks to a risk analysis perspective for measuring innovation as well. “An innovative effort begins without complete information about the best design, how much effort it will take to deliver, or even what is really required. Since there is missing information, it is impossible to make firm predictions.”


Measuring Uncertainty

For R&D projects, Murray recommends measuring the amount of uncertainty in the measures that matter to the project: time and cost to delivery, predicted benefits received after delivery, return on investment, and so on. He calls on his mathematics expertise to measure that uncertainty using methods from applied probability.

“The goal of having a customer delighted with what they get is going to be a probability distribution. Neither I nor the customer probably knows exactly what they want or could have. We’re going to learn that together. The amount of time it’s going to take to get to the goal of customer satisfaction or time to delivery is a probability distribution.” Rather than making a firm prediction at the beginning of a project, Murray approaches innovation measurement as a range that is updated with greater specificity as more information is gained.
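As a concrete illustration of forecasting with a range that tightens over time, here is a minimal sketch of one possible approach, a bootstrap resample of observed sprint velocities (my own toy model, not Aptage’s product):

```python
import numpy as np

rng = np.random.default_rng(7)

def delivery_forecast(remaining_points, velocities, n=100_000, horizon=50):
    """Monte Carlo forecast of sprints-to-done from observed sprint velocities.

    Velocity uncertainty is modeled by bootstrap-resampling the observed
    history, so the forecast interval narrows as more sprints are observed."""
    samples = rng.choice(velocities, size=(n, horizon), replace=True)
    # Sprints needed: first sprint where cumulative velocity covers the backlog.
    done = np.cumsum(samples, axis=1) >= remaining_points
    sprints = done.argmax(axis=1) + 1
    return np.percentile(sprints, [10, 50, 90])

# Early in the project: three observed sprints, a wide 10th-90th percentile range.
print(delivery_forecast(200, [18, 31, 22]))
# Later: more evidence and less remaining work, so the range tightens.
print(delivery_forecast(120, [18, 31, 22, 25, 24, 26, 23, 25]))
```

The output is a range (10th, 50th, and 90th percentiles of sprints remaining) rather than a single date, and rerunning it each sprint is the updating Murray describes.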

This concept also ties into Murray’s definition of innovation as something new, whether to the organization or to the world. If there’s uncertainty about how long an endeavor will take and the effort required to accomplish it, that lack of information is a good indication that something is new and, thus, innovative.

“In construction if you’re building yet another strip [mall], and you’ve built 100 of them before, you can be pretty tight in your probability distribution. If you’re building the Sydney Opera House, no one’s ever built anything like that before. So it’s no surprise the ability to actually estimate how long it will take isn’t very good.”

Planning for Uncertainty

How do you structure programs around such uncertainty? The crucial first step is embracing, rather than avoiding, the uncertainty. “Too often people treat uncertainty as something to be avoided. This leads to denial and recriminations, or the organization becomes incapable of delivering innovation.”

This is why Murray believes that standard investment models don’t make sense for innovation projects. “There’s what’s called the CAPM model for investing, where they do risk adjustment with the rate of return. Those are well suited for financial instruments where the future values [are] what we buy today. And then the risk is how much that money is really worth given inflation.” The problem is that the model fails when future values are based on new endeavors that are full of uncertainty.
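For reference, CAPM is the capital asset pricing model, which prices risk by adjusting the expected rate of return:

```latex
% CAPM: expected return on asset i, given risk-free rate R_f,
% expected market return E[R_m], and the asset's market sensitivity beta_i.
E[R_i] = R_f + \beta_i \left( E[R_m] - R_f \right)
```

The formula assumes the future cash flows themselves are well characterized and only their present worth needs adjusting; in an innovative endeavor the future values are wide distributions in their own right, which is exactly where Murray says the model breaks down.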

“There is no innovation without uncertainty. A successful [innovation] program requires that all the experts and subject matter experts understand that, to be innovative, they need to embrace and manage uncertainty.”

Overlapping Uncertainties

Murray works with stakeholders and SMEs to elicit and capture uncertainties and treats future values, again, not as firm predictions but distributions. “I assemble stakeholders that include the development team management leads who know the development effort time and cost. I also include the business analysts or product management who provide the expected benefit stream and the IT and customer service leads who can project the after-delivery costs. In each case, we use their models capturing uncertainties in their parameters. Their separate interests, concerns and, especially, their uncertainties can be integrated into an overall likelihood model of the return on investment.”
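Here is a minimal sketch of what that integration can look like (my own illustration, not Aptage’s implementation): each stakeholder group’s estimate is elicited as a worst/likely/best triangular distribution, and Monte Carlo sampling combines them into a single ROI distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def elicit(worst, likely, best, size=N):
    """Turn a stakeholder's worst/likely/best estimate into samples from a
    triangular distribution (one simple way to capture their uncertainty)."""
    lo, hi = min(worst, best), max(worst, best)
    return rng.triangular(lo, likely, hi, size)

# Hypothetical elicitations, one per stakeholder group (all figures in $K):
dev_cost   = elicit(worst=900, likely=600, best=450)    # development leads
benefit    = elicit(worst=700, likely=1400, best=2100)  # product management
after_cost = elicit(worst=300, likely=150, best=100)    # IT / customer service

# Integrate the separate uncertainties into one likelihood model of ROI.
roi = (benefit - dev_cost - after_cost) / (dev_cost + after_cost)

p10, p50, p90 = np.percentile(roi, [10, 50, 90])
print(f"ROI 10th/50th/90th percentile: {p10:.2f} / {p50:.2f} / {p90:.2f}")
print(f"P(ROI < 0) = {(roi < 0).mean():.1%}")  # chance the effort loses money
```

Triangular distributions are just one convenient elicitation choice; any distribution the stakeholders can defend plugs into the same integration.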

As the project continues, stakeholders update their models based on new learnings about completion rates, requirements churn, and any available evidence from pre-sales and reliability testing. “The ongoing frank discussion of the residual risk results in a learning organization that can deliver novel efforts.”


Murray’s work with a laboratory equipment company puts this approach into action. “They know that, no matter what, they’re not going to lose a certain fraction of their customers. So they know the worst case.” There is some movement in the marketplace, and the company can also estimate how much of the market it can capture. “So now we have some models of worst case [and] likely case of the future benefits and costs. And the more innovative it is, each of those has wider distributions.” So as innovation increases, so does the variability in outcomes between the worst and likely cases of future costs and benefits.

You may have guessed by now that Murray’s innovation silver bullet is reducing, or at least quantifying, uncertainty. He subscribes to Lord Kelvin’s maxim that “to measure is to know,” encouraging organizations to measure the uncertainty involved in setting their organizational goals.

Mathematics & Innovation

His passion for applying mathematics to predict innovation outcomes began at UC Berkeley, where he realized that, for him, a Ph.D. in Mathematics was not about the academic math he learned but rather about the empowerment he gained to observe the world and come up with new ways to understand it through mathematics.


Murray credits his adviser at Berkeley, the youngest faculty member to receive tenure in the Math Department at the time, for inspiring his approach to making mathematical discussions accessible to a wider audience. “When I try to explain things to people, I want them to believe that on a clear day they would have thought of it themselves. Not be impressed with me, but just be impressed that the ideas themselves are self-evident if you just step back and think about them.”

Eventually, he moved on to work at IBM on the graphics subsystem of the RISC System/6000. During his time with IBM, he developed many of his techniques for identifying and working off uncertainty, starting with the more difficult tasks first, and determining project risks through prototyping and simulations. And it was at IBM, under the Rational brand, that he developed his ideas around innovation outcomes as distributions rather than firm predictions.

Of Murray’s many success stories, the one he’s most proud of is his work on a satellite ground station. Mission planning and prioritization for satellite positioning is an involved process that often falls victim to over-planning. “They would plan something five years out, they would have the integration day, and whatever the pieces were that were supposed to come together never did.”

Murray observed that the various prime contractors and subcontractors weren’t working together to identify and resolve the uncertainty of delivery. He met with stakeholders in DC and pitched the idea of incremental integrations as a means of working off uncertainty. “My first integration was just to see if I could do a sort of ‘Hello World’ test with a stubbed-out version of the system architecture, because then I knew the architecture would fit together in the end.”
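That first integration might look something like the sketch below (the component names are invented for illustration): every subsystem is a stub, but one message travels the full path, demonstrating that the interfaces compose before any real implementation exists.

```python
# Hypothetical stubbed-out "Hello World" integration; component names are
# invented for illustration. The test exercises wiring, not behavior.

class MissionPlannerStub:
    def plan(self, request: str) -> dict:
        return {"mission": request, "schedule": "stub"}

class UplinkStub:
    def transmit(self, plan: dict) -> str:
        return f"uplinked:{plan['mission']}"

class GroundStationStub:
    def __init__(self, planner, uplink):
        self.planner, self.uplink = planner, uplink

    def handle(self, request: str) -> str:
        # The only claim under test: the pieces fit together end to end.
        return self.uplink.transmit(self.planner.plan(request))

def test_hello_world_integration():
    station = GroundStationStub(MissionPlannerStub(), UplinkStub())
    assert station.handle("hello-world") == "uplinked:hello-world"

test_hello_world_integration()
print("hello-world integration passed: the architecture fits together")
```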

For Murray, this project served as a proof of concept for the effectiveness of his techniques for identifying uncertainty and resolving risks early in a project.


If you want to read my other articles about innovation experts and practitioners, please check them all out here.