Systems Redesign + Risk Assessment

How to Evaluate a Systems Redesign Before You Build It

Odds. Not opinions. Here is how to actually assess whether a redesign is worth pursuing.

AI has created a flood of internal redesign ideas. Every team has a new vision for how a process could work, what a system could do, or how work should be restructured. Most of those ideas will not pan out. The question is not which ones feel promising. The question is which ones have the best odds, and how do you actually calculate that before committing time and money.

The answer is not color-coded traffic lights or high/medium/low ratings. Those labels hide more than they reveal. What works is breaking the risk into three roughly independent sets of questions, putting real probability estimates behind each one, and multiplying them together to get an overall number you can reason about and compare.

Three Questions, Not One

Most risk conversations collapse everything into a single "is this risky?" judgment. That is a mistake. A redesign can be technically sound and organizationally impossible. It can have full internal alignment and still fail to persuade the people it is built to serve. The risks are distinct and need to be assessed separately.

Technology Risk: Will it actually work?

Org Risk: Will the organization adopt it?

Market Risk: Will stakeholders embrace it?

Risk 01

Technology Risk

Is the solution technically feasible in your specific environment, with your data, your constraints, and your timeline? Not in theory. In practice. Has this approach been proven in comparable contexts? What are the most likely failure points and how recoverable are they?

Example: A team of five evaluates an AI document processing system. Independent estimates: 85%, 70%, 75%, 80%, 65%. Average: 75%. The spread matters — the two lower estimates come from people with direct implementation experience who are pricing in integration complexity the others have not fully considered. That is useful information, not noise.
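The averaging above is trivial, but the spread deserves equal attention. A minimal sketch of both, using Python's standard library and the technology-risk estimates from the example (the variable names are just for illustration):

```python
# Average a team's independent probability estimates and measure the
# spread. A wide spread flags genuine disagreement worth discussing.
from statistics import mean, pstdev

tech_estimates = [0.85, 0.70, 0.75, 0.80, 0.65]

avg = mean(tech_estimates)       # 0.75
spread = pstdev(tech_estimates)  # population standard deviation

print(f"Average: {avg:.0%}, spread: {spread:.2f}")
```

Printing the spread alongside the average keeps the disagreement visible instead of letting the mean hide it.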

Risk 02

Organizational Risk

This is where technically sound redesigns quietly die. Does this align with where the organization is actually going? Is there genuine executive commitment, or just initial enthusiasm? Does it require behaviors to change that are deeply embedded in how work gets done? Is there realistic capacity to execute and sustain it?

Example: Same team, same idea. Org adoption estimates: 60%, 50%, 70%, 55%, 45%. Average: 56%. Lower and more spread out than the technical estimates. The 70% is the internal champion. The 45% manages the team whose workflow changes most. Both are right about their constituencies. This spread tells you exactly where the change management work needs to happen.

Risk 03

Market Risk

Will the people this redesign is built to serve actually value it? Is the advantage obvious or does it require a sophisticated argument to explain? Does it solve a problem they feel urgently, or one the team has identified on their behalf? Can you show the benefit before full implementation, or does the case rest entirely on projected future value?

Example: Stakeholder embrace estimates: 70%, 80%, 65%, 75%, 60%. Average: 70%. The 80% comes from someone close to the external customers who benefit most. The 60% manages internal teams who experience more disruption than benefit in the near term. Stakeholder embrace is never uniform. The spread shows you where the persuasion work lives.

Turn the Estimates Into a Number

Multiply the three averages together. What you get is an overall probability of success that accounts for all three risks having to go right simultaneously. This is not a perfect model. It is a structured way of forcing the question that never gets asked: if all three of these have to work out, what are the actual odds?

75% Technology × 56% Org × 70% Market = 29% Overall Odds
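The calculation is simple enough to sketch in a few lines, using the averages from the worked example:

```python
# Combine the three risk averages into overall odds of success.
# Assumes the risks are roughly independent, as noted below.
tech, org, market = 0.75, 0.56, 0.70

overall = tech * org * market
print(f"Overall odds: {overall:.0%}")  # 29%
```

Because the three probabilities multiply, the overall number is always lower than the weakest individual risk, which is exactly the point: all three have to go right.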

A 29% overall probability is not a reason to kill the idea. It is a reality check on where the idea stands right now. From here, teams can build more informed action plans to learn more, typically tackling the biggest risks first. Or they can prioritize a different idea with better odds and revisit this one later.

It is also worth noting that this math assumes the three risks are roughly independent. In reality, they can interact. I find that keeping the model simple helps with adoption, and while the result is more directional than precise, it is very useful.

Use a Team. Do Not Estimate Alone.

Any single person's estimate is shaped by their role, their optimism bias, and what they can see from where they sit. The champion of an idea sees higher odds than the person whose workflow it disrupts. Neither is wrong. They are each seeing part of the picture. The process below surfaces all of it.

1

Everyone estimates independently

Each person assigns a probability to each risk dimension and writes their reasoning. No one sees anyone else's estimates yet. This prevents anchoring to the first confident voice in the room.

2

Share all estimates and rationales

Everyone sees the full range of estimates and reasoning. A wide spread signals genuine uncertainty or information asymmetry. A tight range signals confidence. Both are useful to know before committing.

3

Revise and repeat

Each person can adjust their estimate in light of what they just learned. Within three rounds, the standard deviation typically narrows and consensus forms. The team also gets faster: by the second session, they often reach consensus in one or two rounds because they have learned to surface their reasoning more efficiently.
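The convergence across rounds can be tracked numerically. A hypothetical sketch, using the org-risk estimates from the earlier example as round one; the round-two and round-three numbers and the stopping threshold are illustrative assumptions, not data from the article:

```python
# Track spread across estimation rounds and stop once the team's
# estimates have tightened below a chosen threshold.
from statistics import mean, pstdev

rounds = [
    [0.60, 0.50, 0.70, 0.55, 0.45],  # round 1: independent estimates
    [0.58, 0.54, 0.64, 0.56, 0.50],  # round 2: revised (illustrative)
    [0.57, 0.55, 0.60, 0.56, 0.54],  # round 3: near consensus (illustrative)
]

THRESHOLD = 0.03  # stopping threshold for the spread (assumption)

final_round = None
for i, estimates in enumerate(rounds, start=1):
    spread = pstdev(estimates)
    print(f"Round {i}: average {mean(estimates):.0%}, spread {spread:.3f}")
    if spread < THRESHOLD:
        final_round = i
        break
```

The exact threshold matters less than the habit of watching the spread shrink: a spread that refuses to narrow is itself a signal that the team lacks shared information, not just shared opinion.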

Do Not Abuse the Process

Evaluating every idea sounds good until you are reading your hundredth variation on the same process tweak. Stakeholders have a finite capacity for serious evaluation. Spend it wisely.

Organizations using our risk assessment tool in Jump Start Your Brain typically evaluate a focused batch of around a dozen of their most distinct ideas over a few days, then months pass before the next session. Save the full assessment for ideas that are genuinely different, where the answer is not already obvious, and where the stakes of getting it wrong are significant.

One Last Thing

This framework works equally well for new product innovations and process improvements, not just systems redesigns. Technology, org, and market are our names for the three risk dimensions. Your organization may have better labels. The names matter less than the discipline of separating the questions and estimating each one honestly.

The goal is not to produce a verdict. It is to understand where the risk actually lives so you can make a smarter decision about whether and how to pursue the idea, and so implementation teams know exactly where to focus their energy before things go wrong rather than after.

High, medium, low tells you almost nothing. A number tells you what to do next.

That is the whole point.

Eureka! Ranch helps organizations build the skills, systems, and culture needed to turn bold ideas into real results.