Systems Redesign + Risk Assessment
How to Evaluate a Systems Redesign Before You Build It
Odds. Not opinions. Here is how to actually assess whether a redesign is worth pursuing.
AI has created a flood of internal redesign ideas. Every team has a new vision for how a process could work, what a system could do, or how work should be restructured. Most of those ideas will not pan out. The question is not which ones feel promising. The question is which ones have the best odds, and how do you actually calculate that before committing time and money.
The answer is not color-coded traffic lights or high/medium/low ratings. Those labels hide more than they reveal. What works is breaking the risk into three roughly independent sets of questions, putting real probability estimates behind each one, and multiplying them together to get an overall number you can reason about and compare.
Three Questions, Not One
Most risk conversations collapse everything into a single "is this risky?" judgment. That is a mistake. A redesign can be technically sound and organizationally impossible. It can have full internal alignment and still fail to persuade the people it is built to serve. The risks are distinct and need to be assessed separately.
Will it actually work?
Will the organization adopt it?
Will stakeholders embrace it?
Risk 01
Technology Risk
Is the solution technically feasible in your specific environment, with your data, your constraints, and your timeline? Not in theory. In practice. Has this approach been proven in comparable contexts? What are the most likely failure points and how recoverable are they?
Risk 02
Organizational Risk
This is where technically sound redesigns quietly die. Does this align with where the organization is actually going? Is there genuine executive commitment, or just initial enthusiasm? Does it require behaviors to change that are deeply embedded in how work gets done? Is there realistic capacity to execute and sustain it?
Risk 03
Market Risk
Will the people this redesign is built to serve actually value it? Is the advantage obvious or does it require a sophisticated argument to explain? Does it solve a problem they feel urgently, or one the team has identified on their behalf? Can you show the benefit before full implementation, or does the case rest entirely on projected future value?
Turn the Estimates Into a Number
Multiply the three probability estimates together, using the team's average for each risk dimension: technology, organizational, and market. What you get is an overall probability of success that accounts for all three risks having to go right simultaneously. This is not a perfect model. It is a structured way of forcing the question that never gets asked: if all three of these have to work out, what are the actual odds?
A 29% overall probability is not a reason to kill the idea. It is a reality check on where the idea stands today. From there, teams can build a more informed plan to go learn more, typically tackling the biggest risk first. Or they can prioritize a different idea with better odds and revisit this one later.
It is also worth noting that this math assumes the three risks are roughly independent. In reality, they can interact. I find that keeping the model simple helps with adoption, and while the result is more directional than precise, it is very useful.
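The multiplication can be sketched in a few lines. This is a minimal illustration, not a tool: the 0.80 / 0.60 / 0.60 figures are hypothetical inputs chosen to show how three individually reasonable odds compound to roughly the 29% mentioned above.

```python
def overall_probability(technology: float, organizational: float, market: float) -> float:
    """Multiply the three risk estimates, treating them as roughly independent."""
    return technology * organizational * market

# Hypothetical team averages for each risk dimension.
p = overall_probability(technology=0.80, organizational=0.60, market=0.60)
print(f"Overall odds of success: {p:.0%}")  # 0.80 * 0.60 * 0.60 = 0.288, i.e. about 29%
```

Notice that no single estimate is alarming on its own; it is the compounding that produces the sobering number.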
Use a Team. Do Not Estimate Alone.
Any single person's estimate is shaped by their role, their optimism bias, and what they can see from where they sit. The champion of an idea sees higher odds than the person whose workflow it disrupts. Neither is wrong. They are each seeing part of the picture. The process below surfaces all of it.
Everyone estimates independently
Each person assigns a probability to each risk dimension and writes their reasoning. No one sees anyone else's estimates yet. This prevents anchoring to the first confident voice in the room.
Share all estimates and rationales
Everyone sees the full range of estimates and reasoning. A wide spread signals genuine uncertainty or information asymmetry. A tight range signals confidence. Both are useful to know before committing.
Revise and repeat
Each person can adjust their estimate in light of what they just learned. Within three rounds, the standard deviation narrows and consensus forms. The team also gets faster: by the second session, they often reach consensus in one or two rounds because they have learned to surface their reasoning more efficiently.
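The estimation rounds above can be checked with simple descriptive statistics. A brief sketch, assuming estimates are collected as plain numbers; the role names and figures here are hypothetical, purely for illustration.

```python
from statistics import mean, stdev

# Round-one independent estimates for one risk dimension (e.g. market risk).
round_one = {"champion": 0.80, "skeptic": 0.40, "operator": 0.55, "analyst": 0.60}
print(f"Average: {mean(round_one.values()):.2f}, spread: {stdev(round_one.values()):.2f}")

# A wide spread flags genuine uncertainty or information asymmetry:
# share rationales, then have everyone re-estimate.
round_two = {"champion": 0.70, "skeptic": 0.50, "operator": 0.58, "analyst": 0.60}
print(f"Spread after discussion: {stdev(round_two.values()):.2f}")
```

The point of tracking the spread is that it is the signal, not just the average: a narrowing standard deviation across rounds is what tells you consensus is forming rather than being imposed.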
Do Not Abuse the Process
Evaluating every idea sounds good until you are reading your hundredth variation on the same process tweak. Stakeholders have a finite capacity for serious evaluation. Spend it wisely.
One Last Thing
This framework works equally well for new product innovations and process improvements, not just systems redesigns. Technology, org, and market are our names for the three risk dimensions. Your organization may have better labels. The names matter less than the discipline of separating the questions and estimating each one honestly.
The goal is not to produce a verdict. It is to understand where the risk actually lives so you can make a smarter decision about whether and how to pursue the idea, and so implementation teams know exactly where to focus their energy before things go wrong rather than after.
High, medium, low tells you almost nothing. A number tells you what to do next.
That is the whole point.
Eureka! Ranch helps organizations build the skills, systems, and culture needed to turn bold ideas into real results.