Risk Anatomy of an AI Use Case v1

Hey all - I’ve been thinking about how to extend the Scalable Governance approach I’ve been describing through my posts (designing governance of low-code solutions that enables people, keeps data secure, and keeps the organisation safe, based on the size of the risk) so that it also covers AI use cases.

To do this I had to start considering how we might describe the risk associated with a use case that uses one or more AI ‘units’. I started by trying to break an AI use case down into its different parts. This diagram shows those parts in generic terms.

Diagram of the risk anatomy of an AI use case in generic terms

Let’s translate that into the actual elements of an AI use case. I augmented my initial thoughts with the ‘Action Types’ from an awesome piece of work on the different types of AI solutions and how they’re used - The Agentic Scope of Work by Kwame Nyanning.

I’ve broken an AI use case down into these parts and presented them like a mathematical formula to show where I see elements acting as multipliers of risk.

Diagram of the risk anatomy of an AI use case

Let me explain these elements as I’m considering them:

Let’s start in the middle with the AI activity and its outputs:

Spread of non-deterministic outputs - Whereas 1 + 1 = 2 in a deterministic model, the outputs from AI are statistical, so they can produce a range of results. I’ve shown this spread in the picture.
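To make that spread concrete, here’s a minimal toy sketch (not taken from the diagram): it runs a deterministic calculation and an ‘AI-like’ step many times and compares the range of results. The normal distribution and the spread value are purely illustrative assumptions.

```python
import random
import statistics

def deterministic_step(a, b):
    # 1 + 1 is always 2: the same inputs give the same output every time.
    return a + b

def non_deterministic_step(a, b, spread=0.3):
    # Toy stand-in for an AI step: the "ideal" answer plus random variation.
    # The normal distribution and the spread value are illustrative assumptions.
    return a + b + random.gauss(0, spread)

deterministic_results = [deterministic_step(1, 1) for _ in range(1000)]
ai_like_results = [non_deterministic_step(1, 1) for _ in range(1000)]

print("deterministic results:", min(deterministic_results), "to", max(deterministic_results))
print("AI-like results:      ", round(min(ai_like_results), 2), "to", round(max(ai_like_results), 2))
print("AI-like spread (std): ", round(statistics.stdev(ai_like_results), 2))
```

The deterministic step gives exactly the same answer 1,000 times; the AI-like step gives a distribution of answers around it, which is the ‘spread’ in the picture.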

Variation in Potential Outcomes - If we combine the spread of non-deterministic outputs with the business context, that gives us a variation in potential outcomes for our business decision, outcome, or process. This variation could be very narrow or it could be broad.

I see the spread of outputs and the variation in outcomes being controlled by these elements:

Input Prompt Quality - This is our feed into the AI process. It could be a manual prompt that we type, data, or other multi-modal input (e.g. picture, audio, video), and we need to focus on its quality to get our desired output.

Grounding Data / Knowledge - These are the files, documents, and data that our AI is going to base its reasoning and decision making on. We choose what these are, their content, and their quality.

We can then apply different levels of control by having a check on what’s fed into our AI activity, or by checking the outputs before they’re used for the next step:

Gate Keeper - I’ve called it this as it’s a ‘check’ either before the AI process or after it, before onward use. This could be a ‘human in the loop’ making the decision, or the output could be fed to code following clear rules, or to another AI, or there could be zero checks.
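As a rough sketch of those gatekeeper options (no check, rule-based code, human in the loop), here’s some illustrative Python. The function names, the rules, and the way the check is wired in are my own assumptions for illustration, not part of the model itself.

```python
from typing import Callable, Optional

def no_check(output: str) -> bool:
    # Zero checks: everything passes straight through to onward use.
    return True

def rule_based_check(output: str) -> bool:
    # Code following clear rules; the rules here are purely illustrative.
    return "TODO" not in output and len(output) < 2000

def human_in_the_loop(output: str) -> bool:
    # A person reviews the output and approves or rejects it before onward use.
    answer = input(f"Approve this output for the next step?\n{output}\n[y/n]: ")
    return answer.strip().lower() == "y"

def run_with_gatekeeper(ai_step: Callable[[str], str],
                        prompt: str,
                        gatekeeper: Callable[[str], bool]) -> Optional[str]:
    # Run the AI activity, then apply the chosen gatekeeper before the output
    # is allowed to feed a next step; None means it was blocked.
    output = ai_step(prompt)
    return output if gatekeeper(output) else None
```

The same pattern could equally sit in front of the AI step to check the inputs, or use another AI as the checking function.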

The size of risk around our use case is also going to be dependent on how we’ll use those outputs:

Action Type on Outcome - What are we going to do with the thing that comes out of that AI step? Is it just information that’s been summarised? Is it creating content? Is it performing activities? Is it reasoning, making decisions, and acting on those decisions? Is it managing a complex end-to-end process with multiple parts?
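One way to picture that as a scale is an ordered list of action types. The categories below mirror the questions above; the names and the numeric ordering are my own illustrative assumption, not values taken from the Agentic Scope of Work.

```python
from enum import IntEnum

class ActionType(IntEnum):
    # Ordered roughly from lowest to highest impact if the output is wrong.
    SUMMARISE_INFORMATION = 1
    CREATE_CONTENT = 2
    PERFORM_ACTIVITY = 3
    REASON_DECIDE_AND_ACT = 4
    MANAGE_END_TO_END_PROCESS = 5

print(ActionType.REASON_DECIDE_AND_ACT > ActionType.CREATE_CONTENT)  # True
```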

The level of impact is going to be driven by the context around what we’re doing in the business, where, and why:

Business Context - I see this as the business decision that’s being made, the business outcome that’s being driven, or the business process being executed. Its impact could be low or high, so it multiplies the risk of the ‘AI unit’.

And that risk is multiplied exponentially by the number of steps we might put in a chain:

Number of steps in a chain - Is this a single isolated activity? Or will the outcomes of that first step get fed into a second AI step… or a third? Or a fourth? Or an xth?
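Pulling the elements above together, here is a minimal sketch of how the ‘formula’ framing might be expressed as a risk score. The scales, the weights, the gatekeeper dividing the score, and the chain length acting as an exponent are all my assumptions for illustration; this isn’t a reproduction of the diagram.

```python
def use_case_risk_score(output_spread: float,
                        business_context: float,
                        action_type_weight: float,
                        gatekeeper_strength: float,
                        steps_in_chain: int) -> float:
    """Illustrative risk score for a single AI use case.

    Assumptions for this sketch: spread, context, and action type are on a
    1-5 scale; gatekeeper_strength is >= 1 (1 = no check, larger = stronger
    check) and divides the score; the chain length is applied as an exponent.
    """
    single_unit_risk = (output_spread * business_context * action_type_weight) / gatekeeper_strength
    # Each additional step in the chain multiplies the unit's risk again,
    # reflecting the idea that the spread of outcomes widens with every step.
    return single_unit_risk ** steps_in_chain

# A broad-spread, high-impact, autonomous use case with no gatekeeper, chained
# over three steps, scores far higher than a single, well-gated step.
print(use_case_risk_score(3, 4, 4, gatekeeper_strength=1, steps_in_chain=3))  # 110592.0
print(use_case_risk_score(3, 4, 4, gatekeeper_strength=4, steps_in_chain=1))  # 12.0
```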

All of these elements of a use case need to be governed by effective foundations built on Technology, Data, Process, and People.

Diagram showing risk amplification due to chained AI actions if left without Gatekeeping

The reason my hypothesis raises the use case unit to the power of ‘Number of Steps’ is that, if a non-deterministic model produces a spread of results, then with every additional step in that chain of ‘AI units’ the range of potential outcomes would get wider and wider.
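A toy simulation of that hypothesis, assuming (purely for illustration) that each ‘AI unit’ adds its own random variation on top of what the previous step produced, shows the range of final outcomes widening as steps are added. How steeply it compounds in a real chain would depend on how errors feed forward.

```python
import random
import statistics

def chained_outcomes(steps: int, runs: int = 2000, spread_per_step: float = 0.3) -> list:
    # Each run pushes a value through `steps` AI-like units; every unit adds
    # its own random variation on top of the previous step's output.
    results = []
    for _ in range(runs):
        value = 1.0
        for _ in range(steps):
            value += random.gauss(0, spread_per_step)
        results.append(value)
    return results

for steps in (1, 2, 4, 8):
    spread = statistics.stdev(chained_outcomes(steps))
    print(f"{steps} step(s): spread of final outcomes ≈ {spread:.2f}")
```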

If we compare this to evolution from an egg, at one end of the scale the outcome could be a chicken… at the other end we could end up with a Velociraptor! If we’re trying to build an island park where people can visit dinosaurs, that may be fine… but for a petting zoo… not so good! 😁

So we need to place some controls and governance around our use cases. In the same way that I’ve done with the Scalable Governance model for other low-code use cases, I’ve started defining how that model adapts to an AI use case and what controls and standards we might put in place. This will be in a follow-up post to this one (subscribe to my blog or follow me on LinkedIn to make sure you see that one too!)

Diagram showing the connection between quality of inputs and variation in the results

However, at a high level, besides the underlying governance foundations, when thinking of an individual use case: the higher the quality and specificity of our prompt, the better the quality and specificity of the data we’re grounding it on, and the stronger the form of ‘gatekeeper’ we have in the loop, the narrower our spread of results will be, the lower the variation in business outcomes, and the lower the risk in the ongoing use of that data.
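Continuing the toy model from earlier, and assuming (again purely for illustration) that the variation an AI step produces falls as input and grounding quality rise, the narrowing effect looks like this:

```python
import random
import statistics

def outcome_spread(input_quality: float, runs: int = 2000) -> float:
    # Assumption for this sketch: the variation an AI step produces shrinks
    # as the quality of the prompt and grounding data rises (0-1 scale).
    noise = 0.5 * (1.0 - input_quality) + 0.05
    results = [1.0 + random.gauss(0, noise) for _ in range(runs)]
    return statistics.stdev(results)

for quality in (0.2, 0.5, 0.8, 0.95):
    print(f"input/grounding quality {quality:.2f} -> spread of results ≈ {outcome_spread(quality):.2f}")
```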

I’m still trying to piece this together and wrap my head around it so I have no doubt that it will evolve more. To help with this I’d love to hear your feedback on this hypothesis and if there are other criteria you consider when evaluating an AI use case.
