Copilot Governance Model v1
In my last post I shared a way I’ve been using to break down and describe the risk associated with an AI use case. To govern a use case we need to be able to describe it and its risks, and to understand the controls we have available to govern those elements.
Here’s a picture from that post to remind you of the elements we looked at - there’s a description of each in the original post.
To simplify how we define our governance we also need a way to categorise different sizes of risk, so that we can apply proportionate governance processes and standards to each. You can remind yourself about the principles behind the Scalable Governance Model here 😁
In this example we’ve described risk ‘sizes’ of Small, Medium, Large, and Extra-Large, but the model is fully adaptable if you need to add more. In the post about it you’ll see examples of the criteria we might use to describe a low-code use case, align it to this model, and assess risks such as data, financial, operational, and support risks… Let’s have a look at how we might bring AI Risk Anatomy and Scalable Governance together! :)
To govern our AI use cases we take the same approach as for other low-code use cases: we combine the dimensions that describe the risk with example responses that help us align a use case to a risk size.
You’ll see how I’ve used the risk anatomy items as the different dimensions, and against each size (S / M / L / XL) added some example responses that would steer our use case to a specific ‘size’. The actual dimensions and levels would be agreed with the implementing organisation, depending on their risk appetite at that point of their journey (recognising that these aren’t set in stone and will evolve with the organisation’s maturity and needs).
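To make this a little more concrete, here’s a minimal sketch of how such a matrix could be captured in code. The dimension names and example responses below are my own illustrative assumptions, not a definitive list; each organisation would substitute the criteria it has agreed.

```python
# A minimal sketch of the risk matrix: each risk anatomy dimension maps
# a risk size to an example response that would steer a use case to that
# size. Dimension names and responses are illustrative assumptions only.

RISK_SIZES = ["S", "M", "L", "XL"]  # ordered smallest to largest

RISK_MATRIX = {
    "data_sensitivity": {
        "S": "Public or anonymised data only",
        "M": "Internal business data",
        "L": "Confidential or customer data",
        "XL": "Regulated or special-category data",
    },
    "audience": {
        "S": "Single maker, personal productivity",
        "M": "Team or department",
        "L": "Whole organisation",
        "XL": "External customers or partners",
    },
    "autonomy": {
        "S": "Suggests content that a human applies",
        "M": "Drafts output that a human reviews before use",
        "L": "Acts, with human approval at key steps",
        "XL": "Acts autonomously with minimal oversight",
    },
}
```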
We can also use Kwame Nyanning’s description of different types of agent as another criterion for describing risk size.
Using these descriptors we now have a framework we can use to ask questions about our AI use cases and classify the size of risk we associate with them. Remember, it’s just a framework to enable a conversation, and not every use case will fit perfectly and smoothly into it. It gives us a common language to discuss the risks, and even if only 80% of use cases fit, it makes our governance much simpler for everyone. It can evolve as our thinking does, and we should keep the flexibility to be pragmatic 🧐
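As a sketch of how that classification could be expressed in code, continuing the illustrative matrix above: assess each dimension, then take the largest size across the dimensions as the overall risk size. The worst-case rule here is my assumption; an organisation might prefer a weighted or discussion-led approach instead.

```python
RISK_SIZES = ["S", "M", "L", "XL"]  # as in the sketch above

def classify_use_case(assessments: dict) -> str:
    """Given a size per dimension, e.g. {"data_sensitivity": "M", ...},
    return the overall risk size using a simple worst-case rule."""
    return max(assessments.values(), key=RISK_SIZES.index)

# Example: mostly small, but an external audience pulls the whole
# use case up to XL.
print(classify_use_case(
    {"data_sensitivity": "S", "audience": "XL", "autonomy": "M"}
))  # -> XL
```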
OK… We can now describe the risk… So what? What are we going to do to mitigate it? This is where our controls, governance, and standards come into play. We can think about these from a Tech/Platform, Data, Process, and People perspective, and also through the lens of which controls are foundational and always need to be in place, versus which will vary depending on the size of risk for a specific use case.
Once we have our foundational controls in place, we can start aligning the use-case-level risk controls to our scalable governance model to describe the level of implementation we’d expect to see for each control at each risk size.
We can also decide which of the controls we will firmly enforce, and which have more nuance and flexibility around how we might implement them.
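Pulling those last three ideas together, here’s an illustrative sketch of how the control set could be recorded: a category for the Tech/Platform, Data, Process, and People lenses, a flag for foundational controls, expected implementation levels per risk size, and a flag for whether the control is firmly enforced. The control names and levels are assumptions for illustration only, not a recommended set.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    category: str       # "Tech/Platform", "Data", "Process", or "People"
    foundational: bool  # always required, regardless of risk size
    enforced: bool      # firmly enforced vs flexible in implementation
    levels: dict = field(default_factory=dict)  # expected level per risk size

# Illustrative controls only; real names and levels would be agreed
# with the implementing organisation.
CONTROLS = [
    Control("DLP policies", "Tech/Platform", foundational=True, enforced=True),
    Control("Human-in-the-loop review", "Process", foundational=False,
            enforced=True,
            levels={"S": "Optional", "M": "Spot checks",
                    "L": "Required for key outputs",
                    "XL": "Required for all outputs"}),
    Control("Maker training", "People", foundational=False, enforced=False,
            levels={"S": "Self-serve guidance", "M": "Onboarding course",
                    "L": "Certified training",
                    "XL": "Certified training plus refreshers"}),
]
```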
This is very much a v1 of these ideas and how they can be implemented to govern AI use cases, and it will certainly evolve. Hopefully it shows how this type of framework can be employed, in the same way we’ve done for Power Platform and other low-code platforms, to help us communicate the rules and help people know how to do the right thing, as well as implementing risk mitigation through platform, data, process, and people controls.
To get into the detail on some of these topics and how I’ve described them in the past, take a look through my other Blog Posts, and if you’d like to hear me speak about this topic, keep an eye on my events page as I’ll hopefully be confirming more speaking events soon!
I’d love to hear your thoughts and ideas! Join the conversation on LinkedIn and subscribe to my mailing list to hear when new blog posts are published.