Through the Lens
There is a difference between using AI and understanding what it takes to deploy it responsibly in an enterprise.
That distinction is getting lost. Quickly.
Chatting with an AI assistant on your phone, generating a report summary in Copilot, asking a chatbot to draft an email – these are genuinely useful things. They are also not the same as designing a governed, secure, enterprise-grade AI system. The people who can do the former are now, in large numbers, advocating for – and in some cases leading – the latter. Without the guardrails. Without the architecture. With full confidence that the technology is accessible enough that the hard questions can be figured out later.
They cannot be. And if you have spent any meaningful time delivering enterprise technology, you have seen exactly where that assumption ends up.
Across enterprise technology delivery, the same pattern has repeated with every major platform wave: a powerful capability arrives, governance is treated as a follow-on activity, and the cost of that decision arrives at the worst possible moment. AI is not a new story. It is the same story – with a larger blast radius.
Here are a handful of patterns from many I have encountered that illustrate exactly how this plays out.
SMS integrations – when automation runs without a governor
Back in 2015, a delivery team was building a fault and case management system on Microsoft Dynamics CRM 2016 for a major UK energy network operator. Part of the solution included a console application that triggered SMS notifications to customers via an SMS gateway – firing alerts based on power cut events logged in the system.
The logic made sense in isolation. In practice, the application began firing multiple messages for the same event. Loop conditions. Overlapping triggers. No coordinated opt-out enforcement. Customers received repeated messages they had not asked for. The incident escalated internally, customers had to be contacted directly to explain what had happened, and the team spent time unpicking automations that should never have fired in the first place.
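For illustration, here is a minimal sketch – in Python, with entirely hypothetical names and in-memory stores – of the kind of governor that was missing: deduplicate per event and recipient, and enforce opt-out centrally, before anything reaches the gateway.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory stores; a real system would back these with durable storage.
sent_log: dict[tuple[str, str], datetime] = {}   # (event_id, phone) -> last send time
opted_out: set[str] = set()                      # numbers that have withdrawn consent

DEDUP_WINDOW = timedelta(hours=24)

def should_send_sms(event_id: str, phone: str) -> bool:
    """Governor check that runs before any message reaches the SMS gateway."""
    if phone in opted_out:
        return False  # opt-out enforced centrally, not per automation
    last_sent = sent_log.get((event_id, phone))
    now = datetime.now(timezone.utc)
    if last_sent and now - last_sent < DEDUP_WINDOW:
        return False  # same event, same recipient, inside the window: suppress
    sent_log[(event_id, phone)] = now
    return True
```

A dozen lines of suppression logic, sitting in one place that every automation must pass through. That is the whole idea of a governor.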
Under UK GDPR, every one of those messages requires a documented lawful basis, explicit consent, and a functioning opt-out mechanism. The right to withdraw consent must be as easy as giving it. Those regulatory obligations did not arrive to make life harder. They arrived because enterprises, left to govern their own communication automation, consistently did not.
The technology worked. The governance around it did not.
CRM data hoarding – collecting everything, planning nothing
When CRM platforms became powerful and accessible, the default instinct was to capture as much as possible. Names, contact details, purchase history, interaction logs, inferred preferences – all of it, everywhere, indefinitely.
Nobody planned for what happened when a customer asked for it all to be removed.
GDPR Article 17 – the right to erasure – was the regulatory forcing function that made organisations confront data they had collected without a retention strategy, stored without classification, and distributed to third-party processors without adequate contractual controls. A valid erasure request under UK GDPR requires knowing every place that data lives: CRM, marketing tools, integration logs, backups, archived exports. The work of answering that question correctly is significant when it has never been thought through.
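Here is a sketch of the kind of inventory that makes an erasure request answerable. The systems and retention figures are hypothetical; the shape is the point – you cannot purge what you cannot enumerate.

```python
# Hypothetical inventory mapping where a data subject's records live.
# Without a map like this, an Article 17 request becomes archaeology.
DATA_MAP = {
    "crm":         {"store": "Dataverse contact records",  "retention_days": 730},
    "marketing":   {"store": "Email platform lists",       "retention_days": 365},
    "integration": {"store": "Middleware message logs",    "retention_days": 90},
    "backups":     {"store": "Nightly database snapshots", "retention_days": 35},
}

def erasure_plan(subject_id: str) -> list[str]:
    """Return every system an erasure request for this subject must touch."""
    return [
        f"{system}: purge '{subject_id}' from {meta['store']} "
        f"(held up to {meta['retention_days']} days)"
        for system, meta in DATA_MAP.items()
    ]

for step in erasure_plan("contact-12345"):
    print(step)
```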
This is governance debt – every week you operate without a data classification strategy, you are borrowing against a future cost that compounds. AI agents today are consuming vast quantities of internal and customer data to build knowledge bases, ground responses, and take actions. The same reckoning is coming. The question is whether organisations design for it now or scramble for it later.
Power Pages OData – the danger of assuming defaults are safe
In 2021, security researchers at UpGuard identified a widespread misconfiguration pattern across Microsoft Power Apps portals – now known as Power Pages. The issue was not a flaw in the product itself. It was an assumption made by the people configuring it.
Power Apps portals included the ability to enable OData feeds for retrieving list data. If table permissions were not explicitly toggled on, anonymous internet users could query that data freely. The default state – before Microsoft updated it – required administrators to actively enable table permissions after turning on the OData feed. Many did not, because the assumption was that enterprise software defaults to secure.
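To see how trivially checkable this was, here is a sketch of the anonymous probe an administrator – or anyone else – could run against a portal's OData feed. The host and entity set name are hypothetical.

```python
import requests

# Hypothetical portal host; "contacts" stands in for whatever entity set a list exposed.
FEED_URL = "https://contoso.powerappsportals.com/_odata/contacts"

# No Authorization header - this is exactly what an anonymous visitor sends.
resp = requests.get(FEED_URL, timeout=10)

returns_json = resp.headers.get("Content-Type", "").startswith("application/json")
if resp.status_code == 200 and returns_json and resp.json().get("value"):
    print("Feed returns rows anonymously - table permissions are not enforced.")
else:
    print(f"Feed locked down or not present (HTTP {resp.status_code}).")
```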
The result: 38 million records exposed across 47 organisations – including COVID-19 contact tracing data, vaccination appointments, and social security numbers. Microsoft responded by changing the default so that table permissions are now enabled from the start, and by releasing tooling to help administrators audit their existing configurations.
The lesson here is not about the product. Microsoft identified the issue, fixed the default, and provided remediation tooling. The lesson is about the assumption – that because something is enterprise software from a major vendor, it must be configured safely out of the box. That assumption is now being carried into AI agent deployments.
Canvas Apps – a polished front end over an unsecured foundation
Canvas Apps genuinely transformed enterprise UX on the Power Platform. They made it possible to build clean, intuitive interfaces quickly, without writing traditional application code. The problem was what some teams assumed about what sat underneath.
Dataverse security operates at multiple levels: organisation, business unit, team, and user. The most common failure is security roles configured only for primary tables, with lookup and related tables left fully open. The second failure is access control enforced at the Canvas App layer – conditional logic built into the app itself – rather than at the Dataverse layer.
A direct API call bypasses the app entirely and retrieves everything the service account can reach. A polished interface does not imply a secured back end. Dataverse security roles are the enforcement mechanism. The Canvas App is cosmetic. When that distinction is not understood – and it frequently is not – you have an attractive front door on an unlocked building.
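Here is a sketch of that bypass. The environment URL is hypothetical and token acquisition is elided, but the point stands: this request never touches the Canvas App, so only Dataverse security roles decide what comes back.

```python
import requests

# Hypothetical environment URL; the token is acquired via the normal OAuth flow
# for the same account the Canvas App runs under - acquisition elided here.
ORG_URL = "https://contoso.crm.dynamics.com"
ACCESS_TOKEN = "<token for the app's account>"

# Whatever conditional logic the app enforces is irrelevant to this call.
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/contacts?$select=fullname,emailaddress1",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
        "Accept": "application/json",
    },
    timeout=10,
)
resp.raise_for_status()
print(f"{len(resp.json().get('value', []))} contact rows returned")
```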
Middleware and APIM – integration without a gateway
Azure API Management exists to sit in front of backend services and enforce authentication, IP filtering, rate limiting, and audit logging at the gateway layer. In a well-architected integration, nothing reaches a backend service directly from the public internet.
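APIM expresses these controls as gateway policies. Purely to make the layering concrete, here is a Python sketch of the same checks – hypothetical IPs, keys, and limits – showing what "nothing reaches the backend directly" means in practice.

```python
import time
from collections import defaultdict

ALLOWED_IPS = {"203.0.113.10"}   # hypothetical partner address
RATE_LIMIT = 100                 # calls per caller per minute

call_history: dict[str, list[float]] = defaultdict(list)

def audit(outcome: str, caller_ip: str) -> None:
    # Stand-in for shipping a structured event to a central log store.
    print(f"{time.time():.0f} {outcome} {caller_ip}")

def is_valid_key(api_key: str) -> bool:
    return api_key == "expected-key"  # stand-in for real credential validation

def gateway_check(caller_ip: str, api_key: str) -> bool:
    """Every call passes these checks so the backend never has to trust the internet."""
    if caller_ip not in ALLOWED_IPS:
        audit("blocked:ip", caller_ip)
        return False
    if not is_valid_key(api_key):
        audit("blocked:auth", caller_ip)
        return False
    recent = [t for t in call_history[caller_ip] if t > time.time() - 60]
    if len(recent) >= RATE_LIMIT:
        audit("blocked:rate", caller_ip)
        return False
    recent.append(time.time())
    call_history[caller_ip] = recent
    audit("allowed", caller_ip)
    return True
```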
In practice, I have seen custom integrations in Dynamics 365 and Power Platform programmes bypass APIM entirely – built quickly to meet a deadline, running over publicly accessible IP addresses, with no centralised logging and no monitoring in place. The integration works. The project closes. The endpoint stays live. Nobody thinks to decommission it, because nobody documented it properly in the first place.
Exposed endpoints do not announce themselves. They are found – sometimes months or years later – by someone who knows where to look. By that point, the team that built it has moved on.
AI – the same pattern, with a larger blast radius
Every one of those failures shared the same root cause. A powerful capability was deployed at speed. Governance was treated as something that would fall into place. It did not.
AI agents in 2026 repeat this pattern – but with greater consequence:
- Agents inherit the full permission scope of the user or service account they operate under, unless explicitly and deliberately scoped down (Microsoft Learn) – a scoping sketch follows this list
- 78% of AI users at work are bringing their own tools without employer approval, according to Microsoft’s 2024 Work Trend Index – shadow AI is not a future risk, it is a present condition
- Prompt injection attacks can redirect agents to exfiltrate data or bypass controls via inputs the agent was never designed to handle
- AI-generated code introduces security flaws at scale – missing input validation, broken access controls, insecure object references – because the model does not understand the security context of the system it is writing for
- Gartner predicts 40% of agentic AI projects will be cancelled by 2027, largely due to governance gaps and poor strategic planning
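On that first point, here is a sketch of what deliberate scoping can look like – a deny-by-default allowlist of tools and operations per agent, with every name hypothetical.

```python
# Hypothetical allowlist: the agent is granted named tools with named scopes,
# rather than inheriting the full permission set of its service account.
AGENT_TOOL_SCOPES = {
    "support-triage-agent": {
        "read_case": {"tables": ["incident"],   "operations": ["read"]},
        "add_note":  {"tables": ["annotation"], "operations": ["create"]},
        # deliberately absent: delete, share, export, anything administrative
    },
}

def authorise_tool_call(agent: str, tool: str, table: str, operation: str) -> bool:
    """Deny by default; allow only what the agent was explicitly granted."""
    scope = AGENT_TOOL_SCOPES.get(agent, {}).get(tool)
    return bool(scope and table in scope["tables"] and operation in scope["operations"])

assert authorise_tool_call("support-triage-agent", "read_case", "incident", "read")
assert not authorise_tool_call("support-triage-agent", "read_case", "incident", "delete")
```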
There is also a speed asymmetry that makes this structurally harder than previous waves. AI agents can act in milliseconds. Human governance review operates in days or weeks. The gap between how fast an agent can cause a problem and how fast anyone can detect and respond is architectural – not procedural. It cannot be closed by reviewing logs once a month.
Here is a practical question worth asking right now: if someone asked you to produce a complete audit trail of every decision your AI agent made last week, could you?
If the answer is not an immediate yes, the governance foundation is not in place.
If you are the person making AI deployment decisions rather than building the agents – that question is yours to answer, not your architect’s.
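The mechanics are not complicated. A sketch of the minimum: one structured, append-only event per agent action, written as the action happens – file-based here purely for illustration, with hypothetical names throughout.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "agent_audit.jsonl"  # hypothetical append-only store

def record_decision(agent: str, action: str, inputs: dict, outcome: str) -> None:
    """One structured event per agent action, written as the action happens."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "inputs": inputs,
        "outcome": outcome,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

record_decision(
    agent="support-triage-agent",
    action="close_case",
    inputs={"case_id": "CAS-01234", "reason": "duplicate"},
    outcome="closed",
)
```

If nothing like this exists in your agent's execution path, last week's decisions are already unrecoverable.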
Microsoft’s own Agentic AI maturity model defines Level 100 – the entry level – as: no AI-specific security or governance processes, agents operating without formal oversight, risk assessment, or compliance checks. Most enterprises deploying AI agents today are at this level.
What responsible governance actually looks like
Level 300 on Microsoft's maturity model – the minimum for responsible scaling – requires:
- a central agent registry
- agents classified by purpose and criticality
- environment separation between development, test, and production
- standardised approval checkpoints before any agent goes live
- audit logging
- documented data handling policies for agent knowledge bases
None of that is bureaucracy. Every item on that list maps directly to a failure mode described above. This is what closing governance debt looks like – before it becomes a crisis.
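As a flavour of the first two items on that list, here is a sketch of what a registry entry might capture. The shape and field names are entirely hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRegistryEntry:
    """One row in a central agent registry - every field here is a hypothetical sketch."""
    agent_id: str
    purpose: str
    criticality: str                  # e.g. "low" | "medium" | "business-critical"
    environment: str                  # "dev" | "test" | "prod"
    owner: str                        # a named human, not a shared mailbox
    approved_by: str | None = None    # empty means the agent does not go live
    knowledge_sources: list[str] = field(default_factory=list)

registry = [
    AgentRegistryEntry(
        agent_id="support-triage-agent",
        purpose="Classify and route inbound support cases",
        criticality="medium",
        environment="test",
        owner="jane.doe@contoso.com",
        knowledge_sources=["case-history-kb"],
    ),
]

# The approval checkpoint, expressed as a rule rather than a meeting.
unapproved = [a.agent_id for a in registry if a.environment == "prod" and a.approved_by is None]
assert not unapproved, f"Agents live in production without approval: {unapproved}"
```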
Microsoft has recognised this gap. Microsoft Agent 365 brings a central agent registry, DLP for Copilot Studio agents, Zero Trust conditional access, shadow AI detection, and lifecycle approval flows into one place. Useful – but controls without a governance strategy are just features waiting to be misconfigured. The platform does not make the decision to govern. You do.
The point
Consumer AI fluency is not enterprise AI architecture. Knowing how to write a prompt is not the same as knowing how to govern an agent.
Every pattern above was avoidable. Not because the technology was bad – it was not – but because the questions were not asked early enough. Who can access this? What data does this touch? What happens if this fires when it should not? Who is responsible when it goes wrong?
Those questions are not harder to answer before you build than after. They are significantly easier.
If your AI rollout plan has a governance section, check when it starts. If it is after go-live, you already know what this article is about.
Build the guardrails before you build the agent.
About the author
I work at the intersection of enterprise architecture and AI-assisted Business Applications delivery. As a 3× Microsoft FastTrack Recognised Solution Architect, I do not treat governance as an abstract concern – it is something I build into every engagement, from the first design session.
If you are working through AI governance in your organisation and want to compare notes, I am always open to a conversation. Find me on LinkedIn or read more on my blog at mgrb.in.