Microsoft 365 Copilot Chat bug ignored data loss prevention policies for three months, processing confidential emails despite explicit security restrictions
Microsoft reported $10bn in AI product revenue for Q3, with Copilot contributing meaningfully to growth targets
NHS England confirmed deployment of the affected tool, though stated patient information was not exposed
Bug first surfaced in January but only received public disclosure three months later
A code error in Microsoft's workplace AI assistant has spent months quietly ignoring corporate email security settings, accessing and summarising messages explicitly marked as confidential. The tech giant confirmed it has fixed the bug in its Microsoft 365 Copilot Chat tool, which allowed the AI to process sensitive emails from users' drafts and sent folders despite data loss prevention policies designed to block exactly this kind of access.
Microsoft insists the breach was contained. According to a company spokesperson, the system 'did not provide anyone access to information they weren't already authorised to see'. Yet the very existence of sensitivity labels and data loss prevention policies suggests organisations wanted these messages ring-fenced from AI processing entirely, even for authorised users.
The bug reportedly first surfaced in January, according to tech outlet Bleeping Computer, which obtained internal Microsoft service alerts. That three-month gap between discovery and public disclosure raises uncomfortable questions about how the company handles security incidents in its enterprise products, particularly when those products are marketed explicitly on their superior security controls compared to consumer AI tools.
When 'secure by design' isn't
Microsoft 365 Copilot Chat sells itself as the enterprise-grade answer to workplace AI. Companies pay for it specifically because it promises stricter security protections than free consumer alternatives. The pitch is straightforward: all the productivity gains of generative AI, none of the data governance nightmares.
This incident undercuts that positioning. A support notice shared on the NHS England IT dashboard attributed the problem to a 'code issue', confirming that Britain's health service had deployed the tool. NHS England told the BBC that patient information had not been exposed and that processed emails remained with their original creators, though verifying such claims definitively after the fact presents obvious challenges.
What's particularly revealing is Microsoft's framing: the company stated that whilst 'access controls and data protection policies remained intact', the behaviour 'did not meet our intended Copilot experience, which is designed to exclude protected content'. Translation: the security infrastructure technically worked, but the AI simply ignored it.
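The distinction is worth making concrete. The sketch below is entirely hypothetical (the class, the label names, and every function are illustrative, not Microsoft's code): access control asks whether a user may read a message at all, while content exclusion asks whether a labelled message may enter an AI pipeline. The bug, as described, left the first gate intact while the second was effectively skipped.

```python
from dataclasses import dataclass

@dataclass
class Email:
    owner: str
    body: str
    sensitivity_label: str | None  # e.g. "Confidential", set by the organisation

# Hypothetical DLP rule: labels that should never reach the AI pipeline
EXCLUDED_LABELS = {"Confidential", "Highly Confidential"}

def user_can_read(user: str, email: Email) -> bool:
    # Access control: by Microsoft's account, this layer held -- Copilot only
    # ever saw mail the user was already authorised to read.
    return user == email.owner

def eligible_for_ai(email: Email) -> bool:
    # Content exclusion: the separate check the bug effectively bypassed.
    # Being authorised to read a message is not the same as that message
    # being cleared for AI summarisation.
    return email.sensitivity_label not in EXCLUDED_LABELS

def build_copilot_context(user: str, mailbox: list[Email]) -> list[Email]:
    # Both gates must pass before a message enters the model's context.
    return [m for m in mailbox if user_can_read(user, m) and eligible_for_ai(m)]
```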
The pressure cooker of AI deployment
The episode illustrates a widening gap between deployment velocity and organisational readiness. Nader Henein, a data protection and AI governance analyst at Gartner, described 'this sort of fumble' as unavoidable given how frequently vendors release new AI capabilities. According to Henein, organisations using these products often lack the tools needed to manage each iteration.
Under normal circumstances, he noted, enterprises would disable a problematic feature until proper governance caught up. But that measured approach has become 'near-impossible' because of what he called 'the torrent of unsubstantiated AI hype' creating pressure to adopt.
That pressure is real and measurable. Microsoft reported $10bn in AI product revenue for its fiscal third quarter, with Copilot contributing meaningfully to that figure. The company has staked considerable strategic capital on positioning itself as the workplace AI leader, racing against Google, Anthropic, and others to embed generative AI across enterprise software stacks. Features ship fast. Security testing, apparently, struggles to keep pace.
Enterprises are being asked to trust that AI systems won't accidentally hoover up sensitive information, whilst simultaneously being told to expect bugs because of rapid development cycles. That's a difficult position for risk managers in regulated industries.
Professor Alan Woodward, a cyber-security expert at the University of Surrey, argued the incident demonstrates why such tools should be private by default and opt-in only. 'There will inevitably be bugs in these tools, not least as they advance at break-neck speed,' he told the BBC, cautioning that even unintentional data leakage is bound to happen.
What enterprises should watch
Microsoft has rolled out what it describes as a configuration update globally for enterprise customers. But the incident sets a precedent worth noting: even when organisations implement explicit controls to exclude content from AI processing, those controls can fail silently for months.
CIOs evaluating workplace AI tools should be asking harder questions about testing protocols, disclosure timelines, and whether 'enterprise-grade security' means vendors have actually validated that their AI respects existing data governance policies. The NHS deployment is especially instructive: if a public sector organisation managing some of the most sensitive personal data in Britain was running a tool with this flaw, how many private enterprises were doing the same without realising it?
The competitive dynamics driving AI feature releases show no signs of slowing. Microsoft, Google, and others are embedding generative AI deeper into productivity software every quarter, adding capabilities that touch increasingly sensitive corporate systems. Each new feature represents another surface area where code issues could bypass security controls.
Enterprises that moved quickly on AI adoption to avoid being left behind must grapple with whether they've built sufficient governance infrastructure to audit what these tools are actually doing with their data. Microsoft's fix addresses this specific bug, but the underlying tension between deployment speed and security validation isn't going anywhere.
Demand vendor transparency on testing protocols and incident disclosure timelines before deploying workplace AI tools that access sensitive corporate data
Build internal governance infrastructure capable of auditing AI behaviour independently, rather than relying solely on vendor assurances that security controls are working as intended
Expect the tension between rapid AI feature deployment and security validation to intensify as Microsoft, Google, and competitors race to embed generative AI deeper into enterprise software stacks