Two hours. One prompt injection. 46.5 million internal chat messages extracted. On 14 January 2026, McKinsey’s internal AI system “Lilli” was compromised in what security experts now call the largest documented AI security incident in history. The breach forces every company running AI systems to re-examine its defences.
What Happened
The Timeline
McKinsey had launched “Lilli” in 2023 as an internal knowledge management tool. The system was based on a Large Language Model trained on McKinsey’s extensive knowledge base and linked to internal documents, consulting reports, and communication histories. Over 30,000 McKinsey consultants used Lilli daily for research, analysis, and document creation.
Phase 1: Reconnaissance The security researchers first identified the system’s architecture. Lilli used a Retrieval Augmented Generation (RAG) setup: the model accessed an extensive vector database indexing McKinsey’s entire institutional knowledge with every query.
Phase 2: Prompt Injection Through a multi-stage prompt injection attack, they managed to extract the system prompts and bypass the model’s security measures. The decisive factor was an indirect injection via a document disguised as an “internal research report.”
Phase 3: Data Exfiltration Once past the guardrails, the researchers could instrumentalise the RAG system to systematically retrieve data from the vector database. In just two hours, 46.5 million messages were extracted, including consulting documents, client correspondence, strategic analyses and internal communications.
What Was Exposed
The exposed data encompassed:
- Strategic consulting documents: Confidential analyses for Fortune 500 companies
- Client communication: Internal discussions about client engagements
- Financial information: Non-public financial data of McKinsey clients
- Personnel information: Internal evaluations and career data of McKinsey consultants
- M&A information: Confidential due diligence materials
The Technical Causes
The incident had several interlocking technical causes:
1. Insufficient Input Validation
Input validation for the RAG system was designed for classic injection attacks (SQL, XSS), not for LLM-specific prompt injection. The security architecture treated the LLM layer like a conventional application layer. A fundamental error.
2. Excessive RAG System Permissions
The RAG system had read access to virtually the entire data estate. There was no granular access control at the document level. When a consultant asked a question, the system could potentially access all indexed documents, not just those the user was authorised to view.
3. Missing Anomaly Detection
There was no monitoring that would have detected unusual access patterns. The exfiltration of millions of documents over a two-hour period generated no alerts.
4. No Knowledge Base Segmentation
Highly sensitive data (M&A, client data) was indexed in the same vector database as general information. There was no separation by confidentiality level.
Significance for Swiss Companies
Why This Incident Affects Everyone
The relevance of the McKinsey Lilli incident extends far beyond the consulting industry. It illustrates vulnerabilities that can exist in virtually any company with AI systems:
1. The RAG Problem Is Universal
Many Swiss companies deploy RAG systems, from customer support chatbots accessing knowledge bases to internal research tools. If these systems are not properly secured, a single prompt injection attack can expose the entire indexed data estate.
2. The Data Dimension Exceeds Traditional Breaches
In a traditional data breach, a database with defined content is compromised. With AI systems using RAG architecture, a single exploit can potentially access an organisation’s entire consolidated knowledge. The damage dimension is qualitatively different.
3. Swiss Financial Centre Particularly Exposed
Swiss banks, insurers and asset managers process highly sensitive financial data. Many of these institutions are experimenting with or already deploying AI systems. FINMA has not yet issued specific guidelines for AI security. The McKinsey incident could accelerate this.
4. nDSG and EU AI Act Create Liability Risks
Under the revised Swiss Data Protection Act (nDSG) and the EU AI Act, companies can be held liable for inadequate security measures in AI systems. A McKinsey-like incident at a Swiss company would result not only in reputational damage but also in significant legal consequences.
Industries at Elevated Risk
| Industry | Typical AI Applications | Specific Risk |
|---|---|---|
| Banking/Finance | Client advisory, risk assessment, compliance monitoring | Banking secrecy, FINMA regulation |
| Pharma | Research data analysis, patent research, clinical trial data | Trade secrets, patient data |
| Insurance | Claims assessment, underwriting, customer interaction | Personal health data |
| Legal advisory | Contract analysis, legal research, due diligence | Attorney-client privilege, client data |
| Public administration | Citizen services, document processing | Specially protected personal data |
The Regulatory Consequences
EU Perspective
The McKinsey incident falls in a phase where EU AI Act obligations for General Purpose AI (GPAI) have just come into force (August 2025). The EU Commission is examining whether:
- McKinsey’s Lilli should be classified as a high-risk AI system
- Transparency obligations for GPAI were complied with
- Adequate security tests were conducted before deployment
Fines under the EU AI Act (up to EUR 35 million or 7% of global annual turnover) create a new liability framework that could cause McKinsey considerable costs.
Swiss Perspective
Switzerland does not yet have a specific AI law, but existing regulation applies:
- nDSG: The revised Data Protection Act requires appropriate technical and organisational measures to protect personal data. An insufficiently secured AI system can constitute a violation.
- FINMA: The financial sector is subject to special requirements for operational resilience. AI systems with access to client data fall under these requirements.
- Sector-specific regulation: Healthcare (EPD), telecommunications, and other sectors have their own data protection requirements.
The Federal Council announced in February 2026 that it would present a draft for Swiss AI regulation modelled on the EU AI Act. The McKinsey incident has accelerated this discussion.
Lessons for Practice
What Companies Should Do Immediately
1. Review RAG Systems
If your company deploys a RAG system, whether for customer support, internal research or document management, you should immediately examine:
- What data is in the index? Is there segmentation by confidentiality level?
- Are access controls implemented at the document level?
- Are user queries checked for prompt injection patterns?
- Is there monitoring for anomalous access patterns?
2. Enforce Data Classification
The McKinsey incident could have been less severe if highly sensitive data had not been indexed in the same vector database as general information. Implement strict data classification:
- What data may flow into AI systems?
- What data must remain in separate, more strongly secured environments?
- How is compliance monitored?
3. Commission AI Security Tests
The McKinsey incident shows that traditional security tests (network pentests, webapp audits) do not cover AI-specific vulnerabilities. Companies need specialised AI red teaming assessments.
For a detailed technical analysis of the attack techniques and countermeasures, we recommend the article on the RedTeam Partners Blog, which explains the attack methodology and the protective measures to be derived from it in detail.
4. Expand the Incident Response Plan to Include AI Scenarios
Most incident response plans do not cover AI-specific incidents. Supplement your plan with scenarios such as:
- Prompt injection attack on customer-facing AI systems
- Data exfiltration via RAG systems
- Manipulation of AI-supported decision processes
- Shadow AI incident (employee exposes data via private AI tool)
Medium-Term Measures
5. Prepare EU AI Act Compliance
Even if Switzerland does not (yet) fall directly under the EU AI Act, companies with EU ties, whether through clients, subsidiaries or market presence, should take the August 2026 deadline for high-risk AI systems seriously.
6. Build AI Governance Structures
The McKinsey incident shows that AI security is not solely a technical task. Governance structures are needed that define responsibilities, processes, and control mechanisms.
7. Vendor Assessment for AI Services
If you use AI services from third-party providers, review their security architecture. Ask about:
- How is the RAG system secured?
- What prompt injection protection measures are implemented?
- Is there tenant separation?
- What certifications does the provider hold?
The Market Responds
Acceleration of the AI Security Industry
The McKinsey incident has caused demand for specialised AI security services to surge. According to industry observers, enquiries for AI red teaming in Switzerland have more than tripled since January 2026.
At the same time, numerous providers are entering the market offering “AI security” without deep expertise. For companies, it is therefore all the more important to look for demonstrable qualifications such as CREST certification when selecting a provider.
A detailed guide to selecting an AI red teaming provider can be found in our guide AI Red Teaming Providers in Switzerland: Selection Criteria 2026.
Regulatory Responses
- FDPIC (Federal Data Protection and Information Commissioner): Has announced a statement on AI systems and data protection obligations
- FINMA: Is examining expanded requirements for AI systems in the regulated financial sector
- EU Commission: Has included the case as a reference in the ongoing development of AI Act guidelines
What Companies Can Learn: Five Principles
Principle 1: Zero Trust for AI Systems
Treat AI systems like any other external service, applying the principle of “never trust, always verify.” Every interaction with the model must be validated, regardless of whether it originates from an internal user.
Principle 2: Least Privilege for RAG Access
RAG systems should be configured according to the principle of least privilege. When a marketing employee uses the system, it must not be able to access M&A documents, even if the vector database contains them.
Principle 3: Defence in Depth
Do not rely on a single security layer. Combine:
- Input validation (prompt injection detection)
- Output filtering (preventing data exfiltration)
- Access controls (document-based permissions)
- Monitoring (anomaly detection)
- Rate limiting (restricting query volume)
Principle 4: Assume Breach
Plan for the eventuality that your AI security measures fail. Ensure that:
- Damage remains limited (through segmentation)
- You detect the incident quickly (through monitoring)
- You can respond rapidly (through prepared incident response processes)
- You learn from the incident (through post-incident analysis)
Principle 5: Regular Security Testing
AI systems are constantly evolving, and so are the attack methods targeting them. A one-time security test is not sufficient. Plan for at least annual AI red teaming assessments, more frequently for customer-critical systems.
What This Incident Means for the Industry
The McKinsey Lilli incident is not simply another data breach. It marks a turning point in the discussion about AI security. For the first time, an AI-specific attack, not a classic infrastructure hack, has hit one of the largest and best-secured corporations in the world.
For Swiss companies, the message is clear: AI security is no longer an optional add-on. It belongs on the management agenda, in the IT security budget, and in the audit plan.
The good news: the lessons from the McKinsey incident are concrete and actionable. Those who act now can secure their AI systems before a similar incident occurs.
For a broader assessment of AI security risks for Swiss companies, we recommend our guide AI Security for Companies: What You Need to Know in 2026.
Alpine Excellence is an independent editorial platform. This article does not constitute investment or legal advice.