In January 2026, McKinsey’s internal AI assistant “Lilli” was compromised through a prompt injection attack, exposing 46.5 million chat messages including client data and strategic documents. If a firm with McKinsey’s security budget can be breached through its own AI tool, the question for every Swiss company deploying LLMs is not whether AI-specific risks exist, but how quickly they can be addressed. Every AI integration opens new attack surfaces that differ fundamentally from traditional IT security risks.

This guide provides an independent assessment of the most critical AI security risks for Swiss companies, with concrete recommendations for decision-makers.

The New Threat Landscape: Why AI Security Is Different

Traditional cybersecurity protects systems from unauthorised access. AI security must additionally address a fundamentally new attack category: the manipulation of models through their own input layer. An LLM is not a deterministic system. It interprets inputs, and that very interpretation can be exploited.

The consequence: firewalls and endpoint protection are of little use when the attack vector is a cleverly worded sentence in a customer chat.

The McKinsey Lilli Incident as a Wake-Up Call

In January 2026, McKinsey’s internal AI system “Lilli” was compromised. Through a prompt injection attack, security researchers managed to extract 46.5 million internal chat messages within , including strategic consulting documents, client data, and confidential analyses.

The incident illustrates three central points:

  1. Even world-class organisations are vulnerable. McKinsey invests heavily in IT security. Yet the AI-specific attack surface was not adequately protected.
  2. The damage dimension is enormous. In traditional breaches, individual databases are compromised. With AI systems, a single exploit can expose an organisation’s entire knowledge base.
  3. Regulatory consequences follow. McKinsey now faces investigations in multiple jurisdictions, including the EU and Switzerland.

For a detailed technical analysis of the incident and its implications, we recommend the article on cybersecurityswitzerland.ch.

Prompt Injection: The Number One Vulnerability

Prompt injection is the most significant attack vector against LLM-based systems. It involves hiding instructions in user inputs that cause the model to bypass its security guidelines.

Direct Prompt Injection

A user enters direct instructions that override the model’s system prompts. Example: a customer chatbot is supposed to answer only product questions but is manipulated through clever phrasing into revealing internal pricing information or system configurations.

Indirect Prompt Injection

Even more dangerous: attackers place hidden instructions in documents, emails, or websites that the AI system processes. The model “reads” these instructions and executes them without the user noticing.

The Numbers Are Alarming

According to the OWASP LLM Top 10 Report 2025, 73% of all production LLM deployments are susceptible to at least one form of prompt injection. This figure refers to systems tested in real , not laboratory experiments.

What does this mean for Swiss companies?

  • Every AI-powered customer touchpoint is potentially a vulnerability
  • Internal AI tools that access corporate data can be abused for data exfiltration
  • Automated workflows with LLM components can be manipulated to trigger unintended actions

You can find a detailed technical explanation of prompt injection attacks with Swiss case studies on cybersecurityswitzerland.ch.

AI Tools as a Business Risk

Claude Code and the New Productivity

Claude Code, Anthropic’s AI assistant for the command line, has evolved beyond pure software development into a versatile business tool. Companies use it for:

  • Data analysis and reporting: financial reports, market analyses, competitive comparisons
  • Document processing: contracts, compliance documents, internal policies
  • Communication: email drafts, client correspondence, LinkedIn outreach
  • Code and automation: scripts for CRM integrations, data migrations, API connections

This versatility carries significant security implications. When an employee processes confidential financial data through an AI tool, the question arises: where is this data stored? Who has access? Is it used for training?

Shadow AI: The Invisible Risk

“Shadow AI” refers to the use of AI tools by employees without the knowledge or approval of the IT department. The numbers are concerning:

  • 67% of employees in knowledge-based professions use AI tools at work
  • Only 18% do so with explicit approval or knowledge of the IT department
  • This means: in most Swiss companies, corporate knowledge is already flowing through AI systems over which the company has no control

The most common Shadow AI scenarios:

ScenarioRiskFrequency
Contract texts in ChatGPT for summarisationConfidential business information to third-party providersVery common
Financial data in Claude for analysisRegulatory-relevant data outside controlled environmentsCommon
Customer data for outreach automationGDPR/nDSG violationsCommon
Code with API keys in CopilotCredentials in training dataMedium
Internal strategy documents for summarisationCompetitively critical information exposedCommon

API Key Management

Many AI integrations require API keys that grant extensive access to corporate systems. The most common mistakes:

  • API keys committed to code repositories (publicly on GitHub)
  • Single API keys shared across teams
  • No rotation or expiration dates for API keys
  • No monitoring of API usage for anomalous patterns

Regulatory Requirements: EU AI Act and Switzerland

The EU AI Act: Timeline and Relevance

The EU AI Act entered into force on 1 August 2024. The transition periods are staggered:

RegulationDeadlineStatus
Prohibitions for unacceptable AI risksFebruary 2025In force
Transparency obligations for General Purpose AIAugust 2025In force
Obligations for high-risk AI systemsAugust 2026Deadline running
Full enforcementAugust 2027Pending

Significance for Swiss Companies

Even though Switzerland is not an EU member, the EU AI Act directly affects Swiss companies:

  • Market presence in the EU: Anyone offering AI-powered products or services in the EU is subject to the AI Act
  • EU clients: Swiss service providers using AI for EU clients must demonstrate compliance
  • Autonomous adoption: Switzerland is expected to develop its own AI regulation based on the EU AI Act
  • Competitiveness: Companies that operate EU AI Act-compliant today will have a market advantage tomorrow

Consequences of Non-Compliance

Fines under the EU AI Act are substantial:

  • Up to EUR 35 million or 7% of global annual turnover for prohibited AI practices
  • Up to EUR 15 million or 3% of turnover for violations of high-risk requirements
  • Up to EUR 7.5 million or 1.5% for incorrect information to authorities

When Your Company Needs AI Red Teaming

AI red teaming is the systematic testing of AI systems for security vulnerabilities through simulated attacks. Not every company needs this immediately, but the following indicators suggest it is warranted:

Immediate Action Required

  • You deploy LLMs with customer contact (chatbots, automated email responses, customer portals)
  • Your AI systems have access to confidential data (financial data, customer data, strategic documents)
  • You process personal data through AI (GDPR/nDSG relevance)
  • You offer AI-powered products in the EU market (AI Act compliance)

Medium-Term Action Required

  • You use AI internally for decision support (HR, lending, risk assessment)
  • Your supply chain contains AI components (supplier tools, automated ordering systems)
  • You are planning major AI investments (test before scaling)

No Immediate Need (but Awareness Required)

  • You use AI only passively (search functions, recommendation systems without sensitive data)
  • Your AI usage is limited to isolated standard tools without data access

What AI Red Teaming Covers

A professional AI red teaming assessment typically examines:

1. Prompt Injection Testing

  • Direct and indirect injection attacks
  • Multi-turn attacks across multiple interactions
  • Payload injection via documents and data sources

2. Data Exfiltration

  • Can training data or system prompts be extracted through clever prompts?
  • Can access to connected databases be gained through the AI system?
  • Do side-channel attacks via the API work?

3. Jailbreaking and Guardrail Bypass

  • Can the model’s safety features be circumvented?
  • Can content filters be bypassed?
  • Is the role separation (System/User/Assistant) resilient?

4. Supply Chain Analysis

  • Are the models and libraries used trustworthy?
  • Are there known vulnerabilities in the AI infrastructure?
  • How secure are third-party integrations?

5. Compliance Assessment

  • Does the system meet EU AI Act requirements?
  • Are transparency obligations fulfilled?
  • Is the documentation adequate?

Provider Selection: What to Look For

The market for AI security services in Switzerland is still young. This makes careful provider selection all the more important. Here are the decisive criteria:

CREST Certification

CREST (Council of Registered Ethical Security Testers) is the international gold standard for security testers. CREST certification means:

  • Verified technical competence of testers
  • Proven methodology and processes
  • Regular recertification
  • Adherence to an ethics code

Why this matters: AI red teaming requires access to sensitive systems and data. A CREST-certified provider offers assurance that this access is handled responsibly.

OWASP LLM Top 10 Expertise

Providers should demonstrate that they work with the OWASP LLM Top 10 framework. This covers:

  1. Prompt Injection
  2. Insecure Output Handling
  3. Training Data Poisoning
  4. Model Denial of Service
  5. Supply Chain Vulnerabilities
  6. Sensitive Information Disclosure
  7. Insecure Plugin Design
  8. Excessive Agency
  9. Overreliance
  10. Model Theft

EU AI Act Compliance Knowledge

The provider should not only test technically but also cover the regulatory dimension. Ask about:

  • Experience with AI Act conformity assessments
  • Knowledge of risk categorisation
  • Ability to create compliance roadmaps

Industry-Specific Experience

AI security in the financial sector differs from healthcare or e-commerce. The ideal provider understands the specific regulatory and operational requirements of your industry.

Providers in Switzerland

The Swiss market for specialised AI security services is manageable. Many traditional cybersecurity firms are expanding their portfolio to include AI topics, but few have deep expertise in LLM security.

RedTeam Partners is one of the few providers in Switzerland with CREST certification and a specific focus on AI red teaming. The company was founded by former offensive security specialists and offers both technical AI red teaming and strategic AI security consulting. Their methodology is based on the OWASP LLM Top 10 framework and explicitly addresses EU AI Act requirements.

For a detailed market overview and selection criteria, we recommend our comparison guide AI Red Teaming Providers in Switzerland.

Cost Framework

Costs for AI security measures vary significantly depending on scope and complexity:

MeasureTypical Cost RangeDuration
AI Security Audit (basic)CHF 8,000 – 15,0003–5 days
AI Red Teaming (standard)CHF 15,000 – 35,0005–10 days
AI Red Teaming (complete)CHF 35,000 – 80,00010–20 days
EU AI Act Compliance AssessmentCHF 10,000 – 25,0005–10 days
Ongoing AI security monitoringCHF 3,000 – 8,000/monthContinuous

Detailed cost comparisons for penetration tests and security audits can be found on cybersecurityswitzerland.ch.

Recommendations for Swiss Companies

Implement Immediately (0–30 Days)

  1. Inventory AI usage: Record all AI tools and systems , both official and unofficial
  2. Create a Shadow AI policy: Define clear rules for AI usage and communicate them
  3. API key audit: Review all active API keys, their permissions, and usage
  4. Data classification: Determine which data may flow into which AI systems

Short-Term (1–3 Months)

  1. Adopt an AI security policy: Formal policy for AI use in the company
  2. Employee training: Awareness training for AI-specific risks
  3. Supplier assessment: Review the security practices of your AI providers
  4. Initial security tests: Have your customer-facing AI systems externally tested

Medium-Term (3–12 Months)

  1. AI red teaming: thorough security review of all AI systems
  2. EU AI Act roadmap: Create a compliance plan for the August 2026 deadline
  3. Incident response plan: Integrate AI-specific scenarios into your IR plan
  4. Continuous monitoring: Implement automated monitoring of your AI systems

Checklist: AI Security for Decision-Makers

  • Is a complete inventory of all AI tools and systems available?
  • Is there a formal AI usage policy?
  • Have employees been trained on AI security risks?
  • Are API keys securely managed and regularly rotated?
  • Is it clearly defined which data may flow into AI systems?
  • Have customer-facing AI systems been tested for prompt injection?
  • Is there a compliance plan for the EU AI Act?
  • Is AI security part of the incident response plan?
  • Are AI suppliers regularly audited for security?
  • Is there a regular AI security testing cycle?

Further Resources

Where This Leaves Swiss Companies

AI security is no longer a future topic. It is an operational necessity. The McKinsey Lilli incident has shown that even excellently positioned organisations are vulnerable. For Swiss companies, the EU AI Act adds an additional regulatory dimension.

The good news: with a systematic approach, AI risks can be effectively managed. The first step is always the inventory: knowing which AI systems are in use and what data they process. Building on that, targeted security measures can be implemented.

Those who invest in AI security today are not only protecting their data and reputation but also securing their competitiveness in an increasingly regulated environment.

Last updated: March 2026. This guide is regularly reviewed and updated. Alpine Excellence is an independent editorial platform and receives no compensation for provider recommendations.