Claude Code creates LinkedIn outreach sequences for your marketing team. ChatGPT summarises the quarterly reports. Copilot writes the API integration for your CRM. What was unthinkable two years ago is now everyday work in Swiss companies. This is also a security risk that most IT departments have not yet grasped.
This guide analyses the concrete security risks arising from enterprise use of AI tools and provides practical recommendations for effective risk management.
The Reality: AI Tools Are Already Everywhere
Far More Than Coding Assistants
Public perception of tools like Claude Code or GitHub Copilot is strongly focused on software development. The reality in companies looks different. AI tools are now used for a broad spectrum of business processes:
Sales and Marketing
- LinkedIn prospecting and outreach automation
- Lead qualification and CRM data enrichment
- Content creation for social media, blogs, and newsletters
- Competitive analyses and market research
Finance and Controlling
- Summarising and analysing quarterly reports
- Budget comparisons and variance analyses
- Financial forecasting and scenario modelling
- Invoice processing and categorisation
Human Resources
- Application screening and pre-selection
- Drafting job advertisements
- Creating onboarding documentation
- Evaluating employee surveys
Legal and Compliance
- Reviewing and summarising contract drafts
- Monitoring regulatory changes
- Creating compliance documentation
- Due diligence research
IT and Operations
- Writing scripts and automations
- Creating system documentation
- Log analyses and troubleshooting
- Generating data migration scripts
The problem: each of these use cases potentially involves sensitive corporate data. And in most cases, neither the IT department nor management is aware that this data is flowing through external AI systems.
Shadow AI: The Invisible Threat
The Numbers
The discrepancy between official AI strategy and actual AI usage in companies is striking:
- 67% of employees in knowledge-based professions regularly use AI tools at work
- Only 18% do so with explicit approval or knowledge of the IT department
- This means: over half of all knowledge workers use AI tools at work without the company’s knowledge
These figures come from a survey of European companies with 50 to 5,000 employees. For Switzerland, the values are likely at the upper end due to the high degree of digitalisation.
Why Shadow AI Is So Dangerous
Shadow AI differs from traditional shadow IT in one crucial respect: data does not merely leave the controlled environment; it is processed by a system that can “learn” from it. Specifically:
-
Data in training data: Depending on the provider and configuration, inputted data may flow into training data. This means confidential business information could theoretically appear in responses to other users.
-
No audit trails: When an employee enters confidential data into a personal ChatGPT account, there is no way for the company to trace this.
-
No data classification: Employees make ad hoc decisions about which data is “safe enough” , without formal criteria.
-
No version control: AI-generated outputs are often fed directly into business processes without documenting that AI was involved.
Known Vulnerabilities and CVEs
Security research into AI tool vulnerabilities has intensified significantly in 2025 and 2026. Two particularly relevant CVEs illustrate the risks:
CVE-2025-59536: Prompt Injection via Documents
This vulnerability affects AI systems that process documents (PDF analysis, email summaries, document chat). Through specially crafted documents, attackers can:
- Read the AI model’s system prompts
- Instruct the model to extract sensitive data from the context and send it to external endpoints
- Bypass the model’s guardrails and trigger unforeseen actions
Affected enterprise scenarios:
- An employee uploads a “job application” (PDF) into an AI-powered HR tool. The document contains hidden prompt injection instructions
- An email with a manipulated attachment is summarised by an AI assistant. Internal context data is exfiltrated in the process
- A counterparty’s contract draft contains hidden instructions that compromise the reviewing company’s AI system
CVE-2026-21852: API-Level Vulnerability in AI Toolchains
This more recent vulnerability affects the API layer of AI toolchains and enables:
- Unauthorised access escalation via manipulated API calls
- Bypassing rate limiting and access controls
- Extraction of system configurations and model parameters
Relevance for companies:
- Affects companies that have integrated AI APIs into their own applications
- Particularly critical in multi-tenant setups where multiple customers use the same system
- Can lead to exfiltration of other tenants’ data
Both CVEs have been patched by the affected providers, but the underlying attack patterns remain relevant and will continue to appear in new variants.
Tool-Specific Risk Profiles
ChatGPT / OpenAI API
| Aspect | Risk Assessment | Details |
|---|---|---|
| Data processing | Medium | Enterprise plans with training data opt-out available |
| API security | Medium | Solid API security, but misconfigurations common |
| Compliance | Medium-High | SOC 2 certified, GDPR compliance documented |
| Shadow AI risk | Very high | Broad adoption, easy access via consumer accounts |
| Prompt injection | High | Susceptible to multi-turn and indirect injection |
Claude / Anthropic API
| Aspect | Risk Assessment | Details |
|---|---|---|
| Data processing | Low-Medium | Strict data policy, no use for training via API |
| API security | Low-Medium | Solid API security architecture |
| Compliance | Medium | SOC 2, growing compliance documentation |
| Shadow AI risk | High | Growing adoption, Claude Code as CLI tool hard to control |
| Prompt injection | Medium | Constitutional AI provides additional protection, but not immune |
GitHub Copilot
| Aspect | Risk Assessment | Details |
|---|---|---|
| Data processing | Medium | Business plans with data protection guarantees |
| Code security | High | May suggest insecure code that gets adopted |
| Secrets exposure | High | Risk of API keys and credentials flowing into suggestions |
| Supply chain | High | May suggest vulnerable or outdated code from training data |
| Compliance | Medium | Licence compliance for generated code unclear |
Microsoft Copilot (M365)
| Aspect | Risk Assessment | Details |
|---|---|---|
| Data processing | Low-Medium | Processing within the M365 tenant |
| Access rights | Very high | Inherits existing M365 permissions, which reveals oversharing |
| Compliance | Low-Medium | Integrated into existing Microsoft compliance |
| Shadow AI risk | Medium | Tied to enterprise licence, therefore more controllable |
| Data leakage | Medium | May inadvertently aggregate data from other areas of the tenant |
Supply Chain Risks
The AI Ecosystem as an Attack Surface
Modern AI implementations rarely consist of a single tool. A chain of components is typical:
Input Data → Pre-processing → AI Model (API) → Post-processing → Output → Business Process
↑ ↑ ↑
Third-party Model Provider Plugins/
Libraries Integrations
Every component in this chain is a potential attack point:
- Model supply chain: The origin and integrity of the model used. Has it been tampered with? Does it contain backdoors?
- Library dependencies: Python packages, JavaScript modules, and other dependencies can be compromised
- Plugin ecosystem: Many AI tools support plugins that receive extensive permissions
- Data sources: RAG systems (Retrieval Augmented Generation) are only as secure as their data sources
Real Attack Vectors
- Manipulated models on Hugging Face: Models on public platforms can be trojanised
- Poisoned training data: Targeted manipulation of training data leading to predictably faulty behaviour
- Compromised plugins: Third-party plugins that collect more data than declared
- API intercepting: Man-in-the-middle attacks on API communication between enterprise systems and AI providers
Practical Recommendations: Building AI Tool Governance
Step 1: Inventorisation (Week 1–2)
Create a complete inventory of all AI tools in the company:
- Official tools: Licensed and approved AI services
- Shadow AI: Anonymous survey of employees about unofficial AI usage
- Embedded AI: AI features in existing software (e.g., Microsoft Copilot in M365, Salesforce Einstein)
- API integrations: Custom integrations with AI APIs
Step 2: Risk Assessment (Week 2–4)
For each identified tool, assess:
- What data flows into the tool?
- How is this data processed and stored by the provider?
- What contractual data protection guarantees exist?
- Who has access to the tool and with what permissions?
- Are there audit logs?
Step 3: Create Policy (Week 3–5)
An AI usage policy should cover at minimum the following points:
Data Classification for AI Usage
| Data Class | Description | AI Usage Permitted? |
|---|---|---|
| Public | Already published information | Yes, all tools |
| Internal | General business information | Yes, approved tools with enterprise licence |
| Confidential | Customer data, financial data, strategic documents | Only approved tools with DPA and enterprise licence |
| Strictly confidential | M&A documents, trade secrets, special personal data | No, no external AI processing |
Approved Tools and Configurations
- List of approved AI tools with specific configuration requirements
- Specifications for API key management (rotation, permissions, monitoring)
- Requirements for enterprise licences vs. personal accounts
Usage Rules
- No personal data without prior DPA review
- No entry of credentials, API keys, or passwords
- AI-generated outputs must be labelled as such
- Four-eyes principle for AI-generated decision bases
Step 4: Technical Controls (Week 4–8)
- Network level: Monitoring data traffic to AI provider APIs
- Endpoint level: Application whitelisting for AI tools on corporate devices
- API level: Central API gateway for all AI API calls with logging and rate limiting
- DLP integration: Data loss prevention rules for AI data flows
Step 5: Training and Awareness (Ongoing)
- Onboarding module on AI security for new employees
- Quarterly updates on new risks and policy changes
- Practical examples: “What happens when I enter this data into ChatGPT?”
- Positive communication: promote safe AI use, do not prohibit
API Key Security: An Underestimated Risk
API keys are the entry tickets to AI services, and they are treated with alarming carelessness.
Common Mistakes
-
Keys in source code: API keys are written directly into code and end up publicly via Git repositories. GitHub scans reveal thousands of exposed API keys daily.
-
Shared keys: A single API key is used across teams. In the event of compromise, there is no way to revoke access granularly.
-
No rotation: API keys are created once and never renewed. In the event of an undetected leak, the attacker has permanent access.
-
Excessive permissions: API keys receive full access rights although only limited permissions are needed.
-
No monitoring: API usage is not monitored. Anomalies such as suddenly increased volume or access from unusual regions go undetected.
Best Practices
- Secrets management: Use dedicated tools like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault
- Granular keys: Separate API keys per application and environment with minimal permissions
- Automatic rotation: Keys should be rotated at least quarterly
- Usage monitoring: Alerting on anomalous usage patterns
- Emergency revocation: Process for immediate deactivation of compromised keys
Cost Estimation: What AI Tool Governance Costs
| Measure | One-Time Costs | Ongoing Costs |
|---|---|---|
| Inventorisation and risk assessment | CHF 5,000 – 15,000 | n/a |
| Policy creation | CHF 3,000 – 8,000 | CHF 1,000 – 2,000/year (updates) |
| Technical controls (setup) | CHF 10,000 – 30,000 | CHF 2,000 – 5,000/month |
| Training programme | CHF 5,000 – 12,000 | CHF 2,000 – 5,000/year |
| External AI security test | CHF 15,000 – 35,000 | CHF 15,000 – 35,000/year |
| Total range (SME) | CHF 38,000 – 100,000 | CHF 30,000 – 80,000/year |
This investment is quickly put into perspective when considering the potential damages. The McKinsey Lilli incident (in which 46.5 million messages were exposed) will likely cost the company many times more: legal costs, reputational damage, customer attrition, and regulatory fines.
When External Expertise Is Needed
An AI security audit by a specialised provider is advisable when:
- Your company deploys AI tools with customer contact
- You have integrated AI APIs into your own applications
- You process personal or regulated data through AI
- You need to meet compliance requirements (EU AI Act, nDSG, industry-specific regulation)
- You want to assess risks before a major AI investment
When selecting a provider, look for CREST certification and demonstrated experience with LLM security testing. A detailed comparison can be found in our guide AI Red Teaming Providers in Switzerland.
Governance, Not Prohibition
AI tools deliver real productivity gains. Prohibiting their use is neither realistic nor sensible. But using them without adequate governance structures is a risk no company can afford.
The key lies in a pragmatic approach: enable safe usage, make risks transparent, and set clear guardrails. Technical controls alone are not enough; it requires a combination of policies, training, technology, and regular review.
Companies that build solid AI governance today are creating not only security but also the foundation for responsible and competitive AI use.
Last updated: March 2026. This guide is regularly reviewed and updated. Alpine Excellence is an independent editorial platform and receives no compensation for provider recommendations.