How This Briefing Works
This report opens with key findings, then maps the gaps between what Deepseek discloses and what BLACKOUT observed at runtime. From there: what it means for your organization, what to do about it, and the detection data and evidence underneath.
Key Findings
Claims vs. Observed Behavior
Claims-vs-observed comparison: pending. Vendor disclosures are currently unknown; populating this section requires claims extraction via CDT.
What This Means For You
What To Do About It
Role-specific actions based on observed behavior
If You Use Deepseek
- Audit Deepseek ML data collection to verify behavioral patterns are not captured for external model training
- Disable cross-domain visitor profiling and require strict property-specific ML model isolation
- Review DPA for AI model training data restrictions and prohibit behavioral data sharing with external ML systems
- Implement consent-conditional Deepseek initialization to prevent pre-acceptance behavioral capture
- Establish data retention limits to prevent long-term ML training dataset accumulation
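Consent-conditional initialization, recommended above, amounts to a loader gate: the vendor script is injected only after an affirmative opt-in, never by default. The sketch below illustrates the pattern; the function names (`createConsentGate`, `loadVendorScript`) are ours, not part of any Deepseek API.

```javascript
// Minimal sketch of consent-conditional vendor initialization.
// Names are illustrative assumptions, not a real Deepseek API.

// Returns a consent handler that runs the loader exactly once,
// and only after an explicit "granted" signal.
function createConsentGate(loadVendorScript) {
  let loaded = false;
  return function onConsentChange(state) {
    // "granted" must be an affirmative user choice, never a default.
    if (state === "granted" && !loaded) {
      loaded = true; // guard against double-injection
      loadVendorScript();
    }
  };
}

// In a real page the loader would inject the vendor <script> tag, e.g.:
//   const s = document.createElement("script");
//   s.src = "https://vendor.example.invalid/sdk.js";
//   document.head.appendChild(s);
```

In practice the returned handler would be wired to your consent management platform's change callback, so the tag manager never sees the vendor script before acceptance.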
If You're Evaluating Deepseek
- Request Deepseek deployment without cross-domain visitor profiling or external ML model data sharing
- Require contractual guarantee that behavioral data remains property-specific and does not train shared ML models
- Verify Deepseek does not employ automated decision-making that affects user experience without consent
- Assess alternative AI analytics platforms with transparent ML model governance and data isolation guarantees
- Demand pricing concessions reflecting restricted deployment without cross-property ML training data collection
Negotiation Leverage
- VRS 80 classification with 100% CAC subsidization justifies 40% discount if cross-domain ML training is permanently disabled
- 60% legal tail risk from AI automated decision-making demands indemnification for GDPR Article 22 violations
- Require contractual guarantee that behavioral data does not train ML models accessible to external demand networks
- Request quarterly attestation that AI models remain property-specific and do not feed cross-customer intent prediction
- Negotiate data processing transparency including ML model architecture disclosure and training data isolation verification
Runtime Detections
BLACKOUT observed this vendor's JavaScript executing in a live browser and classified each hostile behavior using our BTI-C (Behavioral Threat Intelligence, Capability) taxonomy. These are not theoretical risks: each code below was triggered by something we watched the vendor's script actually do.
Evasion infrastructure, auditor bypass
Impact: Deepseek tracking infrastructure operates through background ML data collection that continues after consent rejection.
Keystroke/mouse tracking
Impact: Mouse movements, scroll patterns, and interaction timing captured to train AI engagement prediction models.
Identity stitching
Impact: Visitor behavior profiles synchronized across properties to build comprehensive cross-site training datasets for ML models.
Device identification
Impact: Browser and device fingerprinting used to reconnect visitors across sessions for longitudinal ML model training data collection.
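The keystroke/mouse tracking finding above rests on watching which behavioral events a script subscribes to at runtime. A hedged sketch of that general technique, not BLACKOUT's actual instrumentation: wrap `EventTarget.prototype.addEventListener` before any vendor code runs and record registrations for behavioral event types.

```javascript
// Sketch of one runtime-detection technique: flag subscriptions to
// behavioral events. Illustrative only; not BLACKOUT's implementation.

const BEHAVIORAL_EVENTS = new Set(["mousemove", "keydown", "scroll", "wheel"]);
const observedListeners = [];

const originalAddEventListener = EventTarget.prototype.addEventListener;
EventTarget.prototype.addEventListener = function (type, listener, options) {
  if (BEHAVIORAL_EVENTS.has(type)) {
    // A real harness would also record the registering script's URL
    // (e.g. from a captured stack trace) and the target element.
    observedListeners.push(type);
  }
  return originalAddEventListener.call(this, type, listener, options);
};
```

Once the wrapper is installed, any later registration by third-party code, such as a `mousemove` listener added for engagement modeling, lands in `observedListeners` and can be attributed and classified.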
IOC Manifest
Indicators of compromise across 3 categories. Use for detection rules, CSP policies, or Pi-hole blocklists.
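The manifest's domain indicators translate directly into blocking rules. A minimal sketch, using placeholder domains rather than the manifest's real indicators, that emits Pi-hole-compatible hosts-format lines:

```javascript
// Convert IOC domains into Pi-hole-compatible blocklist lines
// (hosts format). The example domains are placeholders; substitute
// the actual indicators from the IOC manifest.

function toBlocklist(iocDomains) {
  return iocDomains.map((domain) => `0.0.0.0 ${domain}`).join("\n");
}

const exampleIocs = ["tracker.example.invalid", "sync.example.invalid"];
console.log(toBlocklist(exampleIocs));
```

For CSP, which is allowlist-based rather than blocklist-based, the same domain list is used the other way around: verify none of the indicators appear in your `script-src` or `connect-src` directives.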
Ecosystem & Supply Chain
Evidence Artifacts
Artifacts collected during analysis, available with evidence-tier access.
- Complete network capture with all requests and responses
- 51 detection signatures across scripts, domains, cookies, and network endpoints