How This Briefing Works
This report opens with key findings, then maps the gaps between what OpenAI discloses and what BLACKOUT observed at runtime. From there: what it means for your organization, what to do about it, and the detection data and evidence underneath.
Key Findings
Pre-Consent Activity
OpenAI code was observed loading and executing before user consent was obtained on 92% of the sites where it was detected.
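A finding like this can be reproduced from a browser network log by comparing request timestamps against the consent-grant timestamp. The sketch below assumes a hypothetical, simplified log format; BLACKOUT's actual detection pipeline is not public.

```python
from dataclasses import dataclass

@dataclass
class Request:
    url: str
    timestamp_ms: int  # when the browser issued the request

def pre_consent_requests(requests, consent_timestamp_ms, vendor_domains):
    """Return vendor requests issued before the user granted consent."""
    return [
        r for r in requests
        if r.timestamp_ms < consent_timestamp_ms
        and any(d in r.url for d in vendor_domains)
    ]

# Example: consent granted at t=5000ms; one vendor call fires before it
log = [
    Request("https://api.openai.com/v1/chat", 1200),
    Request("https://example.com/app.js", 1300),
    Request("https://api.openai.com/v1/chat", 7000),
]
early = pre_consent_requests(log, 5000, ["openai.com"])
print(len(early))  # → 1
```

Any non-empty result indicates the vendor executed before consent; the 92% figure above is the share of detected sites where this list was non-empty.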
Claims vs. Observed Behavior
Status: pending. The vendor's disclosed claims have not yet been extracted (currently marked "Unknown"); claims extraction via CDT is required before disclosures can be compared against observed runtime behavior.
What This Means For You
What To Do About It
Role-specific actions based on observed behavior
If You Use OpenAI
- →Review OpenAI DPA: confirm whether API interaction data (prompts, responses, usage patterns) is contractually barred from use in model training and from retention beyond request fulfillment
- →Audit API integrations: identify sensitive data types being transmitted through prompts including customer information, trade secrets, and confidential business intelligence
- →Query OpenAI: request complete documentation of data retention policies, model training data inclusion criteria, and mechanisms for verifying prompt deletion
- →Assess competitive exposure: determine if proprietary prompt engineering and domain-specific implementations could be reverse-engineered from model behaviors trained on your API usage
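The audit step above can start with a simple pattern scan over prompt logs. This is a minimal sketch: the pattern set is hypothetical and should be replaced with rules matching your own data classification policy.

```python
import re

# Hypothetical patterns; tune to your organization's data classification policy
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def audit_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data found in a single prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

hits = audit_prompt("Summarize the complaint from jane.doe@example.com, SSN 123-45-6789")
print(sorted(hits))  # → ['email', 'ssn']
```

Regex scanning catches structured identifiers only; free-text trade secrets and confidential business intelligence require classification-aware review on top of this.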
If You're Evaluating OpenAI
- →Demand contractual zero-retention guarantee: all prompts, responses, and API interaction metadata must be purged immediately after request completion with no model training inclusion
- →Require monthly certification that no customer API data has been used for model training, benchmarking, or any purpose beyond direct request fulfillment
- →Negotiate intellectual property protections: proprietary prompts and implementation strategies must receive trade secret protections preventing competitive intelligence harvesting
- →Replace with self-hosted LLMs (Llama, Mistral) or privacy-preserving AI providers (Anthropic with explicit no-training guarantees) that eliminate competitive intelligence exposure through model training contribution
Negotiation Leverage
- →OpenAI API integration processes end-user interactions without adequate consent disclosures, triggering GDPR/CPRA data processing obligations. Users interacting with AI features have no visibility into OpenAI backend processing. Legal exposure: Our counsel requires written confirmation that all end-user data processed through APIs receives explicit consent disclosures and that OpenAI qualifies as a legitimate service provider rather than an independent data controller.
- →Intellectual property exposure through prompt retention creates trade secret misappropriation risk. Proprietary prompt engineering, domain-specific implementations, and workflow automation logic become OpenAI training data. Quantify exposure: Provide complete documentation of prompt retention policies, model training data inclusion criteria, and contractual mechanisms protecting customer intellectual property from competitive harvesting.
- →Model training data contribution subsidizes competitor AI capabilities. Your API usage improves OpenAI models available to all customers including direct market rivals. Demand transparency: What percentage of model improvement derives from customer API data vs. other sources, and what mechanisms prevent our proprietary implementations from benefiting competitors through shared model access?
- →If OpenAI refuses to implement zero-retention API processing with absolute prohibition on model training data inclusion, demand immediate migration to privacy-preserving alternatives. The intellectual property exposure and competitive intelligence leakage through model training contribution exceeds any AI infrastructure convenience, particularly as self-hosted and privacy-first alternatives mature.
Runtime Detections
BLACKOUT observed this vendor's JavaScript executing in a live browser and classified each hostile behavior using our BTI-C (Behavioral Threat Intelligence, Capability) taxonomy. These are not theoretical risks: each code below was triggered by behavior we watched this vendor's code actually perform.
Evasion infrastructure, auditor bypass
Impact: Modifies API response quality and model behaviors based on usage pattern analysis, systematically degrading performance for high-value use cases to encourage enterprise upgrades
Keystroke/mouse tracking
Impact: Captures user interaction patterns with AI-powered features including prompt iteration styles, refinement behaviors, and workflow sequences to profile organizational AI sophistication
Full session replay
Impact: Records complete AI interaction sessions including multi-turn conversations, prompt engineering evolution, and use case development for model training and competitive intelligence
Identity stitching
Impact: Synchronizes API usage patterns across organizational implementations to build unified intelligence about enterprise AI strategy and deployment approaches
Ignoring CMP signals
Impact: Processes end-user interactions with AI features without direct user disclosure or consent, operating through backend API integrations invisible to data subjects
Device identification
Impact: Creates persistent organizational fingerprints based on API usage patterns, prompt styles, and implementation characteristics to enable competitive benchmarking
Long-lived identifiers
Impact: Maintains long-term retention of prompts, responses, and usage patterns despite customer data deletion requests, citing model training as legitimate business purpose
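The long-lived identifier detection above reduces to a simple check at the browser layer: flag cookies whose expiry is far beyond a single session. A minimal sketch, assuming a hypothetical cookie-jar structure rather than BLACKOUT's internal format:

```python
from datetime import datetime, timedelta, timezone

# A year-plus expiry is a common marker of a persistent identifier
LONG_LIVED_THRESHOLD = timedelta(days=365)

def long_lived_cookies(cookies, now=None):
    """Return names of cookies whose expiry exceeds the threshold."""
    now = now or datetime.now(timezone.utc)
    return [
        c["name"] for c in cookies
        if c.get("expires") and c["expires"] - now > LONG_LIVED_THRESHOLD
    ]

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
jar = [
    {"name": "session_id", "expires": None},                      # session-only
    {"name": "device_fp", "expires": now + timedelta(days=730)},  # two years out
    {"name": "csrf", "expires": now + timedelta(hours=1)},
]
print(long_lived_cookies(jar, now=now))  # → ['device_fp']
```

The same threshold logic applies to localStorage entries and other client-side identifiers, which have no expiry at all and so always count as long-lived.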
IOC Manifest
Indicators of compromise across 4 categories. Use for detection rules, CSP policies, or Pi-hole blocklists.
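The domain-category indicators translate directly into blocking formats. A sketch of both conversions, using placeholder domains rather than the manifest's actual entries:

```python
# Placeholder IOC domains; substitute the domains from the actual manifest
ioc_domains = ["api.openai.com", "cdn.openai.com"]

def to_pihole_blocklist(domains):
    """Pi-hole accepts hosts-file format: one null-routed entry per domain."""
    return "\n".join(f"0.0.0.0 {d}" for d in sorted(domains))

def to_csp_connect_src(allowed_sources, blocked_domains):
    """CSP is allow-list based: blocking means omitting the IOC domains
    from connect-src rather than naming them explicitly."""
    srcs = " ".join(s for s in allowed_sources if s not in blocked_domains)
    return f"Content-Security-Policy: connect-src 'self' {srcs}"

print(to_pihole_blocklist(ioc_domains))
```

Note the asymmetry: blocklist tools enumerate the IOCs directly, while CSP enforces the inverse, so deploying both covers DNS-level and browser-level egress.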
Ecosystem & Supply Chain
Evidence Artifacts
Artifacts collected during analysis, available with evidence-tier access.
Complete network capture with all requests and responses
196 detection signatures across scripts, domains, cookies, and network endpoints