Your Trade Secrets Are Already at Risk: AI Scraping, Prompt Injection, and Federal Criminal Exposure
A company builds a proprietary pricing algorithm over three years. Millions in R&D. Restricted access. NDAs everywhere. Then a competitor points an AI scraping bot at their platform, pulls the outputs, reverse engineers the logic, and launches a competing product in six weeks.
Was that a crime? Under the Defend Trade Secrets Act, possibly. Under the Computer Fraud and Abuse Act, maybe. Under wire fraud statutes, it depends on how the data moved. The legal lines here are not settled. They are moving fast, and the government is watching.

AI-powered scraping tools can extract proprietary data at a scale and speed that traditional trade secret law never anticipated.
The Defend Trade Secrets Act in an AI World
The Defend Trade Secrets Act (18 U.S.C. § 1836) gives trade secret owners a federal civil cause of action when their secrets are misappropriated. It also carries criminal penalties under the Economic Espionage Act (18 U.S.C. §§ 1831-1839) for theft of trade secrets. The statute defines "improper means" broadly: theft, bribery, misrepresentation, breach of a duty to maintain secrecy, or espionage through electronic or other means.
That last phrase is where AI scraping lands. When a bot systematically extracts data that a company has taken reasonable measures to protect, the question becomes whether the extraction qualifies as "improper means" under the DTSA. The answer is not automatic. The government or a civil plaintiff has to show that the information qualifies as a trade secret, that the owner took reasonable steps to protect it, and that the method of acquisition was improper.
But here is what matters for anyone facing these allegations or bringing them: the statute was written to be technology-neutral. "Electronic means" covers a lot of ground. AI scraping fits within the text. The question is always going to be about the specific facts.
The DTSA defines "improper means" to include acquisition by "espionage through electronic or other means." AI-driven scraping tools that bypass access controls or extract protected data can fall squarely within this definition.
Prompt Injection: The New Attack Vector
Prompt injection is different from traditional scraping. Instead of pulling data from a website, the attacker feeds adversarial inputs to an AI system to force it to reveal information it was designed to protect. This could mean extracting training data, bypassing safety filters, or getting a proprietary model to output its own system instructions.
From a trade secret perspective, this matters when the AI system contains or was trained on confidential information. If a model was built using proprietary datasets, customer lists, pricing structures, or internal processes, and a prompt injection attack causes that model to disclose those inputs, the attacker may have acquired trade secrets through improper means.
The legal theory is straightforward: the model was a container for protected information, the attacker used deception to extract it, and the extraction was unauthorized. Courts have not yet built a deep body of case law on this specific scenario. But the statutory language of the DTSA and the Uniform Trade Secrets Act (UTSA) does not require the "container" to be a filing cabinet or a database. If the information meets the definition of a trade secret, the method of extraction is what determines liability.

Model Extraction and Reverse Engineering
Model extraction is the process of querying a proprietary AI model repeatedly, collecting its outputs, and using those outputs to build a copy or near-copy of the model. The attacker does not need access to the source code or training data. They just need enough queries and the right analytical approach.
This is where the DTSA and CFAA overlap. If the model itself constitutes a trade secret, and the querying pattern constitutes "improper means," the model extraction can support both a civil claim and a criminal prosecution. The Economic Espionage Act makes it a federal crime to steal trade secrets knowingly and with intent to benefit someone other than the owner.
Reverse engineering has traditionally been a defense to trade secret claims. If you buy a product and take it apart, that is generally permitted. But model extraction through an API is not the same as buying a product off the shelf. The terms of service for most AI platforms explicitly prohibit this kind of systematic querying. And when the extraction bypasses rate limits, uses fake accounts, or employs automated tools to evade detection, the "legitimate reverse engineering" defense gets much harder to maintain.
The CFAA Angle: Unauthorized Access in the AI Context
The Computer Fraud and Abuse Act (18 U.S.C. § 1030) makes it a federal crime to access a computer "without authorization" or to "exceed authorized access." For years, courts have debated what "without authorization" means in the context of web scraping. The Supreme Court's 2021 decision in Van Buren v. United States narrowed the scope of "exceeds authorized access" to situations where someone accesses information they are not entitled to access at all, rather than using information they can access for an improper purpose.
But Van Buren did not resolve every question. When an AI scraping bot bypasses technical access controls, ignores robots.txt directives, circumvents CAPTCHAs, or uses stolen credentials, the "without authorization" element becomes much stronger. The government can argue that the technical controls defined the boundaries of authorization, and the bot crossed those boundaries.
For defendants facing CFAA charges in AI scraping cases, the critical question is whether the access was truly unauthorized or whether the defendant accessed publicly available information through automated means. That distinction can be the difference between a federal felony and lawful competitive intelligence.

What "Reasonable Measures" Means in 2026
Every trade secret claim, civil or criminal, requires the owner to show they took "reasonable measures" to protect the information. In 2026, that standard has evolved. Courts are looking at whether companies implemented technical controls like authentication, encryption, rate limiting, and API access restrictions. They are looking at whether employees and contractors signed NDAs. They are looking at whether the company monitored for unauthorized access.
For prosecutors, this element cuts both ways. If the alleged victim had no real protections in place, no authentication, no rate limits, no monitoring, the government's trade secret case weakens. The information may not qualify as a trade secret at all if it was effectively left in the open.
For defendants, this is often where the strongest arguments live. If the data was accessible without bypassing any technical control, if the API was open, if the terms of service were ambiguous, the defense can argue that the "trade secret" was not treated as one by its own owner. That argument does not guarantee acquittal. But it directly attacks a required element of the government's case.
"Reasonable measures" in 2026 means more than NDAs. Courts expect technical controls: authentication, encryption, rate limiting, access logging, and API restrictions. If a company left its proprietary data accessible through an open API with no protections, its trade secret claim is significantly weakened.
Wire Fraud and the AI Misappropriation Overlay
When AI-assisted trade secret theft involves deception transmitted over the internet, wire fraud (18 U.S.C. § 1343) enters the picture. If an attacker creates fake accounts to bypass access controls, misrepresents their identity to gain API access, or uses social engineering to obtain credentials, each of those communications can support a separate wire fraud count.
Federal prosecutors like wire fraud charges because each act of wire communication is a separate count, each carrying up to 20 years. In AI scraping cases, this can mean hundreds of counts if the attacker made hundreds of deceptive API calls or account registrations. The sentence exposure escalates rapidly.
For defendants, wire fraud charges in AI cases often hinge on whether there was an actual misrepresentation. Using an automated tool to make legitimate API requests is not inherently fraudulent, even if it violates terms of service. The government has to prove a scheme to defraud and a material misrepresentation. Terms of service violations alone may not get there.

Wire fraud charges in AI trade secret cases can multiply fast, with each deceptive API call or fake account registration potentially constituting a separate federal count.
What Defendants Need to Know
If you are facing federal charges related to AI scraping, prompt injection, or model extraction, here is what matters.
First, the government has to prove every element. For trade secret charges, they need to show the information was actually a trade secret, that you knew it, and that you acquired it through improper means. For CFAA charges, they need to show the access was unauthorized. For wire fraud, they need a scheme to defraud with material misrepresentation. Each element is a potential defense.
Second, intent matters. The Economic Espionage Act requires knowing conduct. If you believed you were accessing publicly available information, if you relied on the absence of technical controls, if you had no knowledge that the data was protected, those facts are relevant to your defense.
Third, these cases move fast. Federal investigators in AI cases often obtain search warrants for devices, cloud accounts, and communications early in the investigation. They may seize equipment before you even know you are a target. The window to preserve your rights and shape the narrative is narrow.
What Victims Need to Establish
If your company's trade secrets were stolen through AI scraping or prompt injection, you need to build your case carefully.
Document that the information qualifies as a trade secret. Show the economic value. Show the measures you took to protect it. The more robust your technical controls and contractual protections, the stronger your position.
Preserve all evidence of the unauthorized access. Server logs, API call records, anomalous query patterns, and IP addresses are all critical. Work with forensic experts early. Digital evidence degrades or gets overwritten.
Consider both civil and criminal avenues. A DTSA civil action gives you injunctive relief and damages. A criminal referral to the FBI or DOJ can result in prosecution under the Economic Espionage Act or the CFAA. These paths are not mutually exclusive, and in many cases the criminal investigation strengthens the civil claim.

The Stakes Are Real and the Law Is Catching Up
AI did not create trade secret law. But it created new ways to violate it. Scraping bots, prompt injection attacks, and model extraction techniques are not theoretical. They are happening now, across industries, and federal prosecutors are paying attention.
The companies building these tools and the individuals using them need to understand that the existing federal criminal statutes apply. The DTSA, the CFAA, and wire fraud laws were written broadly enough to cover AI-era conduct. The penalties are severe: up to 10 years for trade secret theft, up to 10 years for CFAA violations, up to 20 years per count for wire fraud.
If you are facing federal charges in an AI trade secret case, or if you are a victim trying to pursue one, the legal strategy needs to start now. These cases involve complex technical evidence, evolving case law, and prosecutors who are building expertise in this area fast.
Federal charges in AI trade secret cases move fast. If you need a federal criminal defense attorney who understands both the technology and the law, call Aaron M. Cohen, 24 hours a day, at 561.542.5494. The consultation is free. The time to act is now.

Aaron M. Cohen
Principal Attorney
Aaron M. Cohen is a nationally recognized criminal defense attorney with over 30 years of experience representing individuals and entities in complex criminal investigations and prosecutions across the United States.
View Attorney Profile