FDA just cited a manufacturer for over-relying on AI.
The lesson is older than AI.
On April 2, 2026, the FDA issued what appears to be its first warning letter citing a drug manufacturer for inappropriate use of artificial intelligence. The recipient, Purolea Cosmetics Lab in Livonia, Michigan, had used AI agents to generate product specifications, procedures, and master production records. They did not review the output. They did not validate the claims. And when investigators flagged that process validation had never been performed, the firm replied that the AI never told them it was required.
Strip away the AI framing and what’s left is a familiar story: a quality unit that stopped reviewing its own output.
What the warning letter actually says
The warning letter to Purolea covers a long list of CGMP violations, including insanitary manufacturing conditions, untested components, no microbiological release testing, and unapproved new drug claims for products labeled as homeopathic shingles and herpes treatments. The AI section sits inside that broader picture.
Two passages are worth reading carefully.
First, on document creation: if a firm uses AI as an aid in document creation, it must review the AI-generated documents to ensure they are accurate and compliant with CGMP. Failure to do so is a violation of 21 CFR 211.22(c).
Second, on overreliance: investigators found the firm had not conducted process validation prior to distribution, as required under 21 CFR 211.100. The firm replied that they were not aware of the legal requirement because the AI agent never told them it was required.
FDA closed the section by stating that any output or recommendations from an AI agent must be reviewed and cleared by an authorized human representative of the firm's quality unit, in accordance with section 501(a)(2)(B) of the FD&C Act and 21 CFR 211.22 and 211.100.
Why this matters for medical device manufacturers
This was a CDER drug warning letter, not a device letter. But the underlying logic translates directly. Whether the regulation is 21 CFR Part 211 or 21 CFR Part 820, whether the standard is CGMP or ISO 13485, the principle is the same: the quality unit owns the output.
A few specific places in a device QMS where manufacturers might use AI:
- SOPs and work instructions
- Design control documentation, including design inputs, design outputs, and verification protocols
- Risk management files and ISO 14971 hazard analyses
- Supplier quality agreements and incoming inspection criteria
- CAPA investigations, root cause narratives, and effectiveness checks
- Management review materials and trending summaries
In each of these, AI can shorten the time to a workable draft. None of them are exempt from human review by qualified personnel. ISO 13485 clause 5 on management responsibility still applies. The signature on the document is still a person's signature, and it still means what it has always meant.
The failure mode to watch for
The most quotable line in the letter is the one about process validation. The firm did not perform a legally required activity because the AI did not surface the requirement.
That is not an AI problem. That is a competency problem with an AI-shaped delivery mechanism. A quality unit that does not know process validation is required for finished pharmaceuticals would have made the same error reading a textbook, attending a webinar, or copying a procedure from a sister facility. The AI did not invent the gap. It exposed one that was already there.
The lesson is not that AI is dangerous. The lesson is that AI raises the floor on output volume without raising the floor on competency. A small team can now produce in a week what used to take a month. If the people approving that output cannot evaluate it against the regulation, the speed is the risk.
What to do now
If your firm is using AI anywhere in QMS document creation, or considering it, a few practical steps:
- Document the human-in-the-loop. Your procedures should specify who reviews AI-assisted output, what they review against, and what evidence of review is captured. A reviewer initial on a printed draft is fine. A blank approval workflow is not.
- Treat AI output as a draft, not a deliverable. The same way a junior engineer's first pass goes through a senior reviewer, AI output should enter the same review path. Skipping the path because the prose looks polished is the trap.
- Verify regulatory claims independently. If an AI summarizes a clause, a guidance document, or a standard, the reviewer needs to confirm the citation against the source. Hallucinated requirements and missing requirements are both possible.
- Train the reviewers, not just the users. The risk concentrates at the approval step. The people signing off on AI-assisted SOPs, design documents, and CAPAs need current knowledge of the underlying regulations, not just familiarity with the tool.
- Capture the prompt context where it matters. For higher-risk documents, knowing what the AI was asked, and what source material it was given, helps the reviewer assess whether the output is fit for purpose.
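The review-evidence and prompt-context points above can be sketched as a simple record structure. This is a hypothetical illustration of what a firm might capture per AI-assisted document, not a prescribed or standard format; every field name here is an assumption:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical review record for an AI-assisted QMS document.
# Field names are illustrative, not drawn from any regulation or standard.
@dataclass
class AIAssistedDocReview:
    document_id: str             # QMS document number, e.g. an SOP identifier
    ai_tool: str                 # which tool produced the draft
    prompt_summary: str          # what the AI was asked (for higher-risk docs)
    source_material: list[str]   # inputs the AI was given
    reviewed_against: list[str]  # regulations/standards the reviewer checked
    reviewer: str                # qualified quality-unit member
    review_date: date
    review_evidence: str         # e.g. "initialed printed draft, QA-0042"
    approved: bool = False

    def is_complete(self) -> bool:
        # A record with no named reviewer, no review criteria, or no
        # evidence is a gap, not proof that review occurred.
        return bool(self.reviewer and self.reviewed_against and self.review_evidence)
```

The point of the sketch is the `is_complete` check: a blank approval workflow fails it, while a record naming the reviewer, the criteria, and the evidence passes, which mirrors what an investigator would ask for.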
The bigger picture
FDA did not say firms cannot use AI. The agency was explicit that AI can serve as an aid in document creation, provided a qualified human reviews the output. That position is consistent with how FDA has historically treated every other form of automation, from electronic batch records to statistical software. The tool is permitted. The accountability is not transferable.
For device manufacturers, this letter is an early signal of how investigators are likely to probe AI use in future inspections. Expect questions about who reviewed the document, what they reviewed it against, and what training they have to make that review meaningful. Have answers ready.
If you are building AI into your QMS workflows and want a second set of eyes on the review controls, Rook Quality Systems can help you scope the gaps and put a defensible process in place. Reach out to start a conversation.