The FTC is reportedly in at least the exploratory phase of investigating OpenAI over whether the company’s flagship ChatGPT conversational AI made “false, misleading, disparaging or harmful” statements about people. It seems unlikely this will lead to a sudden crackdown, but it shows that the FTC is doing more than warning the AI industry of potential violations.
The Washington Post first reported the news, citing access to a 20-page letter to OpenAI asking for information on complaints about disparagement. The FTC declined to comment, noting that its investigations are nonpublic.
In February, the regulator announced a new Office of Technology to take on tech sector “snake oil,” and shortly after that warned companies making claims around AI that they are subject to the same truth requirements as anyone else. “Keep your AI claims in check,” it wrote — or the FTC will do it for you.
Though the letter reported by the Post is hardly the first time the agency has taken on one of AI’s many forms, it does seem to announce that the field’s present undisputed leader, OpenAI, must be ready to justify itself.
This kind of investigation doesn’t just appear out of thin air — the FTC doesn’t look around and say “That looks suspicious.” Generally a lawsuit or formal complaint is brought to its attention, and the practices described in it suggest that regulations are being ignored. For example, a person may sue a supplement company because the pills made them sick, and the FTC will launch an investigation on the back of that because there’s evidence the company lied about the side effects.
In this case there’s a high probability that the trigger was a complaint like this one, in which an Australian mayor threatened legal action against OpenAI because ChatGPT said he had been accused of bribery and sentenced to prison, among other things. (That matter is ongoing and of course the jurisdiction is wrong, but there are almost certainly more like it.)
Publishing such things could amount to defamation or libel, or simply “reputational damage,” as the FTC’s current letter to OpenAI reportedly puts it. It is almost certainly ChatGPT at issue, because it is the only truly public product in OpenAI’s portfolio that could do such a thing — GPT-4 and the other APIs are locked down a bit too much (and are too recent) to be considered.
It’s hardly a slam dunk: the technical aspects alone call into question whether this counts as publishing or speech or even anything but a private communication — these would all have to be proven.
But it’s also not a wild thing to ask a company to explain. It’s one thing to make a mistake; it’s another to systematically and undetectably invent details about people, at huge scale, and say nothing about it. If Microsoft Word’s spell-checker occasionally added “convicted criminal” in front of people’s names, you’d better believe there would be an uproar.
Though the FTC has been handed a few high-profile defeats lately — its efforts to block mergers involving Meta and Microsoft were shot down in court — it has also nailed tech companies for privacy issues and even AI-adjacent violations.