What the FTC Investigation Into OpenAI Means for the Future of AI

While U.S. lawmakers play catch-up on AI regulation, the federal agency isn’t waiting around.

Written by Ellen Glover
Published on Aug. 22, 2023
Image: The U.S. Capitol building against a background of cascading binary code. (Shutterstock / Built In)

The United States Federal Trade Commission is in the midst of an expansive investigation into OpenAI, exploring whether its wildly successful ChatGPT bot has harmed consumers through its collection of data and its creation of false information. The probe, initially reported by the Washington Post, is the first major U.S. investigation into OpenAI.

OpenAI has become the face of the artificial intelligence industry, as its ChatGPT product is the fastest-growing consumer app of all time. This prompted tech giants to roll out their own competing chatbots in a veritable arms race, ushering in a worldwide generative AI boom. And while the company is certainly no stranger to legal controversy, this FTC investigation marks its greatest regulatory hurdle to date.

Many are left wondering what this could mean for AI regulation going forward, particularly in the United States, where such regulation remains very much in its “early days.”

Up to this point, Congress has shied away from creating new laws to shape industry use of AI, and any proposed legislation to regulate AI in the private sector has struggled to gain traction. As a result, there are no federal laws on the books that specifically limit the use of artificial intelligence or regulate the risks associated with the technology.

“The FTC’s investigation is almost like a counterbalance to the more slow action of Congress and U.S. lawmakers,” Patrick K. Lin, a technology law researcher and author of Machine See, Machine Do, told Built In. “While I see the FTC and Congress as very separate actors in all of this, I think the FTC investigation is a really welcome development.”

More on Tech and the Law: Data Privacy Laws Every Company Should Know

With Congress Slow to Act, Federal Agencies Show Urgency

The U.S. has been lax on AI regulation compared to other governments. The E.U. is expected to pass new AI legislation by the end of the year, and several countries within the bloc have taken steps to limit, or even bar, citizens’ use of U.S. companies’ AI products under the bloc’s existing privacy laws. China, meanwhile, has already imposed several new laws addressing certain aspects of the development, deployment and use of AI systems.

The most sweeping AI guidance to come out of the U.S. federal government to date is the AI Bill of Rights, a list of suggestions for AI companies that the Biden-Harris administration published in October 2022. Several months later, the CEOs of several leading AI companies (including OpenAI) made a “voluntary commitment” to reduce the harms of AI. But neither of those moves is enforceable by law.

So far, federal agencies have been picking up a lot of the slack in the United States, relying on existing laws to implement guidelines and initiatives for the use of AI in industries under their jurisdiction. For example, the U.S. Copyright Office has stated that it will not grant full copyright protection to texts, images and videos created by AI, since they were not created by a human. And the U.S. Department of Justice’s Civil Rights Division has said that existing civil rights laws apply to biased AI.

“We’re seeing more and more indications of federal agencies taking their role in AI enforcement very seriously,” Ravit Dotan, an AI ethics advisor, researcher and speaker, told Built In. “They do not need to wait for federal regulations specifically about AI, and they’re showing us that they’re not actually waiting.”


FTC to ‘Make an Example’ Out of OpenAI

Indeed, the FTC, which is the agency responsible for protecting consumers from deceptive or unfair business practices, is one of the U.S. federal government’s most zealous proponents of AI regulation today. It has published dozens of reports outlining its concerns with artificial intelligence, and issued several warnings that consumer protection laws apply to the AI industry, even as Congress and the Biden administration work to nail down new legislation. 

This investigation into OpenAI appears to be the first indication of just how the FTC plans to enforce those warnings, Lin said. “I think the plan here is to make an example of them.”

In general, FTC investigations take a year or two to complete, Jessica Rich, a former director of the FTC’s Bureau of Consumer Protection, told Built In. And any information obtained over the course of an investigation is not made public, so it will likely be hard to monitor how this OpenAI probe progresses. But if the FTC finds evidence of behavior that rises to the level of unfair or deceptive conduct, it can levy fines or put the company under some sort of consent decree to dictate how it handles its data in the future. The agency also has a history of making companies delete certain data.

“The FTC has a lot of influence and power here,” Rich said, adding that it could also come up with some new form of relief or audit that could be influential in how both the agency and U.S. legislators “control the problems with AI.”

Related Reading: AI-Generated Content and Copyright Law: What We Know

Is AI Legislation on the Horizon?

While the United States has lagged behind other governments in drafting AI regulations to date, a flurry of activity in Washington indicates the federal government is serious about catching up. But Dotan, who is working on a paper about both current and proposed AI regulation in the U.S., has not seen any indication that a sweeping, robust piece of AI legislation akin to the E.U.’s AI Act is anywhere on the horizon — even in light of this landmark OpenAI probe.

“This FTC investigation isn’t the first occurrence where we’ve seen agencies enforce the law on AI companies, but it’s certainly made a lot of noise,” Dotan said. “It’s a great illustration of how AI companies fall under the purview of existing laws. [They] don’t need to wait for special AI regulation.”

Still, more AI regulation appears to be on the way. Senate Majority Leader Chuck Schumer has vowed to make addressing AI a priority in Congress, announcing a monthslong series of “AI Insight Forums” with industry experts to get legislators up to speed on how the technology works and what its most pressing risks are. And Dotan said she’s noticed a “major uptick” in regulation efforts in 2023. In fact, she anticipates that Congress could have as many as seven federal AI bills drafted by the end of 2024, resulting in a kind of patchwork of proposed legislation.

Lawmakers have introduced bills that pertain to everything from monitoring public health security risks of AI, including its use in the making of bioweapons, to banning the use of AI to make nuclear launch decisions. Senators have also introduced legislation that would create a new commission charged with regulating AI and social media, following OpenAI CEO Sam Altman’s testimony at a May 2023 congressional hearing (one of many AI-related hearings in recent months).

For the most part, these bills are in their earliest stages, and do not have the support needed to advance yet — much less affect the AI industry. 

This FTC investigation into OpenAI, on the other hand, could have a much more immediate impact on how other AI companies operate going forward. In its civil investigative demand (CID), the agency asked dozens of questions about how the company trains its models and handles its data. It also requested various documents related to the development and training of its large language models and its data governance procedures, as well as organizational charts showing how certain responsibilities are allocated at the company.

“It essentially provides this checklist of AI governance practices,” Lin said. “It would be really wise for every other AI company, whether it’s competing with OpenAI or is a smaller startup, to take a read through this checklist and say, ‘Are we doing these things? Are we documenting these things? If the FTC comes knocking on our door next, is this stuff that we can actually provide?’”


Frequently Asked Questions

Why is the FTC investigating OpenAI?

The Federal Trade Commission is investigating OpenAI over possible violations of consumer protection law. The agency is looking at how the company handles data and whether its ChatGPT bot has harmed people by generating false information about them.


When contacted for this story, an OpenAI spokesperson declined to comment but referred Built In to a post Sam Altman made on the social media platform X. The Federal Trade Commission also declined to comment.
