Lina Khan: We Must Regulate A.I. Here’s How.

It’s both exciting and unsettling to have a realistic conversation with a computer. Thanks to the rapid advance of generative artificial intelligence, many of us have now experienced this potentially revolutionary technology, which carries vast implications for how people live, work and communicate around the world. The full extent of generative A.I.’s potential is still up for debate, but there’s little doubt it will be highly disruptive.

The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s. New, innovative companies like Facebook and Google revolutionized communications and delivered popular services to a fast-growing user base.

Those innovative services, however, came at a steep cost. The services we initially conceived of as free were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.

These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law. Coupled with aggressive strategies to acquire or lock out companies that threatened their position, these tactics solidified the dominance of a handful of companies. What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.

The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.

As companies race to deploy and monetize A.I., the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices. As these technologies evolve, we are committed to doing our part to uphold America’s longstanding tradition of maintaining the open, fair and competitive markets that have underpinned both breakthrough innovations and our nation’s economic success — without tolerating business models or practices involving the mass exploitation of their users. Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market.

While the technology is moving swiftly, we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.

Enforcers and regulators must be vigilant. Dominant firms could use their control over these key inputs to exclude or discriminate against downstream rivals, picking winners and losers in ways that further entrench their dominance. Meanwhile, the A.I. tools that firms use to set prices for everything from laundry detergent to bowling lane reservations can facilitate collusive behavior that unfairly inflates prices — as well as forms of precisely targeted price discrimination. Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully. The F.T.C. is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.

And generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply. Chatbots are already being used to generate spear-phishing emails designed to scam people, fake websites and fake consumer reviews — bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.

When enforcing the law’s prohibition on deceptive practices, we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.

Lastly, these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination — unfairly locking out people from jobs, housing or key services. These tools can also be trained on private emails, chats and sensitive data, ultimately exposing personal details and violating user privacy. Existing laws prohibiting discrimination will apply, as will existing authorities proscribing exploitative collection or use of personal data.

The history of the growth of technology companies two decades ago serves as a cautionary tale for how we should think about the expansion of generative A.I. But history also has lessons for how to handle technological disruption for the benefit of all. Facing antitrust scrutiny in the late 1960s, the computing titan IBM unbundled software from its hardware systems, catalyzing the rise of the American software industry and creating trillions of dollars of growth. Government action that required AT&T to open up its patent vault similarly unleashed decades of innovation and spurred the expansion of countless young firms.

America’s longstanding national commitment to fostering fair and open competition has been an essential part of what has made this nation an economic powerhouse and a laboratory of innovation. We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.

Lina M. Khan is the chair of the Federal Trade Commission.
