A String of Lawsuits Takes Aim at Regulators

When Meta sued the Federal Trade Commission last week — the social networking giant’s latest effort to block new restrictions on its monetization of user data — it used an increasingly common argument against government regulators: The complaint alleged that the structure of the F.T.C. was unconstitutional and that its in-house trials were invalid.

The lawsuit is the latest in a growing campaign to weaken regulators that could upend enforcement at a suite of agencies — including the F.T.C., the Securities and Exchange Commission and the Internal Revenue Service.

Such arguments would have been unthinkable not long ago. As Justice Elena Kagan put it while hearing a case making similar claims, “Nobody has had the, you know, chutzpah.”

Companies are testing new dynamics and limits. “Today this is a very serious complaint about issues the Supreme Court is wrestling with, but 10 years ago it would have been seen as gobbledygook jurisprudence,” Jon Leibowitz, a former F.T.C. chair, said of the Meta filing. Since 2020, the Supreme Court’s conservative majority has restricted administrative power and agreed to hear challenges to agency proceedings that had long been taken for granted as valid. The justices have also made it easier to mount challenges to the agencies’ structure and authority, and Meta relied on those changes to bring its case against the F.T.C.

In a letter to Meta on Friday, nine House Democrats called the case “frivolous” and said the company wanted to “destroy America’s bedrock consumer protection agency.”

Meta is one of several businesses making challenges. On the same day that Meta filed its suit, the Supreme Court heard arguments in a case that asks whether in-house trials at the S.E.C. are legal. Industry groups like the U.S. Chamber of Commerce and executives like Elon Musk and Mark Cuban weighed in, filing amicus briefs urging the court to find against the S.E.C. The biotech company Illumina, which is tussling with the F.T.C. over its merger with the multi-cancer test maker Grail, has challenged the agency’s constitutionality in a federal appeals court.

The cases raise various complaints about how agencies are set up and operate. Challengers argue, among other things, that there are no consistent criteria for deciding which cases agencies try in house rather than in federal court, that the in-house tribunals violate a defendant’s right to a jury trial and that agencies improperly act as both prosecutor and judge. “There is a constitutional limit to what Congress can ‘administrize,’” Jay Clayton, the S.E.C. chair during the Trump administration, told DealBook. He believes administrative courts are not always an appropriate venue. “For me, trying insider-trading cases — the same or very close to classic wire fraud — in S.E.C. courts with S.E.C.-appointed judges and no right to a jury is a step too far.” (The S.E.C. declined to comment.)

Where the justices draw the line will become apparent by the term’s end in June, the deadline for deciding the S.E.C. case. But even if they find for the S.E.C., companies like Meta are lining up more cases aimed at undermining agencies. If the companies convince courts that in-house tribunals are invalid, enforcers across the government will have far less power and control over proceedings and will be forced to prosecute many more matters in federal courts, placing a significant burden on the justice system. Such a ruling could also lead to changes in how agencies are set up, perhaps eliminating the need for a slate of bipartisan commissioners — a potential outcome that prompted at least one former enforcer to predict that companies may yet regret their campaign to dismantle agencies. — Ephrat Livni

IN CASE YOU MISSED IT

Corporate donors give university leaders a failing grade. The heads of Harvard, the Massachusetts Institute of Technology and the University of Pennsylvania were roundly criticized after testifying before Congress about antisemitism on campus. Big donors, politicians and commentators slammed the legalistic answers, with some calling for Penn to fire its president, Elizabeth Magill, after she dodged a question about whether she would discipline students for calling for the genocide of Jews. She apologized a day later.

Britain’s competition regulator will examine Microsoft’s ties to OpenAI. The Competition and Markets Authority said it had started an “information gathering process,” making it the first watchdog to investigate the relationship after the Windows maker took a nonvoting seat on OpenAI’s board. OpenAI, the start-up behind ChatGPT, was thrown into turmoil after the board fired Sam Altman, the company’s C.E.O., before reinstating him in response to staff and investor pressure.

Nikki Haley’s star is rising. Reid Hoffman, the tech entrepreneur and big Democratic donor, gave $250,000 to a super PAC supporting the former governor of South Carolina. Haley is emerging as the leading Republican to take on the front-runner, Donald J. Trump, for the presidential nomination. More corporate donors are holding fund-raising events for her as her rivals, including Gov. Ron DeSantis of Florida, struggle to maintain support.

Google unveils its A.I. update, but some see a glitch. The search giant was forced to play catch-up after OpenAI released ChatGPT last year, but had high hopes that Gemini, its updated chatbot, would help. Google released Gemini with a slick video to show off its talents, but commentators pointed out that the video had been edited to look better than reality.

The race to regulate A.I.

On Friday, European Union lawmakers agreed on sweeping legislation to regulate artificial intelligence. The A.I. Act is an attempt to address the risks that the technology poses to jobs, misinformation, bias and national security.

Adam Satariano, The Times’s European tech correspondent, has been reporting on efforts by regulators to set guardrails around A.I. He talked with DealBook about the challenges of regulating a quickly developing technology, how different countries have approached the challenge and whether it’s even possible to create effective safeguards for a borderless technology with vast applications.

What are the different schools of thought when it comes to regulating A.I., and what are the merits of each approach?

How much time do we have? The E.U. has taken what it calls a “risk-based” approach, defining the uses of A.I. that pose the greatest potential harm to individuals and society — think of A.I. used to make hiring decisions or to operate critical infrastructure like power and water. Those kinds of tools face more oversight and scrutiny. Some critics say the policy falls short because it is overly prescriptive: if a use is not listed as “high risk,” it isn’t covered.

The E.U. approach leaves a lot of potential gaps that policymakers have been trying to fill. For instance, the most powerful A.I. systems made by OpenAI, Google and others will be able to do lots of different things way beyond just powering a chatbot. There’s been a very hard-fought debate over how to regulate that underlying technology.

How would you describe the meaningful differences in the way the U.S., the E.U., Britain and China are approaching regulation? And what are the prospects for collaboration, given events like Britain’s recent A.I. safety summit but also each country’s apparent fears about what the others are doing?

A.I. highlights the broader differences among the U.S., the E.U. and China on digital policy. The U.S. is much more market-driven and hands-off. America dominates the digital economy, and policymakers are reluctant to create rules that would threaten that leadership, especially for a technology as potentially consequential as A.I. President Biden signed an executive order putting some limits on A.I. use, particularly as it applies to national security and deepfakes.

The E.U., a more regulated economy, is being much more prescriptive about rules toward A.I., while China, with its state-run economy, is imposing its own set of controls with things like algorithm registries and censorship of chatbots.

Britain, Japan and many other countries are taking a more hands-off, wait-and-see approach. Countries like Saudi Arabia and the United Arab Emirates are pouring money into A.I. development.

What are their big concerns?

The future benefits and risks of A.I. are not fully known, even to the people creating the technology or to policymakers. That makes it hard to legislate. So a lot of work is going into anticipating the direction of travel for the technology and putting safeguards in place, whether to protect critical infrastructure, prevent discrimination and bias or stop the development of killer robots.

How effectively can A.I. be regulated? The technology seems to be advancing far more quickly than regulators can devise and pass rules to check it.

This is probably the fastest I have seen policymakers around the world respond to a new technology. But it hasn’t resulted in much concrete policy yet. The technology is advancing so quickly that it is outpacing the ability of policymakers to come up with rules. Geopolitical disputes and economic competition also increase the difficulty of international cooperation, which most believe will be essential for any rules to be effective.


Quote of the day

“Don’t be coy when it comes to disclosing these matters.”

— Advice from Securities Times, a state-owned newspaper in China, to board directors on how to communicate the disappearance of a company chairman or chief executive. Such announcements have become increasingly frequent as Beijing has sought to assert greater control over the economy and the private sector.

Michael J. de la Merced contributed reporting.

Thanks for reading! We’ll see you Monday.

We’d like your feedback. Please email thoughts and suggestions to [email protected].
