When Meta sued the Federal Trade Commission last week (the social media giant’s latest effort to block new restrictions on its monetization of user data), it used an increasingly common argument against government regulators: The complaint alleged that the FTC’s structure was unconstitutional and that its internal judgments were invalid.
The lawsuit is the latest in a growing campaign to weaken regulators that could disrupt enforcement across an array of agencies, including the FTC, the Securities and Exchange Commission and the Internal Revenue Service.
Such arguments would have been unthinkable not long ago. As Justice Elena Kagan said during arguments in a case raising similar claims: “No one has had, you know, the nerve.”
Companies are testing new dynamics and limits. “Today, this is a very serious complaint about issues the Supreme Court is wrestling with, but 10 years ago it would have been seen as nonsense jurisprudence,” Jon Leibowitz, a former FTC chairman, said of the filing. Since 2020, the conservative majority on the Supreme Court has restricted administrative power and upheld challenges to agency procedures that were long taken for granted. Judges have also made it easier to challenge the structure and authority of agencies. Meta relied on those changes in building its case against the FTC.
In a letter to Meta on Friday, nine House Democrats called the case “frivolous” and said the company wanted to “destroy America’s critical consumer protection agency.”
Meta is one of several companies mounting such challenges. The same day Meta filed its lawsuit, the Supreme Court heard arguments in a case asking whether the SEC’s in-house proceedings are constitutional. Industry groups like the U.S. Chamber of Commerce and executives like Elon Musk and Mark Cuban weighed in, filing amicus briefs urging the court to rule against the SEC. And the biotech company Illumina, which is fighting the FTC over its merger with Grail, the maker of a multi-cancer test, has challenged the agency’s constitutionality in a federal appeals court.
The cases raise various complaints about how agencies are created and how they operate. Opponents argue, among other things, that there are no consistent criteria for deciding which cases agencies try in-house rather than in federal court, that in-house courts violate a defendant’s right to a jury trial, and that agencies act as both prosecutor and judge. “There is a constitutional limit to what Congress can ‘administer,’” Jay Clayton, the SEC chairman during the Trump administration, told DealBook. He believes administrative courts are not always the right forum: “To me, trying insider trading cases (the same or very similar to classic wire fraud) in SEC courts with judges appointed by the SEC and without the right to a jury is a step too far.” (The SEC declined to comment.)
Where the justices draw the line will become clear by the end of the term in June, the deadline for deciding the SEC case. But even if they rule in favor of the SEC, companies like Meta are preparing more cases aimed at undermining the agencies. If companies convince the courts that in-house tribunals are invalid, authorities across the government would have far less power and control over proceedings and would be forced to try many more matters in federal court, adding a significant burden to the justice system. Such a ruling could also lead to changes in how agencies are structured, perhaps eliminating the requirement for a bipartisan slate of commissioners, a potential outcome that has led at least one former regulator to suggest that companies may yet regret their campaign to dismantle the agencies. – Efrat Livni
IN CASE YOU HAVE MISSED IT
Corporate donors give university leaders a failing grade. The heads of Harvard, the Massachusetts Institute of Technology and the University of Pennsylvania were heavily criticized after testifying before Congress about anti-Semitism on campus. Major donors, politicians and commentators criticized the legalistic responses, with some calling for Penn to fire its president, Elizabeth Magill, after she dodged a question about whether she would discipline students for calling for the genocide of Jews. She apologized a day later.
Britain’s competition regulator will examine Microsoft’s links to OpenAI. The Competition and Markets Authority said it had started an “information gathering process,” making it the first watchdog to investigate the relationship after the Windows maker took a non-voting seat on OpenAI’s board of directors. OpenAI, the startup behind ChatGPT, was plunged into turmoil after the board fired Sam Altman, the company’s CEO, before reinstating him in response to pressure from staff and investors.
Nikki Haley’s star is on the rise. Reid Hoffman, the tech entrepreneur and major Democratic donor, gave $250,000 to a super PAC supporting the former South Carolina governor. Haley is emerging as the leading Republican challenger to the front-runner, Donald J. Trump, for the presidential nomination. More corporate donors are organizing fundraisers for her as her rivals, including Gov. Ron DeSantis of Florida, struggle to maintain support.
Google unveils its AI update, but some see a problem. The search giant was forced to play catch-up after OpenAI launched ChatGPT last year, but it had high hopes that Gemini, its updated chatbot, would help. Google launched Gemini with a clever video to show off its talents, but commentators noted that the video had been edited to look better than reality.
The race to regulate AI
On Friday, European Union lawmakers agreed to sweeping legislation to regulate artificial intelligence. The AI Act is an attempt to address the risks the technology poses to jobs, misinformation, bias, and national security.
Adam Satariano, The Times’ European technology correspondent, has been reporting on regulators’ efforts to set up guardrails around AI. He spoke with DealBook about the challenges of regulating a rapidly developing technology, how different countries have approached the task, and whether it is even possible to create effective safeguards for a borderless technology with vast applications.
What are the different schools of thought when it comes to regulating AI and what are the merits of each approach?
How much time do we have? The EU has adopted what it calls a “risk-based” approach, defining the uses of AI with the greatest potential to harm individuals and society: think of AI used to make hiring decisions or to operate critical infrastructure such as energy and water. Those types of tools face greater oversight and scrutiny. Some critics say the policy falls short because it is too prescriptive: if something is not listed as “high risk,” it is not covered.
The EU’s approach leaves many potential gaps that policymakers have been trying to fill. For example, more powerful AI systems created by OpenAI, Google, and others will be able to do many different things beyond simply powering a chatbot. There has been a hotly contested debate about how to regulate that underlying technology.
How would you describe the significant differences in how the US, the EU, Britain and China approach regulation? And what are the prospects for collaboration, given events like the recent British summit on AI safety, but also the apparent fears each country has about what the others are doing?
AI shows the widest differences between the US, the EU and China on digital policy. The United States is much more market-driven and hands-off. It dominates the digital economy, and policymakers are reluctant to create rules that threaten that leadership, especially for a technology as potentially consequential as AI. President Biden signed an executive order that places some limits on the use of AI, particularly as it relates to national security and deepfakes.
The EU, a more regulated economy, is being much more prescriptive about AI regulations, while China, with its state economy, is imposing its own set of controls with things like algorithm registrations and chatbot censorship.
Britain, Japan and many other countries are taking a more passive, wait-and-see approach. Countries like Saudi Arabia and the United Arab Emirates are pouring money into AI development.
What are your big concerns?
Neither the people creating the technology nor policymakers fully know AI’s future benefits and risks. That makes legislation difficult. So a lot of work is going into anticipating the direction the technology will take and putting safeguards in place, whether to protect critical infrastructure, prevent discrimination and bias, or stop the development of killer robots.
How effectively can AI be regulated? Technology appears to be advancing much faster than regulators can devise and pass rules to control it.
This is probably the quickest reaction I’ve seen from policymakers around the world to a new technology. But it has not yet resulted in many concrete policies. Technology is advancing so rapidly that it is outpacing policymakers’ ability to come up with rules. Geopolitical disputes and economic competition also increase the difficulty of international cooperation, which most believe will be essential for any regulation to be effective.
Quote of the day
“Don’t be shy when it comes to revealing these matters.”
— Advice from Securities Times, a state-run newspaper in China, to board directors on how to report the disappearance of a company’s president or CEO. Such announcements have become increasingly frequent as Beijing has sought to exert greater control over the economy and the private sector.
Michael J. de la Merced contributed reporting.
Thank you for reading! See you on Monday.
We’d like your feedback. Please email thoughts and suggestions to dealbook@nytimes.com.