Opinion: The risks of AI could be catastrophic. We should empower company workers to warn us | CNN

Editor’s Note: Lawrence Lessig is the Roy L. Furman Professor of Law and Leadership at Harvard Law School and the author of the book “They Don’t Represent Us: Reclaiming Our Democracy.” The views expressed in this commentary are his own. Read more opinion at CNN.

CNN

In April, Daniel Kokotajlo resigned from his position as a researcher at OpenAI, the company behind ChatGPT. He wrote in a statement that he disagreed with the way the company was handling issues related to security as it continues to develop the revolutionary but still not fully understood technology of artificial intelligence.


Lawrence Lessig

On his profile page on the online forum “LessWrong,” Kokotajlo — who had worked in policy and governance research at OpenAI — expanded on those thoughts, writing that he quit his job after “losing confidence that it would behave responsibly” in safeguarding against the potentially dire risks associated with AI.

And in a statement issued around the time of his resignation, he blamed the culture of the company for forging ahead without heeding warnings about the dangers it might be unleashing.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” Kokotajlo wrote.

OpenAI pressed him to sign an agreement promising not to disparage the company, telling him that if he refused, he would lose his vested equity in the company. The New York Times has reported that the equity was worth $1.7 million. Nevertheless, he declined, apparently choosing to reserve his right to publicly voice his concerns about AI.

When news broke about Kokotajlo’s departure from OpenAI and the alleged pressure from the company to get him to sign a non-disparagement agreement, the company’s CEO Sam Altman quickly apologized.

“This is on me,” Altman wrote on X (formerly known as Twitter), “and one of the few times I’ve been genuinely embarrassed running openai; I did not know this was happening and I should have.” What Altman didn’t reveal is how many other company employees and executives might have been forced to sign similar agreements in the past. In fact, according to former employees, the company has for many years threatened to cancel employees’ vested equity if they didn’t promise to play nice.


Altman’s apology was effective, however, in tamping down attention to OpenAI’s legal blunder of requiring these agreements. The company was eager to move on and most in the press were happy to oblige. Few news outlets reported the obvious legal truth that such agreements were plainly illegal under California law. Employees had for years thought themselves silenced by the promise they felt compelled to sign, but a self-effacing apology by a CEO was enough for the media, and the general public, to move along.

We should pause to consider just what it means when someone is willing to give up perhaps millions of dollars to preserve the freedom to speak. What, exactly, does he have to say? And not just Kokotajlo, but the many other OpenAI employees who have recently resigned, many now pointing to serious concerns about the dangers inherent in the company’s technology.

I knew Kokotajlo and reached out to him after he quit; I’m now representing him and 10 other current and former OpenAI employees on a pro bono basis. But the facts I relate here come only from public sources.

Many people refer to concerns about the technology as a question of “AI safety.” That’s a terrible term to describe the risks that many people in the field are deeply concerned about. Some of the leading AI researchers, including Turing Award winner Yoshua Bengio and Geoffrey Hinton, the computer scientist sometimes referred to as “the godfather of AI,” fear the possibility of runaway systems creating not just “safety risks,” but catastrophic harm.


And while the average person can’t imagine how anyone could lose control of a computer (“just unplug the damn thing!”), we should also recognize that we don’t actually understand the systems that these experts fear.

Companies operating in the field of AGI — artificial general intelligence, which broadly speaking refers to theoretical AI research attempting to create software with human-like intelligence, including the ability to perform tasks it was not trained or developed for — are among the least regulated, inherently dangerous companies in America today. There is no agency that has legal authority to monitor how the companies develop their technology or the precautions they are taking.

Instead, we rely upon the good judgment of these corporations to ensure that risks are adequately policed. Thus, as a handful of companies race to achieve AGI, the most important technology of the century, we are trusting them and their boards to keep the public’s interest first. What could possibly go wrong?


This oversight gap has now led a number of current and former employees at OpenAI to formally ask the companies to pledge to encourage an environment in which employees are free to criticize their safety precautions.

Their “Right to Warn” pledge asks companies:

First, to commit to revoking any “non-disparagement” agreement. (OpenAI has already promised to do as much; reports are that other companies may have similar language in their agreements that they’ve not yet acknowledged.)

Second, it asks companies to pledge to create an anonymous mechanism to give employees and former employees a way to raise safety concerns to the board, to regulators and to an independent AI safety organization.

Third, it asks companies to support a “culture of open criticism,” to encourage employees and former employees to speak about safety concerns so long as they protect the corporation’s intellectual property.

Finally — perhaps most interestingly — it asks companies to promise not to retaliate against employees who share confidential information when raising risk-related concerns, provided that employees first channel those concerns through a confidential and anonymous process — if and when the company creates one. This is designed to create an incentive for companies to build a mechanism that protects confidential information while enabling warnings.


Such a “Right to Warn” would be unique in the regulation of American corporations. It is justified by the absence of effective regulation, a condition that could well change if Congress got around to addressing the risks that so many have described. And it is necessary because ordinary whistleblower protections don’t cover conduct that is not itself regulated.

The law — especially California law — would give employees a wide berth to report illegal activities; but when little is regulated, little is illegal. Thus, so long as there is no effective regulation of these companies, it is only the employees who can identify the risks that the company is ignoring.

Even if the AI companies endorsed a “Right to Warn,” no one should imagine that it would be easy for any current or former employee to call out an AI company. Whistleblowers are not favorite co-workers, even if they are respected by some. And even with formal protections, the choice to speak out inevitably has consequences for their future employment opportunities — and friendships.

Obviously, it is not fair that we rely upon self-sacrifice to ensure that private corporations are not putting profit above catastrophic risks. This is the job of regulation. But if these former employees are willing to lose millions for the freedom to say what they know, maybe it is time that our representatives built the structures of oversight that would make such sacrifices unnecessary.


FAQs

What is the catastrophic risk of artificial intelligence?

Existential risks, or global catastrophic risks, are risks that could cause the collapse of human civilization. Prominent examples of human-driven global catastrophic risks include, but are not limited to, nuclear war, pandemics, bioterrorism, and other threats related to advances in biotechnology.

What are the risks of artificial intelligence?

Real-life AI risks

There are myriad risks associated with AI that we deal with in our lives today. Not every AI risk is as big and worrisome as killer robots or sentient AI. Some of the biggest risks today include consumer privacy, biased programming, danger to humans, and unclear legal regulation.

What are the risks of AI in employment?

One of the critical dangers of AI is the potential for bias and discrimination. AI systems are trained on vast amounts of data, which can inadvertently perpetuate existing biases present in that data. This bias can result in unfair treatment in areas such as hiring, loan approvals, or criminal justice.

How does AI affect humans negatively?

The increasing reliance on AI for tasks ranging from mundane chores to complex decision-making can lead to human laziness. As AI systems take over more responsibilities, individuals might become less inclined to develop their skills and knowledge, relying excessively on technology.

What is a catastrophic risk in AI?

AI risk theorists maintain that we have grounds to think AI systems might seek and acquire power in a way that leads to catastrophe, and grounds to think we might deploy such systems anyway. This is the problem of power-seeking.

What would an AI catastrophe look like?

As soon as they are strong enough to have a fairly large chance of success, AI systems might attempt to disempower humans — perhaps through cyberwarfare, autonomous weapons, or by hiring or coercing people — leading to an existential catastrophe.

How will AI affect workers?

Research shows that AI can help less experienced workers enhance their productivity more quickly. Younger workers may find it easier to exploit opportunities, while older workers could struggle to adapt. The effect on labor income will largely depend on the extent to which AI will complement high-income workers.

Is AI good or bad for the workplace?

Researchers warn that without robust new regulation, AI could make the world of work an oppressive and unhealthy place for many. But, as they argue, “things don’t have to be this way. If we put the proper guardrails in place, AI can be harnessed to genuinely enhance productivity and improve working lives.”

Is AI a threat to employment in the future?

As AI systems become increasingly sophisticated, there's a looming threat that they could render certain jobs obsolete, leading to widespread unemployment and economic upheaval.

Why is AI a threat?

If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm to humans. As of right now, though, it is unknown whether AI is capable of causing human extinction.

What did Elon Musk say about AI?

“Probably none of us will have a job,” Musk said about AI at a tech conference. Speaking remotely via webcam at VivaTech 2024 in Paris, Musk described a future where jobs would be “optional.”

How does AI take risks instead of humans?

An example of AI taking risks in place of humans would be robots being used in areas with high radiation. Humans can get seriously sick or die from radiation, but the robots would be unaffected. And if a fatal error were to occur, the robot could be built again.

What is the existential risk of AI?

One way an existential catastrophe could happen is if humans die out. But the other way it can happen is if we no longer engage in meaningful human activity, if we no longer have embodied experience, if we’re no longer connected to our fellow humans. That, I think, is the existential risk of AI.

What is high-risk AI?

Under EU legislation, an AI system is considered high-risk if it is used as a safety component of a product, or if it is itself a product covered by that legislation. These systems must undergo a third-party assessment before they can be sold or used.

How is AI harmful to the environment?

As datasets and models become more complex, the energy needed to train and run AI models becomes enormous. This increase in energy use directly affects greenhouse gas emissions, aggravating climate change.

