U.N. experts want AI 'red lines.' Here's what they might be.

How do we know when AI has gone too far?
By Chris Taylor
Maria Angelita Ressa, Nobel Peace Prize winner 2021, tells the United Nations why AI "red lines" are needed. Credit: Timothy A. Clary / AFP

The AI Red Lines initiative launched at the United Nations General Assembly Tuesday — the perfect place for a very nonspecific declaration.

More than 200 Nobel laureates and other artificial intelligence experts (including OpenAI co-founder Wojciech Zaremba), along with 70 organizations that work on AI (including Google DeepMind and Anthropic), signed a letter calling for global "red lines to prevent unacceptable AI risks." However, it was marked as much by what it didn't say as by what it did.

"AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy," the letter said, laying out a deadline of 2026 for its recommendation to be implemented: "An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks."



Fair enough, but what red lines, exactly? The letter says only that these parameters "should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds."

The lack of specifics may be necessary to keep a very loose coalition of signatories together. They include AI alarmists like 77-year-old Geoffrey Hinton, the so-called "AI godfather" who has spent the last three years predicting various forms of doom from the impending arrival of AGI (artificial general intelligence); the list also includes AI skeptics like cognitive scientist Gary Marcus, who has spent the last three years telling us that AGI isn't coming any time soon.

What could they all agree on? For that matter, what could governments already at loggerheads over AI, mainly the U.S. and China, agree on, and trust each other to implement? Good question.


Probably the most concrete answer by a signatory came from Stuart Russell, veteran computer science professor at UC Berkeley, in the wake of a previous attempt to talk red lines at the 2023 Global AI Safety Summit. In a paper titled "Make AI safe or make safe AI?" Russell wrote that AI companies offer "after-the-fact attempts to reduce unacceptable behavior once an AI system has been built." He contrasted that with the red lines approach: ensure built-in safety in the design from the very start, and "unacceptable behavior" won't be possible in the first place.

"It should be possible for developers to say, with high confidence, that their systems will not exhibit harmful behaviors," Russell wrote. "An important side effect of red line regulation will be to substantially increase developers’ safety engineering capabilities."

In his paper, Russell offered four examples of red lines: AI systems should not attempt to replicate themselves; they should never attempt to break into other computer systems; they should not give instructions on manufacturing bioweapons; and their output should not contain "false and harmful statements about real people."

From the standpoint of 2025, we might add red lines addressing the ongoing threats of AI psychosis, and of AI chatbots that can allegedly be manipulated into giving advice on suicide.

We can all agree on that, right?

Trouble is, Russell also believes that no large language model (LLM) is "capable of demonstrating compliance," even with his four minimal red-line requirements. Why? Because LLMs are predictive word engines that fundamentally don't understand what they're saying. They are not capable of reasoning, even on basic logic puzzles, and increasingly "hallucinate" answers to satisfy their users.

So true AI red line safety, arguably, would mean none of the current AI models would be allowed on the market. That doesn't bother Russell; as he points out, we don't care that compliance is difficult when it comes to medicine or nuclear power. We regulate regardless of outcome.

But the notion that AI companies will just voluntarily shut down their models until they can prove to regulators that no harm will come to users? This is a greater hallucination than anything ChatGPT can come up with.

Chris Taylor

Chris is a veteran tech, entertainment and culture journalist, author of 'How Star Wars Conquered the Universe,' and co-host of the Doctor Who podcast 'Pull to Open.' Hailing from the U.K., Chris got his start as a sub editor on national newspapers. He moved to the U.S. in 1996, and became senior news writer for Time.com a year later. In 2000, he was named San Francisco bureau chief for Time magazine. He has served as senior editor for Business 2.0, and West Coast editor for Fortune Small Business and Fast Company. Chris is a graduate of Merton College, Oxford and the Columbia University Graduate School of Journalism. He is also a long-time volunteer at 826 Valencia, the nationwide after-school program co-founded by author Dave Eggers. His book on the history of Star Wars is an international bestseller and has been translated into 11 languages.
