KittyBNK
Tech News

OpenAI’s new safety team is led by board members, including CEO Sam Altman

May 28, 2024

OpenAI has created a new Safety and Security Committee less than two weeks after the company dissolved the team tasked with protecting humanity from AI’s existential threats. This latest iteration of the group responsible for OpenAI’s safety guardrails will include two board members and CEO Sam Altman, raising questions about whether the move is little more than self-policing theatre amid a breakneck race for profit and dominance alongside partner Microsoft.

The Safety and Security Committee, formed by OpenAI’s board, will be led by board members Bret Taylor (chair), Nicole Seligman, Adam D’Angelo and CEO Sam Altman. The new team follows the high-profile resignations of co-founder Ilya Sutskever and Jan Leike, which raised more than a few eyebrows. Their former “Superalignment” team had only been created last July.

Following his resignation, Leike wrote in an X (Twitter) thread on May 17 that, although he believed in the company’s core mission, he left because the two sides (product and safety) “reached a breaking point.” Leike added that he was “concerned we aren’t on a trajectory” to adequately address safety-related issues as AI grows more intelligent. He posted that the Superalignment team had recently been “sailing against the wind” within the company and that “safety culture and processes have taken a backseat to shiny products.”

A cynical take would be that a company focused primarily on “shiny products” — while trying to fend off the PR blow of high-profile safety departures — might create a new safety team led by the same people speeding toward those shiny products.

Former OpenAI head of alignment Jan Leike (Jan Leike / X)

The safety departures earlier this month weren’t the only concerning news from the company recently. It also launched (and quickly pulled) a new voice model that sounded remarkably like two-time Oscar nominee Scarlett Johansson. The Jojo Rabbit actor then revealed that OpenAI CEO Sam Altman had pursued her consent to use her voice to train an AI model, but that she had refused.

In a statement to Engadget, Johansson’s team said she was shocked that OpenAI would cast a voice talent that “sounded so eerily similar” to her after pursuing her authorization. The statement added that Johansson’s “closest friends and news outlets could not tell the difference.”

OpenAI also backtracked on nondisparagement agreements it had required from departing executives, changing its tune to say it wouldn’t enforce them. Before that, the company forced exiting employees to choose between being able to speak against the company and keeping the vested equity they earned.

The Safety and Security Committee plans to “evaluate and further develop” the company’s processes and safeguards over the next 90 days. After that, the group will share its recommendations with the entire board. After the whole leadership team reviews its conclusions, it will “publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

In its blog post announcing the new Safety and Security Committee, OpenAI confirmed that the company is currently training its next model, which will succeed GPT-4. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company wrote.

