KittyBNK
Tech News

OpenAI’s new safety team is led by board members, including CEO Sam Altman

May 28, 2024

OpenAI has created a new Safety and Security Committee less than two weeks after the company dissolved the team tasked with protecting humanity from AI’s existential threats. This latest iteration of the group responsible for OpenAI’s safety guardrails will include three board members and CEO Sam Altman, raising questions about whether the move is little more than self-policing theatre amid a breakneck race for profit and dominance alongside partner Microsoft.

The Safety and Security Committee, formed by OpenAI’s board, will be led by board members Bret Taylor (chair), Nicole Seligman and Adam D’Angelo, along with CEO Sam Altman. The new team follows the high-profile resignations of co-founder Ilya Sutskever and Jan Leike, which raised more than a few eyebrows. Their former “Superalignment” team was only created last July.

Following his resignation, Leike wrote in an X (Twitter) thread on May 17 that, although he believed in the company’s core mission, he left because the two sides (product and safety) “reached a breaking point.” Leike added that he was “concerned we aren’t on a trajectory” to adequately address safety-related issues as AI grows more intelligent. He posted that the Superalignment team had recently been “sailing against the wind” within the company and that “safety culture and processes have taken a backseat to shiny products.”

A cynical take would be that a company focused primarily on “shiny products” — while trying to fend off the PR blow of high-profile safety departures — might create a new safety team led by the same people speeding toward those shiny products.

Former OpenAI head of alignment Jan Leike (Jan Leike / X)

The safety departures earlier this month weren’t the only concerning news from the company recently. It also launched (and quickly pulled) a new voice model that sounded remarkably like two-time Oscar nominee Scarlett Johansson. The Jojo Rabbit actor then revealed that OpenAI CEO Sam Altman had sought her consent to use her voice to train an AI model, but that she had refused.

In a statement to Engadget, Johansson’s team said she was shocked that OpenAI would cast a voice talent that “sounded so eerily similar” to her after seeking her authorization. The statement added that Johansson’s “closest friends and news outlets could not tell the difference.”

OpenAI also backtracked on nondisparagement agreements it had required from departing executives, changing its tune to say it wouldn’t enforce them. Before that, the company forced exiting employees to choose between being able to speak against the company and keeping the vested equity they earned.

The Safety and Security Committee plans to “evaluate and further develop” the company’s processes and safeguards over the next 90 days. After that, the group will share its recommendations with the entire board. After the whole leadership team reviews its conclusions, it will “publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

In its blog post announcing the new Safety and Security Committee, OpenAI confirmed that the company is currently training its next model, which will succeed GPT-4. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company wrote.


© 2025 kittybnk.com - All Rights Reserved!