Tech News

The OpenAI team tasked with protecting humanity is no more

May 17, 2024

In the summer of 2023, OpenAI created a “Superalignment” team whose goal was to steer and control future AI systems that could be so powerful they could lead to human extinction. Less than a year later, that team is dead.

OpenAI told Bloomberg that the company was “integrating the group more deeply across its research efforts to help the company achieve its safety goals.” But a series of tweets from Jan Leike, one of the team’s leaders who recently quit, revealed internal tensions between the safety team and the larger company.

In a statement posted on X on Friday, Leike said that the Superalignment team had been fighting for resources to get research done. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” OpenAI did not immediately respond to a request for comment from Engadget.

Leike’s departure earlier this week came hours after OpenAI chief scientist Ilya Sutskever announced that he was leaving the company. Sutskever was not only one of the leads on the Superalignment team; he also helped co-found the company. Sutskever’s move came six months after he was involved in the decision to fire CEO Sam Altman over concerns that Altman hadn’t been “consistently candid” with the board. Altman’s all-too-brief ouster sparked an internal revolt within the company, with nearly 800 employees signing a letter threatening to quit if Altman wasn’t reinstated. Five days later, Altman was back as OpenAI’s CEO, after Sutskever had signed a letter stating that he regretted his actions.

When it announced the creation of the Superalignment team, OpenAI said that it would dedicate 20 percent of its computing power over the next four years to solving the problem of controlling powerful AI systems of the future. “[Getting] this right is critical to achieve our mission,” the company wrote at the time. On X, Leike wrote that the Superalignment team was “struggling for compute and it was getting harder and harder” to get crucial research around AI safety done. “Over the past few months my team has been sailing against the wind,” he wrote, adding that he had reached “a breaking point” with OpenAI’s leadership over disagreements about the company’s core priorities.

Over the last few months, there have been more departures from the Superalignment team. In April, OpenAI reportedly fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.

OpenAI told Bloomberg that its future safety efforts will be led by John Schulman, another co-founder, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4 — one of OpenAI’s flagship large language models — would replace Sutskever as chief scientist.

Superalignment wasn’t the only team at OpenAI focused on AI safety. In October, the company created a new “Preparedness” team to stem potential “catastrophic risks” from AI systems, including cybersecurity issues and chemical, nuclear and biological threats.

Update, May 17, 2024, 3:28PM ET: In response to a request for comment on Leike’s allegations, an OpenAI spokesperson directed Engadget to Sam Altman’s tweet saying that he’d say something in the next couple of days.
