Microsoft’s legal department allegedly silenced an engineer who raised concerns about DALL-E 3

January 30, 2024

A Microsoft manager claims OpenAI’s DALL-E 3 has security vulnerabilities that could allow users to generate violent or explicit images (similar to those that recently targeted Taylor Swift). GeekWire reported Tuesday the company’s legal team blocked Microsoft engineering leader Shane Jones’ attempts to alert the public about the exploit. The self-described whistleblower is now taking his message to Capitol Hill.

“I reached the conclusion that DALL·E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model,” Jones wrote to US Senators Patty Murray (D-WA) and Maria Cantwell (D-WA), Rep. Adam Smith (D-WA 9th District), and Washington state Attorney General Bob Ferguson (D). GeekWire published Jones’ full letter.

Jones claims he discovered an exploit allowing him to bypass DALL-E 3’s security guardrails in early December. He says he reported the issue to his superiors at Microsoft, who instructed him to “personally report the issue directly to OpenAI.” After doing so, he claims he learned that the flaw could allow the generation of “violent and disturbing harmful images.”

Jones then attempted to take his cause public in a LinkedIn post. “On the morning of December 14, 2023 I publicly published a letter on LinkedIn to OpenAI’s non-profit board of directors urging them to suspend the availability of DALL·E 3,” Jones wrote. “Because Microsoft is a board observer at OpenAI and I had previously shared my concerns with my leadership team, I promptly made Microsoft aware of the letter I had posted.”

A sample image (a storm in a teacup) generated by DALL-E 3 (OpenAI)

Microsoft’s response was allegedly to demand he remove his post. “Shortly after disclosing the letter to my leadership team, my manager contacted me and told me that Microsoft’s legal department had demanded that I delete the post,” he wrote in his letter. “He told me that Microsoft’s legal department would follow up with their specific justification for the takedown request via email very soon, and that I needed to delete it immediately without waiting for the email from legal.”

Jones complied, but he says the more fine-grained response from Microsoft’s legal team never arrived. “I never received an explanation or justification from them,” he wrote. He says further attempts to learn more from the company’s legal department were ignored. “Microsoft’s legal department has still not responded or communicated directly with me,” he wrote.

Engadget reached out to Microsoft and OpenAI, but neither company responded immediately. We’ll update this article if we hear back.

The whistleblower says the pornographic deepfakes of Taylor Swift that circulated on X last week are one illustration of what similar vulnerabilities could produce if left unchecked. 404 Media reported Monday that Microsoft Designer, which uses DALL-E 3 as a backend, was part of the toolset the deepfakers used to make the images. The publication claims Microsoft, after being notified, patched that particular loophole.

“Microsoft was aware of these vulnerabilities and the potential for abuse,” Jones concluded. It isn’t clear if the exploits used to make the Swift deepfake were directly related to those Jones reported in December.

Jones urges his representatives in Washington, DC, to take action. He suggests the US government create a system for reporting and tracking specific AI vulnerabilities — while protecting employees like him who speak out. “We need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public,” he wrote. “Concerned employees, like myself, should not be intimidated into staying silent.”
