Microsoft’s legal department allegedly silenced an engineer who raised concerns about DALL-E 3

January 30, 2024

A Microsoft manager claims OpenAI’s DALL-E 3 has security vulnerabilities that could allow users to generate violent or explicit images (similar to those that recently targeted Taylor Swift). GeekWire reported Tuesday the company’s legal team blocked Microsoft engineering leader Shane Jones’ attempts to alert the public about the exploit. The self-described whistleblower is now taking his message to Capitol Hill.

“I reached the conclusion that DALL·E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model,” Jones wrote to US Senators Patty Murray (D-WA) and Maria Cantwell (D-WA), Rep. Adam Smith (D-WA 9th District), and Washington state Attorney General Bob Ferguson (D). GeekWire published Jones’ full letter.

Jones claims he discovered an exploit allowing him to bypass DALL-E 3’s security guardrails in early December. He says he reported the issue to his superiors at Microsoft, who instructed him to “personally report the issue directly to OpenAI.” After doing so, he claims he learned that the flaw could allow the generation of “violent and disturbing harmful images.”

Jones then attempted to take his cause public in a LinkedIn post. “On the morning of December 14, 2023 I publicly published a letter on LinkedIn to OpenAI’s non-profit board of directors urging them to suspend the availability of DALL·E 3,” Jones wrote. “Because Microsoft is a board observer at OpenAI and I had previously shared my concerns with my leadership team, I promptly made Microsoft aware of the letter I had posted.”

Image: A sample image (“a storm in a teacup”) generated by DALL-E 3. (OpenAI)

Microsoft’s response was allegedly to demand he remove his post. “Shortly after disclosing the letter to my leadership team, my manager contacted me and told me that Microsoft’s legal department had demanded that I delete the post,” he wrote in his letter. “He told me that Microsoft’s legal department would follow up with their specific justification for the takedown request via email very soon, and that I needed to delete it immediately without waiting for the email from legal.”

Jones complied, but he says the more fine-grained response from Microsoft’s legal team never arrived. “I never received an explanation or justification from them,” he wrote. He says further attempts to learn more from the company’s legal department were ignored. “Microsoft’s legal department has still not responded or communicated directly with me,” he wrote.

Engadget reached out to Microsoft and OpenAI, but neither company responded immediately. We’ll update this article if we hear back.

The whistleblower says the pornographic deepfakes of Taylor Swift that circulated on X last week are one illustration of what similar vulnerabilities could produce if left unchecked. 404 Media reported Monday that Microsoft Designer, which uses DALL-E 3 as a backend, was part of the toolset used to make the images. The publication claims Microsoft patched that particular loophole after being notified.

“Microsoft was aware of these vulnerabilities and the potential for abuse,” Jones concluded. It isn’t clear if the exploits used to make the Swift deepfake were directly related to those Jones reported in December.

Jones urges his representatives in Washington, DC, to take action. He suggests the US government create a system for reporting and tracking specific AI vulnerabilities — while protecting employees like him who speak out. “We need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public,” he wrote. “Concerned employees, like myself, should not be intimidated into staying silent.”


