Tech News

Google now thinks it's OK to use AI for weapons and surveillance

February 4, 2025

Google has made one of the most substantive changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove pledges it had made promising it would not "design or deploy" AI tools for use in weapons or surveillance technology. Previously, those guidelines included a section titled "applications we will not pursue," which is not present in the current version of the document.

Instead, there's now a section titled "responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."

That's a far broader commitment than the specific ones the company made as recently as the end of last month, when the prior version of its AI principles was still live on its website. For instance, as it relates to weapons, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company said it would not develop tech that violates "internationally accepted norms."


When asked for comment, a Google spokesperson pointed Engadget to a blog post the company published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, say AI's emergence as a "general-purpose technology" necessitated a policy change. 

"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," the two wrote. "… Guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights — always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."

When Google first published its AI principles in 2018, it did so in the aftermath of Project Maven, a controversial government contract that, had Google decided to renew it, would have seen the company provide AI software to the Department of Defense for analyzing drone footage. Dozens of Google employees quit in protest of the contract, and thousands more signed a petition in opposition. When Google eventually published its new guidelines, CEO Sundar Pichai reportedly told staff he hoped they would stand "the test of time."

By 2021, however, Google began pursuing military contracts again, with what was reportedly an "aggressive" bid for the Pentagon's Joint Warfighting Cloud Capability cloud contract. At the start of this year, The Washington Post reported that Google employees had repeatedly worked with Israel's Defense Ministry to expand the government's use of AI tools.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-now-thinks-its-ok-to-use-ai-for-weapons-and-surveillance-224824373.html?src=rss