Google now thinks it's OK to use AI for weapons and surveillance

February 4, 2025

Google has made one of the most substantive changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove pledges it had made promising it would not "design or deploy" AI tools for use in weapons or surveillance technology. Previously, those guidelines included a section titled "applications we will not pursue," which is not present in the current version of the document.

Instead, there's now a section titled "responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."

That's a far broader commitment than the specific ones the company made as recently as the end of last month when the prior version of its AI principles was still live on its website. For instance, as it relates to weapons, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company said it would not develop tech that violates "internationally accepted norms."


When asked for comment, a Google spokesperson pointed Engadget to a blog post the company published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, say AI's emergence as a "general-purpose technology" necessitated a policy change. 

"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," the two wrote. "… Guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights — always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."

When Google first published its AI principles in 2018, it did so in the aftermath of Project Maven, a controversial government contract that, had Google decided to renew it, would have seen the company provide AI software to the Department of Defense for analyzing drone footage. Dozens of Google employees quit in protest of the contract, and thousands more signed a petition in opposition. When Google eventually published its new guidelines, CEO Sundar Pichai reportedly told staff he hoped they would stand "the test of time."

By 2021, however, Google began pursuing military contracts again, with what was reportedly an "aggressive" bid for the Pentagon's Joint Warfighting Cloud Capability cloud contract. At the start of this year, The Washington Post reported that Google employees had repeatedly worked with Israel's Defense Ministry to expand the government's use of AI tools.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-now-thinks-its-ok-to-use-ai-for-weapons-and-surveillance-224824373.html?src=rss
