Tech News

Google wants an invisible digital watermark to bring transparency to AI art

August 29, 2023

Google took a step toward transparency in AI-generated images today. Google DeepMind announced SynthID, a watermarking and identification tool for generative art. The company says the technology embeds a digital watermark, invisible to the human eye, directly into an image’s pixels. SynthID is rolling out first to “a limited number” of customers using Imagen, Google’s art generator available through its suite of cloud-based AI tools.

One of the many issues with generative art, apart from the ethical implications of training on artists’ work, is its potential for creating deepfakes. The pope’s hot new hip-hop attire (an AI image created with Midjourney) going viral on social media was an early example of what could become commonplace as generative tools evolve. It doesn’t take much imagination to see how political ads built on AI-generated imagery could do far more damage than a funny image circulating on Twitter. “Watermarking audio and visual content to help make it clear that content is AI-generated” was one of the voluntary commitments that seven AI companies agreed to develop after a July meeting at the White House. Google is the first of those companies to launch such a system.

Google doesn’t go too far into the weeds about SynthID’s technical implementation (likely to prevent workarounds), but it says the watermark can’t be easily removed through simple editing techniques. “Finding the right balance between imperceptibility and robustness to image manipulations is difficult,” the company wrote in a DeepMind blog post published today. “We designed SynthID so it doesn’t compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colours, and saving with various lossy compression schemes — most commonly used for JPEGs,” DeepMind’s SynthID project leaders Sven Gowal and Pushmeet Kohli wrote.
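Google hasn’t published SynthID’s method, but the imperceptibility-versus-robustness balance the post describes is easy to see in a classic spread-spectrum watermark, where a low-amplitude pseudo-random pattern is added to the pixels and later recovered by correlation. The sketch below is a toy illustration of that older technique, not SynthID; every function name and parameter here is invented.

```python
import numpy as np

def make_key(shape, seed=42):
    """Secret +/-1 pattern known only to the watermarker (hypothetical)."""
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=shape)

def embed(image, key, strength=2.0):
    """Add the key at low amplitude; ~2/255 per pixel is imperceptible."""
    return np.clip(image + strength * key, 0.0, 255.0)

def detect(image, key):
    """Correlate with the key: roughly `strength` if marked, near 0 if not."""
    return float(np.mean((image - image.mean()) * key))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))
key = make_key(img.shape)
marked = embed(img, key)

# The mark survives uncorrelated degradation (a crude stand-in for
# filters or lossy compression) because the noise averages out of
# the correlation while the embedded pattern does not.
noisy = np.clip(marked + rng.normal(0, 10, marked.shape), 0, 255)

print(detect(img, key), detect(marked, key), detect(noisy, key))
```

The marked and degraded images score roughly `strength` higher than the clean image, which is the sense in which such a watermark “remains detectable” after modification.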

[Image: Google DeepMind]

The identification side of SynthID rates an image at one of three digital watermark confidence levels: detected, not detected and possibly detected. Because the watermark is embedded in the image’s pixels rather than its metadata, Google says its system can work alongside metadata-based approaches, like the one Adobe uses with its Photoshop generative features, currently available in an open beta.
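The three-level output can be pictured as a simple mapping from a detector’s confidence score to a label. The thresholds below are made up for illustration; Google hasn’t published how SynthID draws these lines.

```python
from enum import Enum

class WatermarkResult(Enum):
    DETECTED = "detected"
    NOT_DETECTED = "not detected"
    POSSIBLY_DETECTED = "possibly detected"

def classify(score: float, hi: float = 0.9, lo: float = 0.5) -> WatermarkResult:
    """Map a detector confidence score in [0, 1] to one of the three
    SynthID-style labels. Both thresholds are invented for this sketch."""
    if score >= hi:
        return WatermarkResult.DETECTED
    if score >= lo:
        return WatermarkResult.POSSIBLY_DETECTED
    return WatermarkResult.NOT_DETECTED

print(classify(0.95).value)  # detected
print(classify(0.70).value)  # possibly detected
print(classify(0.10).value)  # not detected
```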

SynthID comprises a pair of deep learning models: one for watermarking and one for identification. Google says the two were trained together on a diverse set of images, culminating in a combined ML model. “The combined model is optimised on a range of objectives, including correctly identifying watermarked content and improving imperceptibility by visually aligning the watermark to the original content,” Gowal and Kohli wrote.
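The “range of objectives” idea can be illustrated with a toy trade-off: embedding more strongly makes detection easier but distorts the image more, so an optimizer (here, a crude grid search standing in for real training) balances a detection term against a distortion term. The weighting, candidate strengths, and the spread-spectrum-style embedding are all invented for this sketch and are not SynthID’s actual objectives.

```python
import numpy as np

def embed(img, key, s):
    """Add the watermark key at strength s (clipped to valid pixel range)."""
    return np.clip(img + s * key, 0.0, 255.0)

def detection_score(img, key):
    """Correlation with the key: roughly s for marked images, ~0 otherwise."""
    return float(np.mean((img - img.mean()) * key))

def distortion(original, marked):
    """Mean squared error, a crude proxy for visible quality loss."""
    return float(np.mean((original - marked) ** 2))

img = np.random.default_rng(1).uniform(0, 255, size=(32, 32))
key = np.random.default_rng(7).choice([-1.0, 1.0], size=img.shape)

lam = 0.2  # hypothetical weight on imperceptibility

def objective(s):
    # Reward detectability, penalize visible distortion.
    marked = embed(img, key, s)
    return -detection_score(marked, key) + lam * distortion(img, marked)

best = min([0.5, 1.0, 2.0, 4.0, 8.0], key=objective)
print(best)
```

The search lands on an intermediate strength: weaker marks are hard to detect, stronger ones cost too much quality, which is the balance the DeepMind post calls “difficult” to strike.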

Google acknowledged that SynthID isn’t a perfect solution, conceding that it “isn’t foolproof against extreme image manipulations.” But it describes the watermark as “a promising technical approach for empowering people and organisations to work with AI-generated content responsibly.” The company says the tool could expand to other AI models, including those tasked with generating text (like ChatGPT), video and audio.

Although watermarks could help with deepfakes, it’s easy to imagine digital watermarking turning into an arms race with hackers, with services that adopt SynthID requiring continual updates. In addition, the open-source nature of Stable Diffusion, one of the leading generative tools, could make industry-wide adoption of SynthID or any similar solution a tall order: countless custom builds that run on local PCs are already out in the wild. Regardless, Google hopes to make SynthID available to third parties “in the near future” to at least improve AI transparency industry-wide.
