KittyBNK
Tech News

Google wants an invisible digital watermark to bring transparency to AI art

August 29, 2023

Google took a step towards transparency in AI-generated images today. Google DeepMind announced SynthID, a watermarking and identification tool for generative art. The company says the technology embeds a digital watermark, invisible to the human eye, directly into an image’s pixels. SynthID is rolling out first to “a limited number” of customers using Imagen, Google’s art generator available on its suite of cloud-based AI tools.
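Google hasn’t published SynthID’s embedding scheme, but the general idea of hiding a signal in pixel values can be sketched with a classic least-significant-bit (LSB) watermark. Everything below — the function names and the keyed pseudorandom pattern — is an illustrative toy, not SynthID’s actual method:

```python
import numpy as np

def embed_watermark(img: np.ndarray, key: int) -> np.ndarray:
    """Write a key-derived bit pattern into the least significant bit
    of each pixel. Changing only the low bit shifts each value by at
    most 1/255 of full scale, well below what the eye can perceive."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=img.shape, dtype=np.uint8)
    return (img & 0xFE) | pattern  # clear the low bit, then write the pattern

def detect_watermark(img: np.ndarray, key: int) -> float:
    """Fraction of pixels whose low bit matches the key's pattern:
    ~1.0 for a watermarked image, ~0.5 for an unrelated one."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=img.shape, dtype=np.uint8)
    return float(np.mean((img & 1) == pattern))
```

A fixed per-pixel pattern like this is imperceptible but brittle, which is precisely why DeepMind trains a model to place the mark rather than using a fixed scheme.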

One of the many issues with generative art — apart from the ethical implications of training on artists’ work — is the potential for creating deepfakes. The pope’s hot new hip-hop attire (an AI image created with Midjourney), which went viral on social media, was an early example of what could become more commonplace as generative tools evolve. It doesn’t take much imagination to see how something like political ads using AI-generated art could do much more damage than a funny image circulating on Twitter. “Watermarking audio and visual content to help make it clear that content is AI-generated” was one of the voluntary commitments that seven AI companies agreed to develop after a July meeting at the White House. Google is the first of those companies to launch such a system.

Google doesn’t go too far into the weeds about SynthID’s technical implementation (likely to prevent workarounds), but it says the watermark can’t be easily removed through simple editing techniques. “Finding the right balance between imperceptibility and robustness to image manipulations is difficult,” the company wrote in a DeepMind blog post published today. “We designed SynthID so it doesn’t compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colours, and saving with various lossy compression schemes — most commonly used for JPEGs,” DeepMind’s SynthID project leaders Sven Gowal and Pushmeet Kohli wrote.
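The fragility the team is guarding against is easy to demonstrate with a naive least-significant-bit watermark (a toy stand-in, not SynthID’s method): even mild quantisation, a crude proxy for JPEG’s lossy compression, erases the hidden bits entirely.

```python
import numpy as np

rng = np.random.default_rng(7)
pattern = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)  # keyed bit pattern
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # stand-in "original"
watermarked = (img & 0xFE) | pattern  # write the pattern into each pixel's low bit

def match_rate(im: np.ndarray) -> float:
    """Fraction of pixels whose low bit agrees with the pattern."""
    return float(np.mean((im & 1) == pattern))

# Coarse quantisation mimics what lossy compression does to low bits:
compressed = (watermarked // 8) * 8

assert match_rate(watermarked) == 1.0  # intact before compression
assert match_rate(compressed) < 0.6    # reduced to chance afterwards
```

A learned watermark spreads its signal across many pixels and is trained against exactly these manipulations, which is the robustness side of the trade-off Gowal and Kohli describe.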

(Image credit: Google DeepMind)

The identification part of SynthID rates the image based on three digital watermark confidence levels: detected, not detected and possibly detected. Since the tool is embedded into the image’s pixels, Google says its system can work alongside metadata-based approaches, like the one Adobe uses with its Photoshop generative features, currently available in an open beta.
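DeepMind hasn’t published how the detector’s internal score maps onto those three labels, but the reported outcomes behave like a thresholded confidence score. A hypothetical mapping — the 0.9 and 0.6 cut-offs are invented for illustration, not published values:

```python
def classify_watermark(score: float) -> str:
    """Map a detector confidence score in [0, 1] to SynthID's three
    reported outcomes. The 0.9 / 0.6 thresholds are illustrative
    placeholders, not DeepMind's actual values."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.9:
        return "detected"
    if score >= 0.6:
        return "possibly detected"
    return "not detected"
```

The middle “possibly detected” band is what lets the system stay useful after heavy edits: a degraded watermark drops in confidence without being dismissed outright.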

SynthID includes a pair of deep learning models: one for watermarking and the other for identifying. Google says the two trained on diverse images, culminating in a combined ML model. “The combined model is optimised on a range of objectives, including correctly identifying watermarked content and improving imperceptibility by visually aligning the watermark to the original content,” Gowal and Kohli wrote.

Google acknowledged that it isn’t a perfect solution, adding that it “isn’t foolproof against extreme image manipulations.” But it describes the watermark as “a promising technical approach for empowering people and organisations to work with AI-generated content responsibly.” The company says the tool could expand to other AI models, including those tasked with generating text (like ChatGPT), video and audio. 

Although watermarks could help with deepfakes, it’s easy to imagine digital watermarking turning into an arms race with hackers, with services that adopt SynthID requiring continual updating. In addition, the open-source nature of Stable Diffusion, one of the leading generative tools, could make industry-wide adoption of SynthID or any similar solution a tall order: It already has countless custom builds that can run on local PCs out in the wild. Regardless, Google hopes to make SynthID available to third parties “in the near future” to at least improve AI transparency industry-wide. 


© 2025 kittybnk.com - All Rights Reserved!
