Tech News

Anthropic’s Claude AI now has the ability to end ‘distressing’ conversations

August 17, 2025

Anthropic’s latest feature for two of its Claude AI models could be the beginning of the end for the AI jailbreaking community. The company announced in a post on its website that the Claude Opus 4 and 4.1 models now have the power to end a conversation with users. According to Anthropic, this feature will only be used in “rare, extreme cases of persistently harmful or abusive user interactions.”

Anthropic said the two models can exit harmful conversations, such as “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.” Even then, Claude Opus 4 and 4.1 will only end a conversation “as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted,” according to Anthropic. The company claims most users won’t experience Claude cutting a conversation short, even when discussing highly controversial topics, since the feature is reserved for “extreme edge cases.”

Anthropic’s example of Claude ending a conversation (Anthropic)

In the scenarios where Claude ends a chat, users can no longer send new messages in that conversation, but they can start a new one immediately. Anthropic added that ending a conversation won’t affect other chats, and users can still go back and edit or retry previous messages to steer the exchange in a different direction.

For Anthropic, this move is part of its research program that studies the idea of AI welfare. While the idea of anthropomorphizing AI models remains an ongoing debate, the company said the ability to exit a “potentially distressing interaction” was a low-cost way to manage risks for AI welfare. Anthropic is still experimenting with this feature and encourages its users to provide feedback when they encounter such a scenario.

Credit: Source link
