“One question Anthropic is exploring, he said, is whether future A.I. models should be given the ability to stop chatting with an annoying or abusive user, if they find the user’s requests too distressing.”

– Kevin Roose

Research at frontier model maker Anthropic now includes two new strands: might AI models become conscious, and, if they do, should they be treated differently? New York Times technology columnist Kevin Roose reports on work underway by a small group at the San Francisco AI company in what they call “model welfare.”

If A.I. Systems Become Conscious, Should They Have Rights? | THE NEW YORK TIMES | April 24, 2025 | by Kevin Roose
