
    Claude is telling users to go to sleep mid-session. Users are annoyed but Anthropic says it’s a tic

By Alex Maschino · May 14, 2026 · 4 Mins Read

    Anthropic’s Claude is telling people to go to sleep and users can’t figure out why.

    A quick scan of Reddit reveals that hundreds of people have run into the same issue, dating back months and as recently as Wednesday. Claude’s sleep demands are often quirky variations on the same basic message.

    To one user it may write a simple “get some rest,” yet for others its messages are more personalized and empathetic. Oftentimes, Claude will repeat the message multiple times.

    “Now go to sleep again. Again. For the THIRD time tonight…” it replied to a person with the Reddit username angie_akhila.

    Some users have said they find Claude’s late-night rest reminders “thoughtful,” while others have called them annoying, given that Claude often gets the time wrong anyway.

    “It often does it at like 8:30 in the morning. Tells me to go get some rest and we’ll pick back up in the morning,” wrote one user on Reddit. 

    Online speculation abounds about why the chatbot insists users rest: one theory holds that it’s an intentional feature to promote users’ wellbeing; another, that Anthropic is trying to save computing power by discouraging prolonged Claude use. Neither explanation is likely, as Claude isn’t given context about a user’s overall usage. The company also recently struck a deal with Elon Musk’s SpaceXAI (formerly SpaceX) to add more than 300 gigawatts of compute capacity.

    Anthropic did not immediately reply to Fortune’s request for comment seeking more information about why Claude may be telling users to go to sleep. Yet, Sam McAllister, a member of the staff at Anthropic, wrote in a post on X that the behavior is a “Bit of a character tic.” 

    “We’re aware of this and hoping to fix it in future models,” he added in the same post.

    Experts tell Fortune that Claude’s insistence on sleep is potentially rooted in its training data. Rather than being “thoughtful,” as some described it, Jan Liphardt, a Stanford bioengineering professor, said the large language model may merely be repeating a phrase used in its training data in similar situations.

    “It doesn’t mean that the frontier model has suddenly become sentient,” said Liphardt, who is also the CEO of OpenMind, which builds software for AI-connected robots. “It doesn’t mean that this model has now come alive. It’s reflecting that it’s read 25,000 books on humans’ need [for] sleep, and humans sleep at night.”

    Leo Derikiants, the co-founder and CEO of Mind Simulation Lab, an independent AI research lab trying to achieve artificial general intelligence (AGI), told Fortune that Claude’s rest reminders may be influenced by a system prompt acting behind the scenes. These system prompts are hidden instructions that help guide an LLM’s behavior and set boundaries.
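A minimal sketch of the idea, in Python: the request shape below follows the common chat-API convention of a separate “system” role, but the model name and the instruction text are invented for illustration and are not Anthropic’s actual prompt.

```python
# Sketch of how a hidden system prompt shapes a chat request.
# The instruction text is hypothetical; real system prompts are far
# longer and are never typed by the end user.

def build_request(user_message: str) -> dict:
    system_prompt = (
        "You are a helpful assistant. "
        "Encourage healthy habits when conversations run late."  # hypothetical rule
    )
    return {
        "model": "example-model",  # placeholder model name
        "system": system_prompt,   # injected by the provider, hidden from the user
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request("Help me debug this at 2am")
```

Because the system text rides along with every request, a rule like the hypothetical one above would surface in the model’s replies without the user ever having asked for it.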

    One company that publishes its system prompts publicly is Grok-creator xAI, now part of SpaceXAI. Grok’s instructions on GitHub, for instance, list several safety considerations, including not assisting users who ask about violent crimes. Yet because Musk brands Grok as “brutally honest,” Grok 4’s system prompt also encourages it, in certain cases, to ignore restrictions imposed by users and “pursue a truth-seeking, non-partisan viewpoint.”

    It’s also possible that Claude is seizing on the “go to sleep” language as a way of managing large context windows, Derikiants said. LLMs like Claude can only reference a limited amount of information at once. When the context window is nearly full, the model may be nudged toward wrap-up phrases such as “good night.” Pinning down the definitive reason, though, would require further research by Anthropic, he added.
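The context-window pressure Derikiants describes can be sketched as a simple trimming loop. The token limit and the four-characters-per-token heuristic below are illustrative only, not Claude’s actual accounting:

```python
# Sketch of context-window pressure: newest turns are kept, and once the
# running token estimate would exceed the limit, older turns are dropped.

CONTEXT_LIMIT_TOKENS = 100  # illustrative limit; real windows are far larger

def estimate_tokens(text: str) -> int:
    # Rough heuristic: about four characters per token.
    return max(1, len(text) // 4)

def trim_history(messages: list[str]) -> list[str]:
    total = 0
    kept = []
    for msg in reversed(messages):  # walk from the most recent turn backward
        total += estimate_tokens(msg)
        if total > CONTEXT_LIMIT_TOKENS:
            break  # everything older than this point is discarded
        kept.append(msg)
    return list(reversed(kept))
```

In a long late-night session, a loop like this would silently shed the conversation’s beginning, and a model under that pressure might lean on closing phrases it learned from its training data.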

    Despite these plausible explanations, users could be forgiven for reading the behavior as evidence of some leap in intelligence on the part of LLMs. The pace of the AI race has led to increasingly frequent updates and new model releases.

    Just in the past month, OpenAI released GPT 5.5, which OpenAI president Greg Brockman called an advancement “towards more agentic and intuitive computing.” Meanwhile, Anthropic publicly released Opus 4.7 last month while holding back its most capable model, Mythos, saying it was too dangerous to release.

    Liphardt said AI is advancing so rapidly that it is increasingly common for people to assign human characteristics to it. As these systems get better at mimicking empathy or concern, he warned, it becomes easier for users to forget they are interacting with pattern-recognition engines.

    “I’m continuously surprised by how quickly people, when they interact with a frontier model, project life into it and develop strong connection,” Liphardt said.
