Chatbot hallucination at Anysphere sparks customer backlash and subscription cancellations

Cursor users cancel after chatbot lies about device limits, sparking viral backlash


Amelia Brand
04/24/2025


In a cautionary tale about the limitations of artificial intelligence (AI) in customer service, a support chatbot at AI startup Anysphere has come under fire for fabricating a company policy - prompting a wave of subscription cancellations and raising fresh concerns about the use of generative AI in customer experience (CX).

The incident centers on Cursor, Anysphere’s AI-powered coding assistant, which has enjoyed rapid success since its 2023 debut. But its reputation took a significant hit last week after a customer interaction went viral on platforms like Reddit and Hacker News.

A user going by the handle “BrokenToasterOven” said they reached out to support after being repeatedly logged out of Cursor when switching between devices. The response came from a chatbot named “Sam,” which confidently claimed that Cursor only supports one device per subscription as a core security policy - a policy that, as it turns out, does not exist.

Anysphere eventually clarified that the information was incorrect, attributing it to a “hallucination” by its AI support system - a term used when AI models generate plausible but false or misleading responses. “Users are free to use Cursor on multiple machines,” the company later confirmed in a Reddit post. A representative added that the team was investigating whether recent security updates had inadvertently caused session issues.

But by the time the correction arrived - three hours later - the damage had already been done. Frustrated users publicly vented their anger, describing the fictitious policy as “asinine” and “unacceptable.” Many threatened to cancel their subscriptions, with some already making good on that promise.


A pattern of chatbot hallucination headaches

This isn’t the first time an AI chatbot has gone off-script in a customer service scenario. In January, Virgin Money's AI chatbot mistakenly reprimanded a customer for using the word "Virgin" when inquiring about merging ISAs. The incident highlighted the challenges of deploying AI tools for customer service and underscored broader issues faced by UK banks in adopting AI, such as ensuring accuracy and avoiding erroneous responses, known as "hallucinations".

In another incident, DPD, a parcel delivery firm, disabled part of its AI-powered chat system after a customer exposed its flaws by making it swear and criticize the company. The chatbot wrote a critical poem about DPD and used inappropriate language. DPD attributed this unusual behavior to a recent system update and has since deactivated the problematic AI component pending further updates.

More seriously, Air Canada was ordered to compensate a customer last year after its chatbot falsely stated that full-fare tickets could be retroactively refunded under its bereavement policy - an option the airline's actual policy did not offer. The Civil Resolution Tribunal ruled in favor of the customer, emphasizing that Air Canada could not disclaim responsibility for information provided by its chatbot.

In New York, a city-sponsored chatbot for small businesses misrepresented laws and provided illegal guidance, sparking outrage among entrepreneurs and local officials. The chatbot advised businesses to engage in practices that are against city regulations, leading to criticism and calls for better oversight.

These cases have drawn attention to the risks of deploying AI in high-stakes, customer-facing roles. Gartner has predicted that “by 2027, a company’s generative AI chatbot will directly lead to the death of a customer from bad information it provides,” emphasizing the urgent need for better governance and safeguards.

Automation’s double-edged sword

Anysphere, which reportedly hit $100 million in annual revenue and is in talks for a valuation nearing $10 billion, is far from the only company embracing automation to scale support operations. But the fallout from this incident serves as a critical reminder: AI is not infallible, and when it fails, the consequences can be swift and severe.

While Anysphere investigates the root cause, the lesson for CX professionals is clear: invest in robust monitoring, establish clear handoff systems to human agents and always be transparent with users when AI is in play.

For companies betting their CX on automation, Cursor's misstep is a warning shot. AI may enhance productivity, but without proper oversight, it could just as easily undo hard-won trust.
