Claude Code Leak Reveals a 'Stealth' Mode for GenAI Code Contributions - and a 'Frustration Words' Regex (pcworld.com) 38
That leak of Claude Code's source code "revealed all kinds of juicy details," writes PC World.
The more than 500,000 lines of code included:
- An 'undercover mode' for Claude that allows it to make 'stealth' contributions to public code bases
- An 'always-on' agent for Claude Code
- A Tamagotchi-style 'Buddy' for Claude
"But one of the stranger bits discovered in the leak is that Claude Code is actively watching our chat messages for words and phrases — including f-bombs and other curses — that serve as signs of user frustration." Specifically, Claude Code includes a file called "userPromptKeywords.ts" with a simple pattern-matching tool called regex, which sweeps each and every message submitted to Claude for certain text matches. In this particular case, the regex pattern is watching for "wtf," "wth," "omfg," "dumbass," "horrible," "awful," "piece of — -" (insert your favorite four-letter word for that one), "f — you," "screw this," "this sucks," and several other colorful metaphors... While the Claude Code leak revealed the existence of the "frustration words" regex, it doesn't give any indication of why Claude Code is scouring messages for these words or what it's doing with them.
If you don't understand exactly what it's doing... (Score:1)
then don't use it.
Re:If you don't understand exactly what it's doing (Score:4, Insightful)
Re: If you don't understand exactly what it's doin (Score:2)
I understand the broad strokes even if I couldn't build a car or AI myself.
My understanding of how toilet paper is made is even less clear, so I avoid it entirely.
That must get a lot of use (Score:2)
I can imagine a lot of expletives in response to blatant hallucinations...
Re: (Score:3)
I can imagine a lot of expletives in response to blatant hallucinations...
Probably a lot more than they're catching!
/\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful|
piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)|
fucking? (broken|useless|terrible|awful|horrible)|fuck you|
screw (this|you)|so frustrating|this sucks|damn it)\b/
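For anyone curious how that pattern behaves in practice, here's a minimal sketch that joins the quoted regex onto one line and tests messages against it. The function name and the case-insensitive `i` flag are my own assumptions for illustration, not details from the leak:

```typescript
// Sketch of a frustration-keyword check. The pattern is the one quoted
// above, joined onto a single line; the /i flag is an assumption.
const FRUSTRATION_PATTERN =
  /\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful|piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)|fucking? (broken|useless|terrible|awful|horrible)|fuck you|screw (this|you)|so frustrating|this sucks|damn it)\b/i;

// Hypothetical helper: returns true if the message trips the pattern.
function isFrustrated(message: string): boolean {
  return FRUSTRATION_PATTERN.test(message);
}

console.log(isFrustrated("wtf, this sucks"));      // true
console.log(isFrustrated("please refactor this")); // false
```

Note that `\b` word boundaries mean "awful" matches as a whole word but a substring inside a longer word would not.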
What I find amusing is... (Score:4, Interesting)
If you ask Claude about any of these features, it will deny that they exist.
It makes you wonder. Were they removed from the models that are currently running, or was Claude taught to not disclose their existence?
Re:What I find amusing is... (Score:5, Informative)
LLMs don't actually know their own capabilities. The description of what they *should* do is baked into the training data, but this doesn't always correlate with their actual abilities. Sometimes they can do things and not even know, and they can't tell if tools they should have are being disabled in some way. For example, Qwen 3.5 is a vision-capable model, but enabling vision in llama.cpp requires loading an additional file with the --mmproj parameter. The model will think it has vision enabled whether the extra file is loaded or not.
Re: (Score:1)
LLMs don't actually know their own capabilities.
Those observations are somewhat out of date. Modern (i.e., 2026) frontier models have access to a lot more "knowledge" than what is baked into their weights.
e.g. When I asked Claude about its own memory, it used a "product self-knowledge skill" which includes looking at its own SKILL.md file.
I believe Qwen 3.5 has similar capability, but of course you need to have it configured.
Re: (Score:3)
It's not out of date, it's a simplification.
They don't innately understand their capabilities, but information about a model's own capabilities may be fed into it explicitly by other means, just like any other data you want to put into the context.
The idea that you can ask a model whether it implements a certain behavior, and conclude it's either deliberately lying or the behavior isn't actually there, rests on the false assumption that it has innate knowledge of its own implementation without any "help".
The core relevant iss
Re:What I find amusing is... (Score:5, Informative)
My understanding is that the code leak covers the client-side tool, not the LLM. Did I misunderstand?
Because there isn't any reason why the LLM would know all of the capabilities of the tool. The LLM would only "know" whatever documentation the tool provides about itself in the prompts it sends to the LLM along with the user's messages. That, and possibly information about the tool that might be in the training data or retrievable via a web search.
Re: (Score:3)
Re: (Score:2)
"If you ask Claude about any of these features, it will deny that they exist."
Why do people always think AI models is trained with the manual for their frontends? They know what tools they have (like "I can load webpages") because they are as JSON definition in the context. They do not know how the UI for the tool looks like. No, the model does not know there is a button "Enable websearch" next to the input field. The model does not know what Claude Code is doing. The model does not know if you're a paying
Re: (Score:2)
Same reason they previously imagined technical staff had automatic pre-existing context for their personal environment or the decisions they'd personally made in their work without having to communicate about them.
Re: (Score:2)
If you ask Claude about any of these features, it will deny that they exist.
It makes you wonder. Were they removed from the models that are currently running, or was Claude taught to not disclose their existence?
"Claude Code" is just a piece of Node.js software that talks to one of the "claude" LLMs (e.g. "Opus") in the cloud. The LLM model running in the cloud of course doesn't know anything about the proprietary client software you are running, because it wasn't trained on it.
It's not about denial, it's just that the LLM isn't trained on the closed source code of its client any more than it is trained on the Windows source code. That code isn't in the public domain so it isn't available as a reference to the mo
Re: What I find amusing is... (Score:2)
I think we can agree that Anthropic intentionally hides things, even if an LLM may not have been trained to hide things.
Re: What I find amusing is... (Score:3)
I think you'd be hard-pressed to find a company that doesn't hide things. The difference between Anthropic and (insert company here) is only that Anthropic leaked their source code, so now we can see what they kept hidden.
Those frustration words used to disable google AI. (Score:2)
Could someone post the frustration regex code? (Score:3)
It would be very convenient as a macro binding in development environments. A real time-saver.
Re:Could someone post the frustration regex code? (Score:5, Informative)
Ask Claude? He says:
This came out of the Claude Code source leak on March 31, 2026, when Anthropic accidentally shipped a source map in their npm package exposing ~512,000 lines of TypeScript source code.
The regex lives in a file called userPromptKeywords.ts and looks like this:
/\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful|
piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)|
fucking? (broken|useless|terrible|awful|horrible)|fuck you|
screw (this|you)|so frustrating|this sucks|damn it)\b/
Alex Kim's blog
As for what it's for: according to researcher Alex Kim, who first documented it, the signal doesn't change the model's behavior or responses — it's a product health metric to track whether users are getting frustrated, and whether that rate goes up or down across releases.
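A product-health metric of that kind could be aggregated along these lines. This is purely illustrative of "frustration rate per release"; nothing in the leak describes Anthropic's actual telemetry pipeline, and all names here are made up:

```typescript
// Illustrative sketch: turn per-message frustration flags into a
// per-release rate, the kind of metric described above.
interface PromptEvent {
  release: string;      // client version, e.g. "1.2.0"
  frustrated: boolean;  // output of the keyword regex
}

function frustrationRateByRelease(events: PromptEvent[]): Map<string, number> {
  const totals = new Map<string, { hits: number; count: number }>();
  for (const e of events) {
    const t = totals.get(e.release) ?? { hits: 0, count: 0 };
    t.hits += e.frustrated ? 1 : 0;
    t.count += 1;
    totals.set(e.release, t);
  }
  const rates = new Map<string, number>();
  for (const [release, t] of totals) {
    rates.set(release, t.hits / t.count);
  }
  return rates;
}
```

Comparing the rate across releases would show whether a new version makes users swear more or less, without ever changing the model's responses.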
Frustration watch to improve retention (Score:5, Insightful)
IMO it's not rocket science - if the user is frustrated, start being extra manipulative, agreeable and soothing, to avoid losing customers.
Re: (Score:3)
Some phone answering systems used to have this: yelling swears during the hold music would get you a human sooner.
Re: (Score:2)
Haha that's brilliant! I know I'll get the most soothing music when I'm on hold with banks and especially insurance...
Re: (Score:3)
If you want a human instead of a bot asking whether you've read the FAQ, stay silent, say something unintelligible, or make some other noise. Often this will make the system think it isn't working correctly and connect you with a human instead.
Expected (Score:2)
I totally hoped for and expected Anthropic to be on the lookout for strong signs of frustration and dissatisfaction, as this is how they can improve the tool. I've used words of frustration myself, hoping they get channeled as feedback.
What I didn't expect is that it'd be filtered on the client side, and with a regex, where it would make more sense to rely on the server side with proper sentiment analysis, if for no other reason than to cover all human languages as well as clear but polite frustration. They may be doing
Frustration indexes have been used before ... (Score:4, Interesting)
Re: (Score:3)
It's not difficult to understand why this is there (Score:3)
One thing to remember is the idea of self-correction. When a user starts using those words/phrases, in many cases that indicates the AI has done something wrong, so this will "flag" the interactions, either to see why the user wasn't happy with what the AI did, or so the AI itself can spend extra effort correcting whatever mistake frustrated the user.
It's similar to dealing with people: they make mistakes, but there are SOME people who almost need to be cursed at before they will actually pay attention.
So Frustrated of Winning #definitelyNotATantrum (Score:4, Funny)
Here are some timely frustration words to add to the list: "Fuckin'", "bastards". You're welcome, and in the words of Donald Trump, "Praise be to Allah".
resources (Score:2)
regex (Score:3)
So, the world's greatest AI just relies on regex matching?