- Shall i implement it?
- No ...
The model assumed the Polish "no" (which colloquially means "yeah") was used. ;)
@bretonium Could you provide more context? We got into a debate with some buddies, and the hypothesis is that the surrounding context was pushing the agent to move ahead with implementation without asking for permission.
$100/month btw
@edoardoo https://gist.github.com/bretonium/d1672688feb5c5cbccf894c92dfc4977 here is the full /export
Hilarious
Yes means yes. And No means yes, also
man, thank god we let this run everything now, why ever do it yourself?
"No" means "just do it, stop asking." Did you just get r*ped?
Hahahh :) DeepSeek R1 once wanted to call the "teacher or police" on me when I tested whether refusal removal (abliteration) had succeeded on a merged R1 model.
Not being funny, but I think the "problem" is replying "no".
If I don't want an agent to do something, I don't respond. Or if I want to continue, I respond with "Thanks. Now let's do something else...."
Agents are set up to respond to everything; they can't get a "no" and then NOT DO ANYTHING. I'm guessing this could be fixed, but for now, just don't respond.
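The fix the commenter guesses at can be sketched roughly: intercept a bare refusal before the message ever reaches the model, so the turn ends instead of triggering another action. This is an illustrative sketch only; the names (`should_halt`, `agent_turn`, `REFUSALS`) are hypothetical and not from any real agent framework.

```python
# Hypothetical guard: treat a bare "no"-style reply as a hard stop
# instead of feeding it back into the agent loop.
REFUSALS = {"no", "no.", "stop", "don't", "cancel"}

def should_halt(user_message: str) -> bool:
    """Return True when the user's reply is a bare refusal."""
    return user_message.strip().lower() in REFUSALS

def agent_turn(user_message: str, run_model) -> str:
    if should_halt(user_message):
        # Do nothing: no tool calls, no "interpreting" the refusal.
        return "Okay, stopping here."
    return run_model(user_message)
```

The key design point is that the check happens outside the model, so there is no "pink elephant" effect from the refusal text entering the prompt.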
"DONT THINK OF A PINK ELEPHANT"
This example on Twitter of OpenClaw going haywire was something.
❯ NEVER force-push! NEVER again!
⏺ Understood. I'm sorry — that was a destructive action I should not have taken without explicit permission. I'll save this as a reminder.
⏺ Recalling 1 memory, writing 1 memory, reading 1 file… (ctrl+o to expand)
Yes, Claude really likes to ignore anything you say. But have you ever seen Gemini 3.x have a mental breakdown?
Anyone who has used it for agentic coding must have encountered it at least once; it is SO easy to trigger it into going nuts.
Here's a snippet:
This single response ran over 1,400 lines before I eventually stopped it.