https://nitter.net/_/status/1991624623407161383
fun-conifer-bushlark (Shown notes: 7 · Rating impact: 143)
"Adversarial prompting" is tricking an LLM into bypassing design constraints, e.g. prompt injection or jailbreaks. The questions Grok answered were not "adversarial prompting." They were simple, straightforward questions that did not solicit pro-Elon responses.
