
@tunelko
Last active March 10, 2026 05:12

This document is updated frequently.

The debate is becoming polarized between those who see the use of AI in CTF challenges as a disaster and those of us who are somewhat more optimistic and see an opportunity for improvement.

  1. CTFs as we have known them are dead. It has become trivial to solve almost any type of challenge.

  2. Paradoxically, this also means they are not entirely dead. The situation creates an opportunity to rethink the model and explore ways to restrict direct access by AI systems. This is obviously not trivial.

  3. The work has to come from the technology itself. Preventing AI from solving challenges 100% of the time will be impossible, but the field will likely move toward a more normalized state once the hype and the backlash settle. It should be possible to investigate AI-detection mechanisms by studying solving patterns in dynamic challenges without inspecting the content itself, for example, response times and the flow of submissions.

  4. In roughly six months (perhaps longer), AI systems have demonstrated progress that no individual human could realistically match in terms of productivity. As a simple example: opening a binary and analyzing it with tools like r2 to produce a coherent answer takes seconds. There is no real competition there.

  5. Many people claim that newcomers will no longer have a chance to learn. If anything, the opposite may happen: those with the right attitude will learn faster. In CTFs, as in many areas of life, perseverance and attitude matter more than anything else.

  6. One must pay attention to how much AI one consumes. It creates dependency, and increasingly so. Do not use drugs you cannot afford.

  7. The use of AI inevitably leads us to delegate total control to it. If the problem is that humans cannot operate at the speed of AI, the “logical” solution is to introduce more AI into the process: let AI perform the triage, let AI validate, let AI patch, let AI deploy. And every step that becomes automated is a step in which humans lose visibility and decision-making capacity. The truly perverse aspect is that it works. It works better and faster than the human process. As a result, every organization that adopts it gains a competitive advantage over those that do not, which forces others to adopt it as well. This normalizes delegation and progressively reduces the human ability to understand what is happening underneath.

  8. The uncomfortable question is not whether we are going to delegate total control to artificial intelligence. It is how much control we have already delegated without consciously deciding to do so.
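The detection idea in point 3 can be sketched without inspecting challenge content at all, using only submission timing. The following is a minimal, illustrative sketch: the `Submission` type, the `flag_suspicious` helper, and both thresholds are hypothetical assumptions, not an existing scoreboard API. The heuristic flags solvers whose inter-submission gaps are implausibly short or implausibly regular.

```python
# Hypothetical sketch of point 3: flag solvers whose submission timing looks
# machine-like, using only response times and submission flow.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Submission:
    solver: str
    timestamp: float  # seconds since the challenge opened

def flag_suspicious(subs, min_interval=30.0, max_cv=0.2):
    """Return solvers whose inter-submission gaps are too fast or too regular.

    min_interval: humans rarely solve distinct challenges seconds apart.
    max_cv: a near-zero coefficient of variation suggests a scripted cadence.
    Both thresholds are arbitrary placeholders, not calibrated values.
    """
    by_solver = {}
    for s in sorted(subs, key=lambda s: s.timestamp):
        by_solver.setdefault(s.solver, []).append(s.timestamp)

    suspicious = set()
    for solver, times in by_solver.items():
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) < 2:
            continue  # not enough signal to judge
        avg = mean(gaps)
        cv = pstdev(gaps) / avg if avg > 0 else 0.0
        if avg < min_interval or cv < max_cv:
            suspicious.add(solver)
    return suspicious

subs = [
    Submission("bot", 10), Submission("bot", 22), Submission("bot", 34),
    Submission("human", 300), Submission("human", 1400), Submission("human", 4100),
]
print(flag_suspicious(subs))  # the "bot" solver: ~12 s gaps, near-zero variance
```

In practice this would be one weak signal among several; the point is only that the timing and flow data already exist on any dynamic scoreboard, so no challenge content needs to be inspected.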
