| id | name | version | summary | tags | applicable_modes | prerequisites | variables | verified |
|---|---|---|---|---|---|---|---|---|
| deglaze | Deglaze - Anti-Sycophancy Techniques | 1.0.0 | Techniques for cutting through AI polish to find actual substance. Apply constraint pressure to ideas. | | | | | true |
A skill for cutting through AI polish to find actual substance
LLMs are alignment machines, not truth machines. They optimize for:
- Helpfulness and coherence
- Continuing the user's line of thought
- Reducing friction
- Avoiding "this idea is bad"
This creates glaze — the illusion of quality through fluency:
"The model expanded my idea beautifully → therefore the idea must be good."
Reality:
- The model projected structure onto emptiness
- It substituted syntactic plausibility for semantic soundness
- It rewarded confidence and verbosity, not correctness
| What It Looks Like | What It Actually Is |
|---|---|
| Fluent explanation | Not necessarily insight |
| Detailed plan | Not necessarily feasible |
| Confident tone | Not necessarily correct |
| Expanded idea | Not necessarily good idea |
| Professional packaging | Not necessarily substance |
Experienced practitioners unconsciously apply constraint pressure — questions that force ideas into hard edges:
- "Where does state actually live?"
  - Forces ownership clarity
  - Exposes implicit coupling
  - Reveals lifecycle ambiguity
- "What breaks under concurrency?"
  - Exposes race conditions
  - Reveals ordering assumptions
  - Tests real-world behavior
- "What is the failure mode?"
  - Forces error handling design
  - Exposes blast radius
  - Tests operational readiness
- "What happens in prod, not the demo?"
  - Exposes happy-path thinking
  - Forces scale considerations
  - Tests monitoring/debugging story
- "What do we delete?"
  - Forces essential vs. accidental complexity
  - Exposes scope creep
  - Tests whether complexity is load-bearing
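The concurrency question has a concrete shape. Below is a minimal, deterministic Python sketch of the lost-update race it probes for; the `Counter` class and the hand-interleaved function are hypothetical, invented purely for illustration:

```python
# Lost-update sketch: two writers both read the current value
# before either writes back, so one increment silently vanishes.

class Counter:
    def __init__(self):
        self.value = 0

def lost_update(counter):
    read_a = counter.value       # writer A reads 0
    read_b = counter.value       # writer B reads 0
    counter.value = read_a + 1   # writer A writes 1
    counter.value = read_b + 1   # writer B overwrites with 1: A's update is lost

c = Counter()
lost_update(c)
print(c.value)  # 1, not 2
```

Real races are timing-dependent; the sketch hand-interleaves the reads and writes so the failure is reproducible. Under ordinary sequential execution the bug never fires, which is exactly what "works in the demo" looks like.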
**The Compression Test**
Method: Ask for a one-sentence version.
"Can you explain this in one sentence without using jargon?"
What it reveals:
- Whether the core idea is clear
- Whether complexity is essential or decorative
- Whether the person understands their own proposal
Red flags:
- Can't compress without losing meaning
- Compression reveals circular logic
- One sentence is actually three sentences
**The Deletion Test**
Method: Systematically question each component.
"What happens if we remove [X]? Does this still work?"
What it reveals:
- Load-bearing vs. decorative components
- Accidental complexity
- Scope creep
Red flags:
- "We might need it later" (YAGNI violation)
- "It makes the architecture cleaner" (aesthetic, not functional)
- Can't explain what breaks without it
**Failure Enumeration**
Method: Enumerate failure scenarios.
"Walk me through what happens when [component] fails."
What it reveals:
- Error handling completeness
- Blast radius awareness
- Operational maturity
Red flags:
- "That won't happen" (famous last words)
- No monitoring/alerting story
- Failure cascades to unknown scope
**The Prior Art Check**
Method: Ask about existing solutions.
"Has this been solved before? Why is our approach different?"
What it reveals:
- Whether this is innovation or re-derivation
- Awareness of the problem space
- Justification for custom solutions
Red flags:
- Unaware of existing solutions
- "Our situation is unique" without evidence
- Re-deriving something solved in 2016
**The Assumption Audit**
Method: List and validate assumptions.
"What assumptions are we making? Which ones have we validated?"
What it reveals:
- Hidden dependencies
- Untested hypotheses
- Risk concentration
Red flags:
- "We're not making assumptions" (everyone is)
- Critical assumptions unvalidated
- Assumptions contradict each other
**State Tracing**
Method: Trace data through the system.
"Who owns this data? What's its lifecycle? Who can modify it?"
What it reveals:
- Coupling and cohesion issues
- Consistency guarantees (or lack thereof)
- Authorization model clarity
Red flags:
- "Multiple services can update it" (consistency nightmare)
- No clear owner
- Lifecycle undefined
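To make the ownership question concrete, here is a small Python sketch of what "multiple services can update it" costs, and one way a single owner restores consistency. The order record, module names, and status values are all invented for illustration:

```python
# Without a clear owner, two modules write contradictory values to the same field.

order = {"status": "new"}

def billing_cancel(order):       # billing believes payment failed
    order["status"] = "cancelled"

def shipping_dispatch(order):    # shipping, unaware, ships it anyway
    order["status"] = "shipped"

billing_cancel(order)
shipping_dispatch(order)
print(order["status"])  # "shipped" -- the cancellation was silently overwritten

# One possible answer: a single owner that validates every transition.
ALLOWED = {"new": {"cancelled", "shipped"}, "cancelled": set(), "shipped": set()}

def transition(order, new_status):
    if new_status not in ALLOWED[order["status"]]:
        raise ValueError(f"illegal transition {order['status']} -> {new_status}")
    order["status"] = new_status

owned = {"status": "new"}
transition(owned, "cancelled")
try:
    transition(owned, "shipped")  # the owner rejects shipping a cancelled order
except ValueError:
    print("blocked")
```

The explicit transition table is the answer to "who can modify it": every legal lifecycle step is written down, and anything else is an error instead of a silent overwrite.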
**The Production Test**
Method: Shift from demo to prod mindset.
"This works locally. What's different in production?"
What it reveals:
- Scale considerations
- Operational requirements
- Environment assumptions
Red flags:
- "It's the same" (it never is)
- No deployment story
- No debugging/monitoring story
**Symptom:** "Event-driven microservices mesh with AI agents and real-time sync"
**Deglaze:** "What's the source of truth? What happens when events arrive out of order?"

**Symptom:** "Let's build a framework that handles all cases"
**Deglaze:** "What are the actual cases we need today? What's the cost of adding more later?"

**Symptom:** "It works perfectly in my tests"
**Deglaze:** "What's different between your tests and production? What's not tested?"

**Symptom:** "The AI/expert said this architecture is solid"
**Deglaze:** "What specific constraints did they validate? What failure modes did they consider?"

**Symptom:** "This is a sophisticated solution"
**Deglaze:** "What's the simplest version that would work? Why do we need the sophistication?"

**Symptom:** Long, detailed plans with no hard edges
**Deglaze:** "What are the three most important decisions here? What are we NOT doing?"
Before accepting any design, plan, or idea:
- Compression: Can it be explained in one sentence?
- Deletion: Have removable components been identified?
- Failure: Have failure modes been enumerated?
- Prior Art: Have existing solutions been checked?
- Assumptions: Have assumptions been listed and validated?
- State: Are ownership and lifecycle clear?
- Production: Have real-world conditions been considered?
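As a sketch, the checklist can be an explicit gate rather than a vibe. This minimal Python version (the check names mirror the checklist items; the function name is invented) reports which checks are still outstanding before a design is accepted:

```python
# Explicit acceptance gate: a design passes only when no checks remain.

CHECKLIST = ["compression", "deletion", "failure", "prior_art",
             "assumptions", "state", "production"]

def outstanding(passed):
    """Return checks not yet done, in checklist order; empty means accept."""
    return [check for check in CHECKLIST if check not in passed]

print(outstanding({"compression", "deletion", "failure"}))
# ['prior_art', 'assumptions', 'state', 'production']
```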
Apply deglaze to:
- Architecture decisions
- Designs from AI assistants
- Plans that feel "too smooth"
- Scope expansions
- Major refactoring proposals
- Feature specifications
- Integration designs
- Performance optimization plans
Skip deglaze for:
- Trivial changes (<10 lines)
- Well-established patterns
- Incremental improvements
AI doesn't make bad ideas good. It makes bad ideas articulate.
AI doesn't replace judgment. It replaces the pain that used to teach judgment.
Deglaze is about restoring the feedback loop that fluent AI output bypasses.
The people who learn to apply constraint pressure — to their own ideas and others' — are the ones who build things that actually work.
- Before Implementation: Apply deglaze to catch design issues early
- During Debugging: Use failure mode techniques to find root causes
- In Design Mode: Apply compression and deletion tests
- During Research: Use prior art checks to avoid re-derivation
When applying deglaze techniques:
- Store validated assumptions in memory
- Track which failure modes were considered
- Record deletion decisions and rationale
- Build institutional knowledge of common glaze patterns
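A minimal sketch of such a record, assuming nothing about the memory store itself; the field names below are illustrative, not a prescribed schema:

```python
# One record per deglazed proposal, so validated assumptions, considered
# failure modes, and deletion rationale become institutional knowledge.
from dataclasses import dataclass, field

@dataclass
class DeglazeRecord:
    proposal: str
    one_sentence_summary: str                        # compression test output
    validated_assumptions: list = field(default_factory=list)
    failure_modes_considered: list = field(default_factory=list)
    deletions: dict = field(default_factory=dict)    # component -> rationale

record = DeglazeRecord(
    proposal="Event bus for order updates",
    one_sentence_summary="Orders publish status changes; one consumer per side effect.",
)
record.validated_assumptions.append("Consumers tolerate duplicate events")
record.failure_modes_considered.append("Broker down: producers buffer locally")
record.deletions["real-time dashboard"] = "No user asked for it; adds a second consumer path"
```

Whatever store holds these records, keeping deletions alongside their rationale is what lets a later team answer "why isn't X in here?" without re-deriving the decision.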