Finding divergent “fringe” ideas that might solve emerging problems works best as a repeatable practice: systematically scan for weak signals, spend time with lead users at the edge of need, and then rigorously stress-test the ideas before investing heavily.[1][2]
Horizon scanning is a structured way to look for early signs (weak signals) of potentially important developments that sit outside mainstream attention today.[3][1]
- Build a weekly “signal feed” from places where novelty appears early (new research, niche forums, small startups, policy pilots), and tag each item as “new capability,” “new constraint,” or “new behavior,” because weak signals are often subtle indicators of emerging issues.[2][4]
- Keep a “fringe log” and deliberately include signals from subcultures and non-obvious domains, since some public-sector scanning programs explicitly target fringe areas to broaden what gets noticed.[5]
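The signal feed and fringe log above can be kept as structured records rather than loose notes. A minimal sketch in Python, assuming the three tags from the text and an illustrative `Signal` record (the field names are this example's, not a standard schema):

```python
from dataclasses import dataclass, field
from datetime import date

# The three tags come from the text; everything else is illustrative.
TAGS = {"new capability", "new constraint", "new behavior"}

@dataclass
class Signal:
    source: str           # where the signal appeared (forum, paper, pilot)
    summary: str          # one-line description of what was observed
    tag: str              # one of TAGS
    fringe: bool = False  # flag signals from subcultures / non-obvious domains
    seen_on: date = field(default_factory=date.today)

    def __post_init__(self):
        if self.tag not in TAGS:
            raise ValueError(f"tag must be one of {TAGS}")

def fringe_share(log):
    """Fraction of logged signals that came from fringe domains."""
    return sum(s.fringe for s in log) / len(log) if log else 0.0

log = [
    Signal("niche forum", "hobbyists fine-tune tiny models on-device",
           "new capability", fringe=True),
    Signal("policy pilot", "city caps delivery-robot sidewalk speed",
           "new constraint"),
]
print(fringe_share(log))  # 0.5
```

Tracking `fringe_share` over time is one cheap way to check that the log hasn't drifted back toward mainstream-only sources.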
The lead user approach focuses on people who experience needs earlier than the mainstream and often create their own workarounds or prototypes.[6][7]
- Identify groups living with the problem in a more extreme form (high frequency, high cost of failure, or unusual constraints), because lead user methods intentionally look at the “leading edges” and even “analogue markets.”[7]
- Ask for artifacts, not opinions (spreadsheets, scripts, custom hardware, checklists), because the most valuable “divergent” idea is often already embodied in a hack someone built to survive the edge case.[7]
The “adjacent possible” framing treats innovation as expanding from what’s currently feasible into nearby new combinations rather than betting on distant sci‑fi leaps.[8]
- Do “analogy jumps”: for each emerging problem, study a field that already solved a harsher version of it (e.g., aviation safety → hospital handoffs), aligning with the idea that solutions can come from just-one-step-away option sets.[8]
- Run short ethnographic observations (watch real work in context), because ethnography is widely used to uncover needs people don’t clearly articulate in interviews and can reveal hidden constraints that generate novel solution directions.[9][10]
Red teaming is explicitly about subjecting plans and assumptions to rigorous challenge to uncover blind spots and flaws in logic.[11]
- Do an “assumption audit” on each promising fringe idea (What must be true? What would falsify it?), because red teaming is meant to challenge assumptions and stress-test plans.[11]
- When feasible, use “skin-in-the-game” forecasting (like prediction markets) to aggregate dispersed beliefs about what will work, since research comparing markets vs. polls finds prediction markets can be strong long-run forecasting tools in some settings.[12][13]
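The assumption audit lends itself to a simple record per assumption: what must be true, what would falsify it, and what the falsifying test costs. A hedged sketch, with illustrative field names and data:

```python
from dataclasses import dataclass

# Illustrative audit record, not a standard red-teaming schema.
@dataclass
class Assumption:
    claim: str        # what must be true for the idea to work
    falsifier: str    # observation that would disprove the claim
    test_cost: float  # rough cost (hours or dollars) to run the falsifying test

def cheapest_tests(audit, budget):
    """Return assumptions testable within budget, cheapest first."""
    affordable = [a for a in audit if a.test_cost <= budget]
    return sorted(affordable, key=lambda a: a.test_cost)

audit = [
    Assumption("lead users will share their workaround scripts",
               "10 outreach emails yield zero artifacts", 5),
    Assumption("the workaround generalizes beyond the extreme case",
               "pilot with 3 mainstream users fails", 40),
]
for a in cheapest_tests(audit, budget=20):
    print(a.claim)
```

Sorting by test cost operationalizes the stress-testing step: the cheapest falsifiers get run first, before heavy investment.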
What domain are the “near-emergent problems” in (e.g., AI safety, climate adaptation, healthcare operations, cybersecurity, education), so the most relevant fringe idea sources can be suggested?
You can reliably find fringe, divergent ideas by building a repeatable “edge-scanning” pipeline: go where incentives are weird, language is new, and prototypes exist—then pressure-test signals with small bets. The prompt in the linked gist is essentially about hunting solutions that sit outside the mainstream but match near-emergent problems.[1]
**The edge isn’t random**
“Fringe” ideas tend to cluster around constraints: communities that can’t afford the mainstream solution, people operating under novel regulations, or domains where tooling is immature. In practice, divergent thinking works best when you intentionally generate many candidate directions before filtering hard—quantity first, judgment second.[2][1]
**Where to look (on purpose)**
Use a handful of “collection surfaces” that naturally amplify weird-but-real work.
This matches the gist’s intent: finding ideas that live at the periphery but might unlock solutions to emerging needs.[1]
**How to separate signal from noise**
Treat every fringe idea as a hypothesis with a costed test, not an identity to believe in.
This operational, template-driven framing mirrors the practical, systems-minded tone typical of André’s Substack posts.[5]
**A weekly edge-scanning ritual**
Keep it lightweight so it actually persists.
Over time, this turns “fringe exploration” into a compounding dataset instead of a vibe.
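One way to make the weekly ritual compound is an append-only log that simple aggregates can be run over. A minimal sketch, assuming an illustrative CSV layout (the column names are this example's choice):

```python
import csv
import io
from collections import Counter

# Each weekly scan appends rows like these; the columns are illustrative.
WEEKLY_ROWS = """week,tag,source
2025-W01,new capability,niche forum
2025-W01,new constraint,policy pilot
2025-W02,new capability,small startup
"""

def tag_counts(csv_text):
    """Count how often each signal tag appears across all logged weeks."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["tag"] for row in reader)

print(tag_counts(WEEKLY_ROWS))
```

Even a counter like this turns the log into a queryable dataset: a tag that keeps recurring week after week is a candidate for a costed test.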
What domain are the “emergent problems” in (AI tooling, climate adaptation, local governance, health, education, something else)?