Why Most Deep-Research Reports Stop at 20–30 Pages

Introduction

Problem: AI cuts drafting time but often expands document length.

Outcome: end-to-end time does not fall because humans must read, verify, and coordinate around more pages.

Constraint: the decision path is gated by human throughput and meeting calendars, not by text-generation speed.

Objective: model total time-to-decision, show where breakeven fails as pages grow, and state operating rules that preserve time savings.

Deliverables: closed-form time model, worked examples, a reading-speed sensitivity table, and a practical page-budget rule.


Definitions and model

Let $N$ = pages in the decision document, $w$ = minutes to write one page, $r$ = minutes to read/verify one page, $c$ = minutes of coordination/review per page, $f$ = fixed setup time.

$$ T(N) = f + N(w + r + c) $$

AI mainly reduces $w$. If teams increase $N$, the $N(r+c)$ term dominates and cancels drafting gains.
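
As a quick illustration, here is a minimal Python sketch of the model. The function name is just a label for this sketch; the parameter values are the ones used in worked examples A and B below.

```python
# Minimal sketch of T(N) = f + N*(w + r + c).
# Names mirror the symbols above; values are those of worked examples A and B below.

def time_to_decision(pages, write, read, coord, fixed=0.0):
    """Total minutes: fixed setup plus per-page writing, reading, and coordination."""
    return fixed + pages * (write + read + coord)

human         = time_to_decision(pages=20, write=30, read=3, coord=1)  # 680 min
ai_same_pages = time_to_decision(pages=20, write=2,  read=3, coord=1)  # 120 min
ai_inflated   = time_to_decision(pages=80, write=2,  read=3, coord=1)  # 480 min
print(human, ai_same_pages, ai_inflated)
```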


Core idea (human-in-the-loop)

Human-in-the-loop cost = reading/verification + coordination.

Reading/verification:

$$ r = r_{\text{read}} + r_{\text{check}} + r_{\text{cross}} + r_{\text{risk}} $$

  • $r_{\text{read}}$: straight reading.
  • $r_{\text{check}}$: source/citation checks, recalcs.
  • $r_{\text{cross}}$: figure/table cross-references.
  • $r_{\text{risk}}$: legal/compliance/risk scan.

Coordination/review:

$$ c = c_{\text{mark}} + c_{\text{adj}} + c_{\text{merge}} + c_{\text{mtg}} $$

  • $c_{\text{mark}}$: comments/redlines.
  • $c_{\text{adj}}$: adjudicating conflicts.
  • $c_{\text{merge}}$: version merges, doc hygiene.
  • $c_{\text{mtg}}$: calendar time in reviews.

Escalation with length $N$ and reviewers $k$ (stylized):

$$ c(N,k) \approx c_0 + \alpha N + \beta k + \gamma N(k-1),\quad \gamma>0 $$

Reading sits on the critical path and is only partially parallelizable. Once $r+c$ dominates, page growth drives latency regardless of drafting speed.
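
A minimal sketch of the stylized escalation follows; the coefficient values ($c_0$, $\alpha$, $\beta$, $\gamma$) are illustrative assumptions, not estimates from the text, chosen only to make the cross term visible.

```python
# Sketch of the stylized escalation c(N, k) ~ c0 + alpha*N + beta*k + gamma*N*(k-1).
# Coefficients below are placeholder assumptions, not estimates.

def coordination_minutes(pages, reviewers, c0=10.0, alpha=0.5, beta=15.0, gamma=0.2):
    return c0 + alpha * pages + beta * reviewers + gamma * pages * (reviewers - 1)

for pages in (20, 80):
    for reviewers in (2, 5):
        print(pages, reviewers, coordination_minutes(pages, reviewers))
# With gamma > 0, going from 20 to 80 pages costs far more at 5 reviewers
# than at 2, because the N*(k-1) cross term scales with both.
```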


Break-even pages when switching to AI

Human baseline: $T_H = f_H + N_H(w_H+r+c)$. AI case: $T_A = f_A + N_A(w_A+r+c)$.

Setting $T_A \le T_H$ and solving for $N_A$ gives the breakeven page count:

$$ N_A^{*} = \frac{N_H (w_H+r+c)+f_H-f_A}{ w_A+r+c} $$

If $f_A \approx f_H$:

$$ N_A^{*} = \frac{N_H (w_H+r+c)}{w_A+r+c} $$

If $N_A \le N_A^{\ast}$, AI saves time. If $N_A > N_A^{\ast}$, AI loses time. As $r+c$ grows relative to $w_H$, $N_A^{*}\to N_H$.
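
The breakeven test is a one-liner; the sketch below assumes $f_A \approx f_H$ (so the fixed terms cancel) and plugs in the rates from example D below.

```python
# Breakeven AI page count, assuming f_A ~ f_H so the fixed terms cancel.

def breakeven_ai_pages(n_h, w_h, w_a, r, c):
    """Largest N_A for which T_A <= T_H."""
    return n_h * (w_h + r + c) / (w_a + r + c)

# Rates from example D below: 2 human pages, 55 vs 1 min/page writing,
# 5 min/page reading, no coordination.
print(breakeven_ai_pages(n_h=2, w_h=55, w_a=1, r=5, c=0))  # 20.0 pages
```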


Worked examples

A — Same pages, faster writing. Parameters: $N=20$, $w_H=30$, $w_A=2$, $r=3$, $c=1$.

$$ T_H = 20(30+3+1)=680\ \text{min},\qquad T_A = 20(2+3+1)=120\ \text{min} $$

B — Output inflation. Same per-page rates, but AI produces $N_A=80$ pages.

$$ T_A = 80(2+3+1)=480\ \text{min} $$

C — Coordination scales with size. At 80 pages, coordination rises to $c=4$ min/page.

$$ T_A = 80(2+3+4)=720\ \text{min} $$

D — 2-page vs 20-page illustration. Rates: $r=5$ min/page, $w_H=55$ min/page, $w_A=1$ min/page, $c=0$.

$$ \text{Human (2 pages)}: 2\cdot55 + 2\cdot5 = 120\ \text{min} $$

$$ \text{AI (20 pages)}: 20\cdot1 + 20\cdot5 = 120\ \text{min} $$

Writing time per page fell by $54/55$ ($\approx 98.18\%$), but total time is unchanged because reading time scaled with $N$.
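
The arithmetic in examples A–D can be checked with a few asserts against $T(N) = N(w+r+c)$, with setup time omitted as in the examples.

```python
# Checks of examples A-D against T(N) = N*(w + r + c), setup time omitted.
T = lambda n, w, r, c: n * (w + r + c)

assert T(20, 30, 3, 1) == 680                    # A: human baseline
assert T(20, 2, 3, 1) == 120                     # A: AI, same page count
assert T(80, 2, 3, 1) == 480                     # B: output inflation
assert T(80, 2, 3, 4) == 720                     # C: coordination rises to c=4
assert T(2, 55, 5, 0) == T(20, 1, 5, 0) == 120   # D: 2 human pages vs 20 AI pages
```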


Sensitivity: reading speed $\rightarrow$ breakeven AI pages

Assumptions: $w_H=55$ min/page, $N_H=2$, $w_A=1$ min/page, $c=0$, $f_A \approx f_H$.

$$ T_H = 2(55+r), \qquad N_A^{*} = \frac{2(55+r)}{1+r} $$

| Read (sec/page) | Read (min/page) | Human total time $T_H$ (min) | Breakeven AI pages $N_A^{*}$ |
|---:|---:|---:|---:|
| 30 | 0.50 | 111.00 | 74.0000 |
| 45 | 0.75 | 111.50 | 63.7143 |
| 60 | 1.00 | 112.00 | 56.0000 |
| 75 | 1.25 | 112.50 | 50.0000 |
| 90 | 1.50 | 113.00 | 45.2000 |
| 105 | 1.75 | 113.50 | 41.2727 |
| 120 | 2.00 | 114.00 | 38.0000 |
| 135 | 2.25 | 114.50 | 35.2308 |
| 150 | 2.50 | 115.00 | 32.8571 |
| 165 | 2.75 | 115.50 | 30.8000 |
| 180 | 3.00 | 116.00 | 29.0000 |
| 195 | 3.25 | 116.50 | 27.4118 |
| 210 | 3.50 | 117.00 | 26.0000 |
| 225 | 3.75 | 117.50 | 24.7368 |
| 240 | 4.00 | 118.00 | 23.6000 |
| 255 | 4.25 | 118.50 | 22.5714 |
| 270 | 4.50 | 119.00 | 21.6364 |
| 285 | 4.75 | 119.50 | 20.7826 |
| 300 | 5.00 | 120.00 | 20.0000 |

Interpretation: faster readers can tolerate a larger AI page count before time savings vanish; at 5 min/page, breakeven is 20 pages.
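
A short sketch that regenerates the table from the two formulas above:

```python
# Regenerates the table: T_H = 2*(55 + r) and N_A* = 2*(55 + r)/(1 + r)
# for reading speeds from 30 to 300 seconds per page.
for read_sec in range(30, 301, 15):
    r = read_sec / 60.0              # minutes per page
    t_h = 2 * (55 + r)               # human total time, minutes
    n_star = t_h / (1 + r)           # breakeven AI pages
    print(f"{read_sec:3d} s/page  T_H = {t_h:6.2f} min  N_A* = {n_star:7.4f}")
```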


Why the 20–30 page cap persists

Several forces hold the cap in place:

  • Reviewer capacity within typical decision windows.
  • Coordination that grows with both page count and reviewer count.
  • Decision density: only a few propositions actually drive the decision.
  • Separation of layers: a short decision document backed by detailed appendices.

Page growth increases $N(r+c)$; returns collapse once $r+c$ dominates.


Operating rules

  • Fix the decision-layer length.
  • Split the evidence vault from the decision doc.
  • Enforce per-page value.
  • Parallelize verification.
  • Gate page expansions with the breakeven test $N_A^{*}=\dfrac{N_H(w_H+r+c)}{w_A+r+c}$.
