@sguzman
Created January 21, 2026 21:49
A quick reference for estimating how fast Codex usage drains your weekly credits based on model family, mode, reasoning effort, and your message pace.

Codex weekly credit drain estimates (local usage)

Use these tables to estimate how many hours of active prompting it would take to drain a weekly allowance, assuming a steady pace.

How to scale these numbers to your setup

These tables assume:

  • Weekly allowance = 1000 credits
  • Pace = 20 messages/hour

To scale to your actual setup:

  • If your weekly cap is W, multiply hours by (W / 1000).
  • If your pace is P messages/hour, multiply hours by (20 / P).
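The two scaling rules above can be combined into a single helper. This is a minimal sketch; the function name `scale_hours` is my own, and it assumes the table values were computed at the stated baseline of 1000 credits/week and 20 messages/hour.

```python
def scale_hours(table_hours: float, W: float, P: float) -> float:
    """Scale a table value (computed at W=1000 credits, P=20 msgs/hour)
    to your actual weekly cap W and message pace P."""
    return table_hours * (W / 1000) * (20 / P)

# Example: the 12.5 h Chat/small figure, rescaled for a
# 600-credit weekly cap at 30 messages/hour -> 5.0 h
print(scale_hours(12.5, W=600, P=30))
```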

A) GPT-5.2-Codex / GPT-5.1-Codex-Max / GPT-5.2

Assumption: local baseline ~5 credits/message

| Mode         | small  | medium | high  | very high |
|--------------|--------|--------|-------|-----------|
| Chat         | 12.5 h | 10.0 h | 5.9 h | 4.0 h     |
| Agent        | 7.8 h  | 6.3 h  | 3.7 h | 2.5 h     |
| Agent (Full) | 6.3 h  | 5.0 h  | 2.9 h | 2.0 h     |

B) GPT-5.1-Codex-Mini

Assumption: local baseline ~1 credit/message

| Mode         | small  | medium | high   | very high |
|--------------|--------|--------|--------|-----------|
| Chat         | 62.5 h | 50.0 h | 29.4 h | 20.0 h    |
| Agent        | 39.1 h | 31.3 h | 18.4 h | 12.5 h    |
| Agent (Full) | 31.3 h | 25.0 h | 14.7 h | 10.0 h    |

Plug-and-play formula

Let:

  • W = your weekly credits cap (from Usage Dashboard)
  • B = base credits/message (5 for big models, 1 for Mini)
  • m = mode multiplier (Chat 1.0 / Agent 1.6 / Agent Full 2.0)
  • r = reasoning multiplier (small 0.8 / medium 1.0 / high 1.7 / very high 2.5)
  • P = messages/hour

Then:

hours_to_drain ≈ W / (B * m * r * P)
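As a quick sanity check, the formula reproduces the table entries above. A minimal sketch (the function name is my own):

```python
def hours_to_drain(W: float, B: float, m: float, r: float, P: float) -> float:
    """Hours of steady prompting to exhaust a weekly credit cap.

    W: weekly credits cap, B: base credits/message,
    m: mode multiplier, r: reasoning multiplier, P: messages/hour.
    """
    return W / (B * m * r * P)

# Table A, Chat/small: B=5, m=1.0, r=0.8 -> 12.5 h
print(hours_to_drain(W=1000, B=5, m=1.0, r=0.8, P=20))
# Table B, Agent (Full)/very high: B=1, m=2.0, r=2.5 -> 10.0 h
print(hours_to_drain(W=1000, B=1, m=2.0, r=2.5, P=20))
```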
