Proposed multi-provider capabilities and pricing API. Blog post: https://paolino.me/standard-api-llm-capabilities-pricing/
```yaml
# This is a YAML file so I can have comments, but the API should obviously return an array of models in JSON.
# Legend:
#   Required: this is important to have in v1.
#   Optional: this is still important but can wait for v2.
id: gpt-4.5-preview          # Required, will match it with the OpenAI API
name: GPT-4.5 Preview        # Required
provider: openai             # Required
family: gpt45                # Optional, each model page is a family for OpenAI models
context_window: 128000       # Required
max_output_tokens: 16384     # Required
knowledge_cutoff: 20231001   # Optional
modalities:
  input:                     # Sort arrays alphabetically to make diffs consistent
    - text
    - image
    - audio
  output:
    - text
    - image
    - audio
    - embeddings
    - moderation
capabilities:
  - streaming                # Optional
  - function_calling         # Required
  - structured_output        # Required
  - predicted_outputs        # Optional
  - distillation             # Optional
  - fine_tuning              # Optional
  - batch                    # Required
  - realtime                 # Optional
  - image_generation         # Required
  - speech_generation        # Required
  - transcription            # Required
  - translation              # Optional
  - citations                # Optional - from Anthropic
  - reasoning                # Optional - called Extended Thinking in Anthropic's lingo
pricing:
  text_tokens:
    standard:
      input_per_million: 75.0          # Required
      cached_input_per_million: 37.5   # Required
      output_per_million: 150.0        # Required
      reasoning_output_per_million: 0  # Optional
    batch:
      input_per_million: 37.5          # Required
      output_per_million: 75.0         # Required
  images:
    standard:
      input: 0.0                       # Optional
      output: 0.0                      # Optional
    batch:
      input: 0.0                       # Optional
      output: 0.0                      # Optional
  audio_tokens:
    standard:
      input_per_million: 0.0           # Optional
      output_per_million: 0.0          # Optional
    batch:
      input_per_million: 0.0           # Optional
      output_per_million: 0.0          # Optional
  embeddings:
    standard:
      input_per_million: 0.0           # Required
    batch:
      input_per_million: 0.0           # Required
```
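A minimal sketch of how a client might consume a model record in this schema: check a capability flag and estimate the cost of a text request from the `pricing` block. The field names follow the YAML above; the `MODEL` dict is an abbreviated copy of it, and the helper functions (`supports`, `text_cost`) are hypothetical names, not part of the proposal.

```python
from typing import Any

# Abbreviated model record in the proposed schema (values from the YAML above).
MODEL: dict[str, Any] = {
    "id": "gpt-4.5-preview",
    "provider": "openai",
    "capabilities": ["function_calling", "structured_output", "batch"],
    "pricing": {
        "text_tokens": {
            "standard": {
                "input_per_million": 75.0,
                "cached_input_per_million": 37.5,
                "output_per_million": 150.0,
            },
            "batch": {"input_per_million": 37.5, "output_per_million": 75.0},
        },
    },
}


def supports(model: dict[str, Any], capability: str) -> bool:
    """True if the model lists the given capability."""
    return capability in model.get("capabilities", [])


def text_cost(
    model: dict[str, Any],
    input_tokens: int,
    output_tokens: int,
    cached_tokens: int = 0,
    tier: str = "standard",
) -> float:
    """Estimate USD cost of a text request under the given pricing tier."""
    rates = model["pricing"]["text_tokens"][tier]
    # Fall back to the plain input rate when no cached rate is published.
    cached_rate = rates.get("cached_input_per_million", rates["input_per_million"])
    uncached = input_tokens - cached_tokens
    return (
        uncached * rates["input_per_million"]
        + cached_tokens * cached_rate
        + output_tokens * rates["output_per_million"]
    ) / 1_000_000


print(supports(MODEL, "function_calling"))          # True
print(text_cost(MODEL, input_tokens=10_000, output_tokens=2_000))  # 1.05
```

Because every provider would publish the same shape, the same two helpers work unchanged across providers; a client only needs to pick the right tier (`standard` vs `batch`) and token category.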
@crmne Any updates on this? That would be a great resource for our LLM client/framework for Smalltalk.
The spec looks very helpful! Specifically for OpenAI, though perhaps generalizable, the following addition would be useful but not required: a way to express aliases (`gpt-realtime` currently points to `gpt-realtime-2025-08-28`). It would be great to hear whether this idea has made any progress, or how I could maybe contribute!