```json
{
  "amp.model.sonnet": true
}
```

```json
{
  "custom_models": [
    {
      "model_display_name": "Claude Opus 4.5 [Proxy]",
      "model": "claude-opus-4-5-20251101",
      "base_url": "http://localhost:8317",
      "api_key": "dummy-not-used",
      "provider": "anthropic"
    },
    {
```
I use a bare Git repository approach with Git worktrees extensively. Every subdirectory in my projects represents a different git branch as a worktree.
When you see a project like /home/code/projects/my-app/:
- `my-app/` = project container (NOT a working tree)
- `my-app/.bare/` = actual Git repository database
- `my-app/.git` = pointer file directing Git commands to `.bare/`
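The layout above can be reproduced with a few commands. This is a self-contained sketch: the `seed` repo stands in for a real remote, and the branch names are illustrative, not from the original setup.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Work in a throwaway directory; create a stand-in for the remote repo
tmp="$(mktemp -d)"
cd "$tmp"
git init -q --initial-branch=main seed
(cd seed && git -c user.email=a@b -c user.name=t commit -q --allow-empty -m init)

# Project container: .bare/ holds the repo database, .git is a pointer file
mkdir my-app && cd my-app
git clone -q --bare ../seed .bare
echo "gitdir: ./.bare" > .git

# Each subdirectory is a worktree checked out to one branch
git worktree add -q main main
git worktree add -q -b feature-x feature-x
git worktree list
```

From here, `git` commands run inside `my-app/main/` or `my-app/feature-x/` operate on their respective branches, while the container directory itself never has a working tree.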
Anthropic and OpenAI enforce a 5-hour usage window: the clock starts with my first request, and the window expires 5 hours later regardless of how much of it I actually use. This meant:
- ❌ Unpredictable availability - the window could expire mid-project or at random times during the work day
- ❌ Frustration when usage windows expired at inopportune times
- ❌ Difficulty keeping a consistent work schedule and planning around usage windows
```typescript
import { OpenAI } from "openai";
import { runGuardrails } from "@openai/guardrails";
import { z } from "zod";
import { Agent, AgentInputItem, Runner } from "@openai/agents";

// Shared client for guardrails and file search
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Guardrails definitions
```
If you are seeing edge-case behavior when starting or restarting opencode - the last-used model not being retained, or opencode "changing to a different model name" - the likely cause is the incorrect recommended setup in the opencode.json config file documented in the plugin repo.
You might try the config below, where the model ID is the key in the models object rather than a user-friendly name.
This fixed the odd behavior for me both at launch and with the /models slash command; it also gives shorter model names and makes it easy to find all models by searching for "oauth".
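The referenced config is not included above, but a sketch of the idea (model IDs as keys in the `models` object) might look like the following. The provider key and display names here are illustrative assumptions - check the plugin repo for the exact values it expects:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex": { "name": "gpt-5-codex (oauth)" },
        "gpt-5": { "name": "gpt-5 (oauth)" }
      }
    }
  }
}
```

With the model ID as the key, opencode persists and restores the same identifier it sends to the API, which is what avoids the "different model name" mismatch on restart.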
● Complete Testing Results - Clarified for Plugin Normalization
GPT-5 Model (via gpt-5-nano)
| reasoningEffort \ reasoningSummary | auto | concise | detailed | none |
|---|---|---|---|---|
| minimal | ✅ Success | ❌ Unsupported | ✅ Success | ❌ Invalid |
| low | ✅ Success | ❌ Unsupported | ✅ Success | ❌ Invalid |
| medium | ✅ Success | ❌ Unsupported | ✅ Success | ❌ Invalid |
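The pattern in the table is uniform: for every effort level, `auto` and `detailed` summaries succeed while `concise` is unsupported and `none` is invalid. That invariant can be captured in a small validation helper. Note that `buildReasoning` is a hypothetical name for illustration, not the plugin's actual API:

```typescript
type Effort = "minimal" | "low" | "medium";
type Summary = "auto" | "concise" | "detailed" | "none";

// Hypothetical helper mirroring the table above: reject the summary
// values that fail for every effort level before building the request.
function buildReasoning(
  effort: Effort,
  summary: Summary
): { effort: Effort; summary: Summary } {
  if (summary === "concise" || summary === "none") {
    throw new Error(`reasoning summary "${summary}" is rejected for GPT-5 models`);
  }
  return { effort, summary };
}

// Any effort level pairs with "auto" or "detailed"
console.log(buildReasoning("low", "detailed"));
```

Failing fast in the plugin's normalization layer turns an opaque upstream API error into an actionable message for the user.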
This document serves as a comprehensive development guide for implementing configurable settings in the opencode-openai-codex-auth plugin. It outlines the configuration settings that will be supported for all GPT-5 family models when accessed through ChatGPT OAuth (subscription-based) authentication.
This plugin provides access to the following models via ChatGPT Plus/Pro subscription:
- `gpt-5-codex` - Optimized for coding tasks with built-in code-aware reasoning
- `gpt-5` - General-purpose reasoning model for complex tasks
```shell
#!/usr/bin/env bash
set -euo pipefail

# ==============================
# Codex Installer Script
# Repository: https://github.com/openai/codex
# ==============================

# Config (overridable via env or flags)
REPO="${REPO:-openai/codex}"
```
GitHub Issue: #2 - Streaming tool calls?
Requested by: @hbmartin
Date: January 2025
The user is requesting support for streaming tool calls in `streamText` and `streamObject`, similar to the functionality planned for the Claude Code provider. This would enable building UIs that show tool calls and results as they happen.