@MajorTal
Created February 26, 2026 09:27
Claude Code /consult skill — send project context to GPT-5.2 Pro for external consultation
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "openai>=1.82.0",
#     "boto3>=1.35.0",
# ]
# ///
"""
Submits a consultation to GPT-5.2 Pro via OpenAI Responses API (background mode).
Polls for the result, writes it to ~/.claude/consult-results/, and notifies the user.
Usage: uv run consult.py <input-file>
"""
import sys
import json
import time
import subprocess
from pathlib import Path
from datetime import datetime
import boto3
import openai
RESULTS_DIR = Path.home() / ".claude" / "consult-results"
SECRET_NAME = "openai-kychee"
AWS_REGION = "us-east-1"
AWS_PROFILE = "kychee"
MODEL = "gpt-5.2-pro"
POLL_INTERVAL = 5 # seconds
def get_openai_key() -> str:
    session = boto3.Session(profile_name=AWS_PROFILE, region_name=AWS_REGION)
    client = session.client("secretsmanager")
    resp = client.get_secret_value(SecretId=SECRET_NAME)
    return resp["SecretString"]
def submit_background(client: openai.OpenAI, prompt: str) -> tuple[str, str]:
    response = client.responses.create(
        model=MODEL,
        input=prompt,
        background=True,
        reasoning={"effort": "xhigh"},
    )
    return response.id, response.status
def poll_until_done(client: openai.OpenAI, response_id: str) -> object:
    while True:
        response = client.responses.retrieve(response_id)
        if response.status not in ("queued", "in_progress"):
            return response
        print(f" Status: {response.status} (polling every {POLL_INTERVAL}s...)")
        time.sleep(POLL_INTERVAL)
def notify(title: str, message: str):
    """Send a macOS notification."""
    try:
        subprocess.run(
            [
                "osascript", "-e",
                f'display notification "{message}" with title "{title}"'
            ],
            check=False,
            capture_output=True,
        )
    except Exception:
        pass  # best-effort
def main():
    if len(sys.argv) < 2:
        print("Usage: uv run consult.py <input-file>")
        sys.exit(1)
    input_file = Path(sys.argv[1])
    if not input_file.exists():
        print(f"Input file not found: {input_file}")
        sys.exit(1)
    prompt = input_file.read_text()
    RESULTS_DIR.mkdir(parents=True, exist_ok=True)

    # Get API key
    print("Fetching OpenAI API key from AWS Secrets Manager...")
    api_key = get_openai_key()
    client = openai.OpenAI(api_key=api_key)

    # Submit in background mode
    print(f"Submitting to {MODEL} (background mode)...")
    response_id, status = submit_background(client, prompt)
    print(f" Response ID: {response_id}")
    print(f" Status: {status}")

    # Save metadata
    timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    meta = {
        "response_id": response_id,
        "model": MODEL,
        "submitted_at": datetime.now().isoformat(),
        "input_file": str(input_file),
        "status": "polling",
    }
    meta_path = RESULTS_DIR / f"{timestamp}-meta.json"
    meta_path.write_text(json.dumps(meta, indent=2))

    # Poll for result
    print("Polling for result...")
    response = poll_until_done(client, response_id)
    print(f" Final status: {response.status}")

    # Extract result
    if response.status == "completed":
        result_text = response.output_text or "(empty response)"
        usage = response.usage
        cost_info = ""
        if usage:
            cost_info = (
                f"\n\n---\n"
                f"**Tokens**: {usage.input_tokens:,} input, "
                f"{usage.output_tokens:,} output, "
                f"{usage.total_tokens:,} total"
            )
    else:
        result_text = f"Consultation failed with status: {response.status}"
        if response.error:
            result_text += f"\nError: {response.error}"
        cost_info = ""

    # Write result
    result_path = RESULTS_DIR / f"{timestamp}-result.md"
    result_path.write_text(
        f"# Consultation Result\n\n"
        f"**Model**: {MODEL}\n"
        f"**Submitted**: {meta['submitted_at']}\n"
        f"**Completed**: {datetime.now().isoformat()}\n"
        f"**Status**: {response.status}\n\n"
        f"---\n\n"
        f"{result_text}"
        f"{cost_info}\n"
    )

    # Update latest symlink
    latest_path = RESULTS_DIR / "latest.md"
    if latest_path.is_symlink() or latest_path.exists():
        latest_path.unlink()
    latest_path.symlink_to(result_path.name)

    # Update meta
    meta["status"] = response.status
    meta["completed_at"] = datetime.now().isoformat()
    meta["result_file"] = str(result_path)
    meta_path.write_text(json.dumps(meta, indent=2))

    print(f"\nResult written to: {result_path}")

    # Notify
    if response.status == "completed":
        notify("consult", "GPT-5.2 Pro consultation complete. Run /consult-results to read it.")
        print("Notification sent. Use /consult-results to read the response.")
    else:
        notify("consult", f"Consultation failed: {response.status}")
        print(f"Consultation failed: {response.status}")

if __name__ == "__main__":
    main()
---
name: consult
description: Send project context and a question to GPT-5.2 Pro for an external consultation. Gathers docs, code, and context automatically, submits via OpenAI Responses API in background mode, and notifies when the response arrives. Use /consult-results to read the response.
user-invocable: true
disable-model-invocation: true
allowed-tools: Read, Glob, Grep, Bash(uv run *)
argument-hint: <your question or request for the consultant>
---

/consult — External Consultation via GPT-5.2 Pro

You are helping the user send a consultation request to GPT-5.2 Pro. Follow these steps exactly:

Step 1: Understand the request

The user's consultation prompt is: $ARGUMENTS

Step 2: Gather project context

Automatically detect and read relevant project files. Use your judgement based on the user's prompt:

  • Always include: README, CLAUDE.md, docs/ directory contents, package.json / pyproject.toml / Cargo.toml (whatever exists)
  • If about security/architecture/design: Include all docs, config files, infrastructure files (terraform, CDK, docker), API route definitions
  • If about code quality/patterns: Include key source files, test files, linting configs
  • If about specific files: The user may mention files — include those plus their imports/dependencies

If the user's prompt references specific files or directories, prioritize those.

Read the files using the Read tool. Concatenate all gathered context into a single block.

Step 3: Bundle and submit

Once you have gathered all relevant context, write it to a temporary file and invoke the consult script.

Create a file at /tmp/consult-input-{timestamp}.md with this format:

# Consultation Request

## Project Summary

{Write a 3-8 sentence summary of the project based on the context files you read. Cover: what the project is, what it does, who it's for, key tech choices, and current state. This gives the consultant essential context before reading the detailed files.}

## User's Question
{the user's prompt from $ARGUMENTS}

## Project Context

### {filename1}

{file contents}


### {filename2}

{file contents}


... (all gathered files)

Then run:

uv run ~/.claude/skills/consult/consult.py /tmp/consult-input-{timestamp}.md

This submits the consultation in background mode and returns immediately.
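The bundling step above can be sketched in Python. `bundle_context` is a hypothetical helper, not part of consult.py — the skill builds this file with the Read tool rather than programmatically — but it illustrates the expected input-file layout:

```python
from datetime import datetime
from pathlib import Path

def bundle_context(question: str, summary: str, files: list[Path]) -> Path:
    """Write a consultation input file in the layout described above.

    Hypothetical helper for illustration only.
    """
    timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    parts = [
        "# Consultation Request\n",
        "## Project Summary\n",
        summary + "\n",
        "## User's Question\n",
        question + "\n",
        "## Project Context\n",
    ]
    for f in files:
        # One "### {filename}" section per gathered file
        parts.append(f"### {f}\n")
        parts.append(f.read_text() + "\n")
    out = Path(f"/tmp/consult-input-{timestamp}.md")
    out.write_text("\n".join(parts))
    return out
```
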

Step 4: Confirm to the user

Tell the user:

  • What context you gathered (list the files)
  • That the consultation has been submitted to GPT-5.2 Pro
  • That they can continue working and use /consult-results when they see the notification that the response has arrived
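Once the notification fires, the result lives on disk. A minimal sketch of reading it back, assuming the `{timestamp}-meta.json` / `{timestamp}-result.md` naming that consult.py writes (`latest_consultation` is a hypothetical reader, not part of the skill):

```python
import json
from pathlib import Path

RESULTS_DIR = Path.home() / ".claude" / "consult-results"

def latest_consultation(results_dir: Path = RESULTS_DIR):
    """Return (meta dict, result markdown) for the newest consultation, or None.

    Sorting by filename works because the files are named with a
    sortable YYYYMMDD-HHMMSS timestamp prefix.
    """
    metas = sorted(results_dir.glob("*-meta.json"))
    if not metas:
        return None
    meta = json.loads(metas[-1].read_text())
    result_file = meta.get("result_file", "")
    body = ""
    if result_file and Path(result_file).exists():
        body = Path(result_file).read_text()
    return meta, body
```
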