- Confusing default behavior - Default query uses chat completions; `raw` is a hidden subcommand
- Flag overload - 15+ flags mixed together (model, context, mode, domains, dates, effort, media, etc.)
- Model complexity hidden - Four distinct models with different capabilities collapsed into one flag
- Pro Search absent - The new `search_type: pro/auto` feature isn't exposed
- No clear cost/capability signaling - Users don't know if they're using $0.005 or $1.30 per query
| Endpoint | Purpose | Models/Features |
|---|---|---|
| `/chat/completions` | AI-grounded answers | sonar, sonar-pro, sonar-reasoning-pro |
| `/chat/completions` + Pro Search | Multi-step reasoning | sonar-pro with `search_type: pro/auto` |
| `/async/chat/completions` | Deep research | sonar-deep-research (polling) |
| `/search` | Raw ranked results | No AI, just search index |
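A minimal sketch of how the four endpoint choices differ at the request level. Endpoint paths come from the table above; payload field names beyond `model` and `messages` (notably `search_type`) are assumptions that should be checked against Perplexity's current API documentation.

```python
# Sketch: one request builder per capability tier. Field names beyond
# "model" and "messages" are assumptions based on the endpoint table above.
BASE = "https://api.perplexity.ai"

def build_request(kind: str, query: str) -> tuple[str, dict]:
    """Return (url, json_payload) for the chosen capability tier."""
    messages = [{"role": "user", "content": query}]
    if kind == "chat":   # grounded AI answer
        return f"{BASE}/chat/completions", {
            "model": "sonar-pro", "messages": messages}
    if kind == "pro":    # multi-step Pro Search (assumed search_type field)
        return f"{BASE}/chat/completions", {
            "model": "sonar-pro", "messages": messages, "search_type": "pro"}
    if kind == "deep":   # async deep research; result fetched later by polling
        return f"{BASE}/async/chat/completions", {
            "model": "sonar-deep-research", "messages": messages}
    if kind == "raw":    # raw ranked results, no AI synthesis
        return f"{BASE}/search", {"query": query}
    raise ValueError(f"unknown kind: {kind}")
```

The point of the split: the first three differ only in payload, while `raw` talks to a structurally different endpoint, which is why it deserves its own subcommand rather than a flag.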
- packagectl: Verb-oriented by capability (code-grep, semantic-grep, research-package)
- knowctl: Resource + action (list-topics, show-document, semantic-search)
- codectl: Action-oriented (search-text, extract-symbols, find-dependencies)
```
searchctl chat <query>     # Chat completions API (default: sonar-pro)
searchctl search <query>   # Search API (raw results)
searchctl async <query>    # Async API (deep research)
searchctl models           # List available models
```

Options flow naturally from the API:

```
searchctl chat "quantum computing" --model sonar-reasoning-pro --context high
searchctl chat "SEC filings NVDA" --mode sec --domains sec.gov
searchctl search "AI news" --max-results 20 --recency week
searchctl async "market analysis 2025" --effort high
```

Pros: 1:1 API mapping, predictable, easy to document
Cons: "chat" is awkward for agents, "async" is an implementation detail
Organize by reasoning depth and use case:

```
searchctl quick <query>      # sonar (fast, cheap, ~$0.006/query)
searchctl answer <query>     # sonar-pro (balanced, ~$0.01/query)
searchctl reason <query>     # sonar-reasoning-pro (multi-step)
searchctl research <query>   # sonar-deep-research (exhaustive, ~$0.40-1.30/query)
searchctl raw <query>        # Search API (no AI, $0.005/query)
```

Examples:

```
searchctl quick "What is the capital of France?"
searchctl answer "Compare Tesla Model 3 vs Rivian R1T" --context high
searchctl reason "Analyze the trade-offs of microservices vs monolith"
searchctl research "Comprehensive analysis of quantum computing impact on cryptography"
searchctl raw "AI regulations 2025" --max-results 20
```

Pros: Intuitive task mapping, clear cost/capability progression, agent-friendly
Cons: Model names hidden, "quick" naming could be clearer
Models become first-class subcommands:

```
searchctl sonar <query>           # Lightweight search
searchctl sonar-pro <query>       # Advanced search
searchctl sonar-pro <query> --pro # Pro Search (multi-step tools)
searchctl reasoning <query>       # Reasoning model
searchctl deep <query>            # Deep research (async)
searchctl search <query>          # Raw search API
searchctl models                  # List models with descriptions
```

Examples:

```
searchctl sonar "latest news on SpaceX"
searchctl sonar-pro "quantum computing advances" --context high --mode academic
searchctl sonar-pro "compare EV manufacturers 2025" --pro   # enables tool usage
searchctl reasoning "analyze the logic in this argument about AI safety"
searchctl deep "comprehensive report on renewable energy policy" --effort high
searchctl search "python web frameworks" --domains github.com,pypi.org
```

Pros: Explicit model selection, clear capability mapping
Cons: Many subcommands, model names may change
Focus on what agents actually need:

```
searchctl find <query>       # Quick fact lookup (sonar)
searchctl ask <query>        # Q&A with citations (sonar-pro)
searchctl analyze <query>    # Complex reasoning (sonar-reasoning-pro)
searchctl research <query>   # Deep research report (sonar-deep-research)
searchctl sources <query>    # Raw sources only (search API)
searchctl models             # Show model details and pricing
```

Examples:

```
searchctl find "current Bitcoin price"
searchctl ask "How does OAuth2 work?" --context high
searchctl analyze "What are the implications of the EU AI Act for startups?"
searchctl research "State of AI in healthcare 2025" --effort high
searchctl sources "climate change research 2024" --domains nature.com,science.org
```

Pros: Very intuitive, natural language task names, agent-friendly
Cons: Model mapping not obvious to API-familiar users
Combines intent-based naming with Perplexity's own terminology:

```
searchctl quick-search <query>    # → sonar (fast, cheap)
searchctl web-search <query>      # → sonar-pro (grounded Q&A)
searchctl pro-search <query>      # → sonar-pro + search_type: pro (multi-step tools)
searchctl reason-search <query>   # → sonar-reasoning-pro
searchctl deep-research <query>   # → sonar-deep-research (async, exhaustive)
searchctl raw-search <query>      # → search API (no AI, just results)
searchctl list-models             # → show models with pricing
```
Why this naming:
- `web-search` vs `pro-search` maps to Perplexity's own terminology
- `deep-research` matches what Perplexity calls the model
- Consistent `*-search` suffix groups commands visually
- `raw-search` clarifies you're still searching, just without AI synthesis
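The recommended command set reduces to a small dispatch table. A sketch, using the endpoint and model names from the tables above; the `search_type` field for `pro-search` is an assumption based on the Pro Search discussion:

```python
# Dispatch table for the recommended hybrid naming. Each command maps to
# (endpoint, model, extra request fields). "search_type" is an assumed
# field name; raw-search has no model because it hits the bare search index.
COMMANDS = {
    "quick-search":  ("/chat/completions",       "sonar",               {}),
    "web-search":    ("/chat/completions",       "sonar-pro",           {}),
    "pro-search":    ("/chat/completions",       "sonar-pro",           {"search_type": "pro"}),
    "reason-search": ("/chat/completions",       "sonar-reasoning-pro", {}),
    "deep-research": ("/async/chat/completions", "sonar-deep-research", {}),
    "raw-search":    ("/search",                 None,                  {}),
}
```

Keeping this mapping in one table also makes the inevitable model renames a one-line change, which addresses the "model names may change" concern from the model-based option.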
```
# Quick fact lookup (sonar, ~$0.006/query)
searchctl quick-search "What is the capital of France?"

# Standard grounded Q&A (sonar-pro, ~$0.01/query)
searchctl web-search "Compare React vs Vue for large applications" --context high

# Multi-step tool usage for complex queries (sonar-pro + Pro Search)
searchctl pro-search "Analyze Tesla's Q3 2024 earnings vs competitors"

# Academic research with reasoning (sonar-reasoning-pro)
searchctl reason-search "What are the logical implications of Gödel's incompleteness theorems?"

# Exhaustive research report (sonar-deep-research, ~$0.40-1.30/query)
searchctl deep-research "Comprehensive analysis of quantum computing impact on cryptography" --effort high

# Raw sources without AI synthesis (search API, ~$0.005/query)
searchctl raw-search "climate change research 2024" --domains nature.com,science.org --max-results 20
```

Shared flags:

```
--context low|medium|high      # Search context size (cost/depth tradeoff)
--mode web|academic|sec        # Search mode (general, scholarly, SEC filings)
--domains example.com,...      # Allowlist domains
--exclude reddit.com,...       # Denylist domains
--recency day|week|month|year  # Time filter
--after MM/DD/YYYY             # Published after date
--before MM/DD/YYYY            # Published before date
--country US|GB|DE|...         # Geographic filter (raw-search only)
--language en,fr,de            # Language filter (raw-search only)
```
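A sketch of how the shared flags could translate into request-body parameters. The parameter names `search_domain_filter`, `search_recency_filter`, and `web_search_options.search_context_size` follow Perplexity's documented chat-completions parameters but should be verified against current docs; the date-filter field names and the leading-`-` denylist convention are assumptions.

```python
# Sketch: translate parsed CLI flags into API request-body fields.
# Field names marked "assumed" below are hypothetical and must be
# verified against Perplexity's current API reference.
def flags_to_params(args: dict) -> dict:
    params: dict = {}
    if ctx := args.get("context"):
        params["web_search_options"] = {"search_context_size": ctx}
    domains = list(args.get("domains", []))
    # Assumed convention: denylisted domains carry a leading "-" in the same list.
    domains += [f"-{d}" for d in args.get("exclude", [])]
    if domains:
        params["search_domain_filter"] = domains
    if rec := args.get("recency"):
        params["search_recency_filter"] = rec
    if after := args.get("after"):        # MM/DD/YYYY; assumed field name
        params["search_after_date_filter"] = after
    if before := args.get("before"):      # assumed field name
        params["search_before_date_filter"] = before
    return params
```

Building only the fields the user actually set keeps the request minimal and lets the API apply its own defaults for everything else.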
Consistent output format across all commands:

```
query: "your question"
model: sonar-pro
search_type: pro          # only for pro-search
answer: "The synthesized response..."
citations:
  - title: "Source Title"
    url: "https://example.com/article"
    date: "2024-03-15"
usage:
  input_tokens: 150
  output_tokens: 500
  search_context_size: medium
  estimated_cost: "$0.012"
```

For raw-search:
```
query: "your query"
results:
  - title: "Result Title"
    url: "https://example.com"
    snippet: "Relevant excerpt..."
    date: "2024-03-15"
    last_updated: "2024-09-20"
```

| Command | Model | Approx. Cost |
|---|---|---|
| `quick-search` | sonar | ~$0.006/query |
| `web-search` | sonar-pro | ~$0.01-0.015/query |
| `pro-search` | sonar-pro + tools | ~$0.015-0.025/query |
| `reason-search` | sonar-reasoning-pro | ~$0.01-0.015/query |
| `deep-research` | sonar-deep-research | ~$0.40-1.30/query |
| `raw-search` | search API | $5/1K requests |
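The `estimated_cost` output field can be derived from token usage. A sketch, with per-token rates that are placeholders roughly consistent with the per-query ranges above, not official pricing:

```python
# Sketch: derive the "estimated_cost" output field from usage counts.
# RATES holds (USD per input token, USD per output token). These numbers
# are illustrative placeholders, NOT Perplexity's published pricing.
RATES = {
    "sonar":               (1e-6,  1e-6),
    "sonar-pro":           (3e-6,  15e-6),
    "sonar-reasoning-pro": (2e-6,  8e-6),
    "sonar-deep-research": (2e-6,  8e-6),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int,
                  request_fee: float = 0.0) -> str:
    """Format a dollar estimate like the estimated_cost field above.

    request_fee covers flat per-request charges (e.g. search or tool fees).
    """
    in_rate, out_rate = RATES[model]
    total = input_tokens * in_rate + output_tokens * out_rate + request_fee
    return f"${total:.3f}"
```

Surfacing this estimate on every response is what closes the "no cost signaling" gap called out at the top: an agent can budget per call without consulting a pricing page.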