
@nerdalert
Last active February 18, 2026 16:12

Setup

HOST="maas.apps.brent.pcbk.p1.openshiftapps.com"
TOKEN="eyJhbGciOiJSUzI1NiIsImtpZCI6IjVpZ0pFZGs4R0tFWExERnI2Nkg5bFExeWtwWUw5anhTd3M3ZXFqMFlFM1kifQ.eyJhdWQiOlsibWFhcy1kZWZhdWx0LWdhdGV3YXktc2EiXSwiZXhwIjoxNzcyMjU3NDk2LCJpYXQiOjE3NzEzOTM0OTYsImlzcyI6Imh0dHBzOi8vcmgtb2lkYy5zMy51cy1lYXN0LTEuYW1hem9uYXdzLmNvbS8yN3Bxa3F2ZnVxMG8zNXM5NmEwbWEyMnBzbzZjNDcxMyIsImp0aSI6ImZkZTBiYzM2LWE5YjAtNDA0NC1iZDBmLTc1Mzk4NTNkMmQ2YiIsImt1YmVybmV0ZXMuaW8iOnsibmFtZXNwYWNlIjoibWFhcy1kZWZhdWx0LWdhdGV3YXktdGllci1mcmVlIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImNsdXN0ZXItYWRtaW4tYjA5MDY3YTYiLCJ1aWQiOiI3NzEwMjZiYy0xZTkwLTQ2NDctYTllZC1lNjkwNDI0OTE0ZGUifX0sIm5iZiI6MTc3MTM5MzQ5Niwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1hYXMtZGVmYXVsdC1nYXRld2F5LXRpZXItZnJlZTpjbHVzdGVyLWFkbWluLWIwOTA2N2E2In0.CPivrhAkXGxfyB46AT5EEje2hgTtoFqbr2u2Giy-63eNxKF0kW8r__cf1wIVqOb3r8HsW_tnk7Xn4gCjcaZ8ZTwo3TRLkrMsxaj_lqPFDkzHWdl3aO6bc5OrWjsRxhqKyhKaZqdMU2ZTIaTFO2BbzxhFB5WqZ351oCGOlXmLVxDtQJRqYJU7ttLFR8mdH_5Xu0SJMPyz9P-HyTriBaRfb9HOTWzAPGsP9ArbWBGeB_soTswiOH28Dr1tnnETnJ5OVbN7msWwZmx3nTKdfkTLhFuLSuILuLsI7rNKqdQjlRF9lZvE3kupdDlzT47WenGD8kjJy4dz0SQoZ2bZoItKTdT9HkIVtc8r3V0w_SvnaXu1CVlARNe7iNbGUcc_3mSV2cT9_-9u-fFAQrR2RDPa8sc3HOh2X1rzrzrcmZn855p2OYCg_gUrT2wK95ecAWtZQkS_lnVU6_js-m7r1iu8CVh5C1VPKTHTyzGbc7N7jDdmU3v6Jka-dUPHNUahj5943QLCSGBUSvic32d7fR0ig7x7rDCllAH5eDADoSmIYFsh5mWCP1qVU-FjZFR15_UQGmP99dijtQs9LPPERUl-WT7pmtA_ebBnykUQQXP3lpZeegz6svgBsNz6ESehIirW-zYdlfhABgOPsOoxQafgkjax9DcZe4hd2JZbk5xnlHU"
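The token is a standard three-part JWT, so you can check when it expires before running the rest of the demo. A minimal sketch using base64 and jq (assumes bash; base64url uses `-` and `_` and drops padding, so both have to be restored before decoding):

```shell
# Decode the JWT payload (the second dot-separated segment) and print its
# expiry as a Unix timestamp. Map base64url characters back to standard
# base64 and re-pad to a multiple of 4 before decoding.
payload=$(cut -d. -f2 <<<"$TOKEN" | tr '_-' '/+')
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d | jq -r '.exp'
```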

List all models (unified listing of local and external providers)

curl -sSk -H "Authorization: Bearer $TOKEN" "https://${HOST}/maas-api/v1/models" | jq
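To pull out just the model IDs, filter the listing with jq. This assumes the gateway returns an OpenAI-style list body (`{"data":[{"id": ...}]}`); adjust the path if the shape differs:

```shell
# List only the model IDs from the unified listing (assumed OpenAI-style shape).
curl -sSk -H "Authorization: Bearer $TOKEN" "https://${HOST}/maas-api/v1/models" \
  | jq -r '.data[].id'
```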

Chat with OpenAI (external provider; the gateway injects the key sk-openai-key-for-demo)

curl -sSk -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4","messages":[{"role":"user","content":"Hello from OpenAI"}],"max_tokens":10}' \
  "https://${HOST}/external/openai/v1/chat/completions" | jq

Chat with Anthropic (external provider; the gateway injects the key sk-ant-claude-key-for-demo)

curl -sSk -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"claude-3-sonnet","messages":[{"role":"user","content":"Hello from Claude"}],"max_tokens":10}' \
  "https://${HOST}/external/anthropic/v1/chat/completions" | jq

Chat with vLLM (external GPU, real inference)

curl -sSk -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"Qwen/Qwen3-0.6B","messages":[{"role":"user","content":"Say hi in one sentence."}],"max_tokens":20}' \
  "https://${HOST}/external/vllm/v1/chat/completions" | jq

Chat with local model (on-prem KServe, no external key needed)

curl -sSk -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"facebook/opt-125m","messages":[{"role":"user","content":"Hello from on-prem"}],"max_tokens":10}' \
  "https://${HOST}/llm/facebook-opt-125m-simulated/v1/chat/completions" | jq
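Any of the chat calls above can be trimmed to just the assistant's reply, assuming the endpoints return the standard OpenAI chat-completions response shape:

```shell
# Print only the reply text (assumed OpenAI chat-completions response shape).
curl -sSk -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"facebook/opt-125m","messages":[{"role":"user","content":"Hello from on-prem"}],"max_tokens":10}' \
  "https://${HOST}/llm/facebook-opt-125m-simulated/v1/chat/completions" \
  | jq -r '.choices[0].message.content'
```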

No auth - expect 401 from all providers

curl -sSk -o /dev/null -w "%{http_code}\n" "https://${HOST}/external/openai/v1/models"
curl -sSk -o /dev/null -w "%{http_code}\n" "https://${HOST}/external/anthropic/v1/models"
curl -sSk -o /dev/null -w "%{http_code}\n" "https://${HOST}/external/vllm/v1/models"
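The same three checks can run as one loop; every external provider path should come back 401 without a token:

```shell
# Hit each external provider path with no Authorization header; expect 401.
for p in openai anthropic vllm; do
  code=$(curl -sSk -o /dev/null -w "%{http_code}" "https://${HOST}/external/${p}/v1/models")
  echo "${p}: ${code}"
done
```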

Wrong API key, sent directly to the vLLM server - expect 401 (proves the gateway's key injection is required)

curl -sSk -H "Authorization: Bearer bogus-key" \
  -H "Content-Type: application/json" \
  -d '{"model":"Qwen/Qwen3-0.6B","messages":[{"role":"user","content":"Hello"}],"max_tokens":10}' \
  "http://ec2-34-202-9-189.compute-1.amazonaws.com:8000/v1/chat/completions"

Rate limiting - expect 200s then 429s (OpenAI)

The rate-limit counter resets after 2 minutes

for i in {1..16}; do
  curl -sSk -o /dev/null -w "%{http_code}\n" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"model":"gpt-4","messages":[{"role":"user","content":"Hello"}],"max_tokens":10}' \
    "https://${HOST}/external/openai/v1/chat/completions"
done
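Instead of eyeballing 16 status lines, the loop's output can be tallied with `sort | uniq -c`; you should see a batch of 200s and the remainder as 429s (the exact split depends on the configured free-tier limit):

```shell
# Same loop as above, but tally the status codes.
for i in {1..16}; do
  curl -sSk -o /dev/null -w "%{http_code}\n" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"model":"gpt-4","messages":[{"role":"user","content":"Hello"}],"max_tokens":10}' \
    "https://${HOST}/external/openai/v1/chat/completions"
done | sort | uniq -c
```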

Rate limiting - expect 200s then 429s (vLLM)

The rate-limit counter resets after 2 minutes

for i in {1..16}; do
  curl -sSk -o /dev/null -w "%{http_code}\n" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"model":"Qwen/Qwen3-0.6B","messages":[{"role":"user","content":"Hello"}],"max_tokens":10}' \
    "https://${HOST}/external/vllm/v1/chat/completions"
done