If you have a .edu email, you can get GitHub Copilot Pro for free through GitHub Education. Here's the sign-up guide.
Copilot Pro gets you access to models from OpenAI, Claude, Gemini, Grok, and more. The VS Code integration is nice, but what's really useful is that you can expose all of this as a local OpenAI-compatible API endpoint, meaning any tool or script that speaks the OpenAI API just works with it, for free.
The trick is LiteLLM, which acts as a proxy between your code and whatever model backend you want. All you need is one config file.
- GitHub Education access with Copilot Pro enabled
- Python 3.11 (avoid 3.14; it breaks some of LiteLLM's proxy dependencies)
- VS Code, to look up the slugs of the models you want to use
Spin up a venv and install the proxy extras:
```shell
python3.11 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install 'litellm[proxy]'
```

Paste this into a file called `litellm-config.yaml`:
```yaml
model_list:
  - model_name: gpt-4.1
    litellm_params:
      model: github_copilot/gpt-4.1
  - model_name: gpt-5.1-codex
    litellm_params:
      model: github_copilot/gpt-5.1-codex
  - model_name: claude-sonnet-4.5
    litellm_params:
      model: github_copilot/claude-sonnet-4.5
  - model_name: gemini-3-pro
    litellm_params:
      model: github_copilot/gemini-3-pro-preview
```

`model_name` is the alias you'll use in your API calls. `litellm_params.model` is the actual Copilot model slug; it always starts with `github_copilot/`. You can add as many entries as you want:
| `model_name` (your alias) | `litellm_params.model` (Copilot slug) |
|---|---|
| `gpt-4.1` | `github_copilot/gpt-4.1` |
| `claude-sonnet-4.6` | `github_copilot/claude-sonnet-4.6` |
| `gemini-3-pro` | `github_copilot/gemini-3-pro-preview` |
| `gpt-5.1-codex` | `github_copilot/gpt-5.1-codex` |
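An easy mistake when adding entries is forgetting the `github_copilot/` prefix on a slug. Here's a quick sanity check you can run against your config file — a sketch, not a real YAML parser; it naively scans for the alias/slug pairs in the flat layout shown above:

```python
# Naive scan of a litellm-config.yaml: pairs each model_name (alias)
# with the model slug that follows it, and raises if a slug is missing
# the required "github_copilot/" prefix. Only handles the flat layout
# shown above, not arbitrary YAML.
def check_config(text: str) -> dict[str, str]:
    aliases, slugs = [], []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("- model_name:"):
            aliases.append(line.split(":", 1)[1].strip())
        elif line.startswith("model:"):
            slugs.append(line.split(":", 1)[1].strip())
    pairs = dict(zip(aliases, slugs))
    for alias, slug in pairs.items():
        if not slug.startswith("github_copilot/"):
            raise ValueError(f"{alias}: slug {slug!r} lacks the github_copilot/ prefix")
    return pairs

config = """\
model_list:
  - model_name: gpt-4.1
    litellm_params:
      model: github_copilot/gpt-4.1
  - model_name: gemini-3-pro
    litellm_params:
      model: github_copilot/gemini-3-pro-preview
"""
print(check_config(config))
```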
Finding the slug for a model: open VS Code, go to the Copilot Chat panel, and click the model dropdown at the top. The last option is Manage Models; click it to see all available models. Hover over any model name and you'll see its slug. Prepend `github_copilot/` to it and you're good.
No API keys to set. LiteLLM handles Copilot auth on its own (more on that below).
```shell
litellm --config litellm-config.yaml
```

The first time you run this, LiteLLM will prompt you to authenticate with GitHub. Just go through the OAuth flow; it only happens once. After that, your token is cached.
Once you're authenticated, you should see:
```
LiteLLM: Proxy initialized with Config, Set models:
    gpt-4.1
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
```
Swap the model below for any of the aliases you defined in the YAML file.
```shell
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

If it's working, you'll get back something like:
```json
{
  "id": "xxx",
  "model": "gpt-4.1",
  "object": "chat.completion",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 8,
    "completion_tokens": 11,
    "total_tokens": 19
  }
}
```

And you've got an OpenAI-compatible API server running on your machine!
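Whatever client you use, the response is the standard chat-completion object, so pulling out the reply text and token counts works the same everywhere. A minimal sketch, using a plain dict shaped like the sample response above:

```python
# Extract the assistant's reply and total token usage from a standard
# chat-completion response (here, a plain dict shaped like the sample).
def unpack(resp: dict) -> tuple[str, int]:
    content = resp["choices"][0]["message"]["content"]
    total = resp["usage"]["total_tokens"]
    return content, total

sample = {
    "id": "xxx",
    "model": "gpt-4.1",
    "object": "chat.completion",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help you today?"},
        }
    ],
    "usage": {"prompt_tokens": 8, "completion_tokens": 11, "total_tokens": 19},
}

text, tokens = unpack(sample)
print(text)    # Hello! How can I help you today?
print(tokens)  # 19
```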
Since the proxy speaks the OpenAI protocol, you just point the SDK at localhost:4000:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",
    api_key="anything",  # no real key needed
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Or you can use LiteLLM's SDK directly without starting the proxy server. More details on that here.
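Many tools that speak the OpenAI API don't take a base URL in code; instead they read the standard environment variables. Assuming your tool honors `OPENAI_BASE_URL` and `OPENAI_API_KEY` (the official OpenAI Python SDK does; check your tool's docs), you can point it at the proxy like this:

```shell
# Point any OPENAI_BASE_URL-aware tool at the local proxy.
export OPENAI_BASE_URL=http://localhost:4000
export OPENAI_API_KEY=anything   # placeholder; the proxy doesn't check it
```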
Heads up: Copilot Pro's rate limits still apply. The proxy doesn't bypass them.
| Problem | Fix |
|---|---|
| `litellm` command not found | You probably forgot to activate the venv: `source .venv/bin/activate` |
| Weird Python dependency errors | Switch to Python 3.11 and avoid 3.14 |