chinthakindi-saikumar / claude_with_ollama.md
Last active February 24, 2026 12:58 — forked from iam-veeramalla/claude_with_ollama.md
Claude Code integration with Ollama to use local models

# Run Claude with the power of local LLMs using Ollama

## Install Ollama

  1. Open a terminal and run: `curl -fsSL https://ollama.com/install.sh | sh` (Linux/macOS; on Windows, use the installer from ollama.com)

## Pull the Model

  1. Pull a model suited to your system's resources using one of the commands below: `ollama pull glm-4.7-flash`, `ollama pull gpt-oss:20b` (better quality, but needs more memory), or `ollama pull gemma:2b` (lightweight)
  2. Optional: run `ollama run gemma:2b` to chat with the model locally and confirm it works
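
Once a model is pulled, Ollama serves it over a local REST API on `http://localhost:11434`, which is what tools like Claude Code talk to. A minimal Python sketch of building a request to its `/api/generate` endpoint — the helper name `build_generate_request` is mine, and `gemma:2b` is just the lightweight model from the step above:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> request.Request:
    """Build a POST request for Ollama's /api/generate endpoint (hypothetical helper)."""
    payload = {
        "model": model,   # e.g. "gemma:2b", pulled in the step above
        "prompt": prompt,
        "stream": False,  # ask for a single JSON object instead of a token stream
    }
    return request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("gemma:2b", "Say hello in one word.")
# With the Ollama server running, sending it would look like:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The `urlopen` call is commented out because it only succeeds when the Ollama server is actually running on your machine.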