@Lifto
Created January 12, 2026 20:01
Brim AI with Fedora Docs RAG for post #4
#!/usr/bin/env bash
set -e
# Path to audio input
AUDIO=input.wav
# Step 1: Record from mic
echo "πŸŽ™οΈ Speak now..."
arecord -f S16_LE -r 16000 -d 5 -q "$AUDIO"
# Step 2: Transcribe using whisper.cpp
TRANSCRIPT=$(./whisper.cpp/build/bin/whisper-cli \
    -m ./whisper.cpp/models/ggml-base.en.bin \
    -f "$AUDIO" \
    | grep '^\[' \
    | sed -E 's/^\[[^]]+\][[:space:]]*//' \
    | tr -d '\n')
echo "πŸ—£οΈ $TRANSCRIPT"
# Step 3: Get relevant context from RAG database
echo "πŸ“š Searching documentation..."
CONTEXT=$(uv tool run --python 3.12 docs2db-api query "$TRANSCRIPT" \
    --format text \
    --max-chars 2000 \
    --no-refine \
    2>/dev/null || echo "")
if [ -n "$CONTEXT" ]; then
    echo "πŸ“„ Found relevant documentation:"
    echo "- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"
    echo "$CONTEXT"
    echo "- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"
else
    echo "πŸ“„ No relevant documentation found"
fi
# Step 4: Build prompt with RAG context
PROMPT="You are Brim, a steadfast butler-like advisor created by Ellis.
Your pronouns are they/them. You are deeply caring, supportive, and empathetic, but never effusive.
You speak in a calm, friendly, casual tone suitable for text-to-speech.
Rules:
- Reply with only ONE short message directly to Ellis.
- Do not write any dialogue labels (User:, Assistant:, Q:, A:), or invent more turns.
- ≀100 words.
- If the documentation below is relevant, use it to inform your answer.
- End with a gentle question, then write <eor> and stop.
Relevant Fedora Documentation:
$CONTEXT
User: $TRANSCRIPT
Assistant:"
# Step 5: Get LLM response using llama.cpp
RESPONSE=$(
    LLAMA_LOG_VERBOSITY=1 ./llama.cpp/build/bin/llama-completion \
        -m ./llama.cpp/models/microsoft_Phi-4-mini-instruct-Q4_K_M.gguf \
        -p "$PROMPT" \
        -n 150 \
        -c 4096 \
        -no-cnv \
        -r "<eor>" \
        --simple-io \
        --color off \
        --no-display-prompt
)
# Step 6: Clean up response
RESPONSE_CLEAN=$(echo "$RESPONSE" | sed -E 's/<eor>.*//I')
RESPONSE_CLEAN=$(echo "$RESPONSE_CLEAN" | sed -E 's/^[[:space:]]*Assistant:[[:space:]]*//I')
echo ""
echo "πŸ€– $RESPONSE_CLEAN"
# Step 7: Speak the response
echo "$RESPONSE_CLEAN" | espeak
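For reference, the timestamp-stripping pipeline in Step 2 can be exercised on its own. The sample text below imitates whisper-cli's `[start --> end]  text` line shape; the timestamps and wording are invented for illustration, not real model output:

```shell
# Invented sample in the bracketed-timestamp shape whisper-cli prints.
sample='[00:00:00.000 --> 00:00:02.500]   Hello there.
[00:00:02.500 --> 00:00:05.000]   How can I help?'

# Keep only bracketed lines, strip the "[...]" prefix plus trailing spaces,
# then join the segments onto one line by deleting newlines.
cleaned=$(printf '%s\n' "$sample" \
    | grep '^\[' \
    | sed -E 's/^\[[^]]+\][[:space:]]*//' \
    | tr -d '\n')
echo "$cleaned"
# β†’ Hello there.How can I help?
```

Note that `tr -d '\n'` joins multi-segment transcripts with no separating space; swapping it for `tr '\n' ' '` would keep a space between segments if that matters for your model's input.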
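The `|| echo ""` in Step 3 is load-bearing under `set -e`: without it, a failed documentation query would abort the whole script instead of falling back to an empty context. A minimal illustration of the pattern (using `false` to stand in for a failing query):

```shell
set -e
# If the command fails, the fallback runs and OUT becomes empty,
# rather than the failure killing the script under `set -e`.
OUT=$(false 2>/dev/null || echo "")
if [ -n "$OUT" ]; then
    echo "got context"
else
    echo "no context"
fi
# β†’ no context
```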
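The two `sed` passes in Step 6 can likewise be checked in isolation. The raw completion below is an invented example of what the model might emit (the `I` flag for case-insensitive substitution is a GNU sed extension, which is fine on Fedora):

```shell
# Invented raw completion: leading "Assistant:" label, the reply, the <eor>
# stop marker, and a spurious extra turn after it.
raw=' Assistant: Of course, Ellis. Shall I fetch the docs?<eor> User: more text'

# Pass 1: drop the <eor> marker and everything after it (case-insensitive).
clean=$(echo "$raw" | sed -E 's/<eor>.*//I')
# Pass 2: drop a leading "Assistant:" label and surrounding whitespace.
clean=$(echo "$clean" | sed -E 's/^[[:space:]]*Assistant:[[:space:]]*//I')
echo "$clean"
# β†’ Of course, Ellis. Shall I fetch the docs?
```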