1-food-chatflow-traces.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/caleb-kaiser/bc116c584301eddc4a238c7d7d18a944/1-food-chatflow-traces.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"<img src=\"https://raw.githubusercontent.com/comet-ml/opik/main/apps/opik-documentation/documentation/static/img/opik-logo.svg\" width=\"250\"/>"
],
"metadata": {
"id": "s8W39VBR0CAJ"
}
},
{
"cell_type": "markdown",
"metadata": {
"id": "UseBjjnnrrTO"
},
"source": [
"# Food Chatbot Using Prompt Chaining\n",
"\n",
"In this exercise, you'll use Opik to trace a chatbot that uses prompt chaining. We've provided you an LLMClient class, which will allow you to easily use OpenAI or open source models via LiteLLM."
]
},
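{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before diving in, here is a minimal sketch of how Opik's `@track` decorator is used for tracing (illustrative only; `reverse_text` is a made-up example, and it assumes `opik.configure()` has already been run as in the cells below):\n",
"\n",
"```python\n",
"from opik import track\n",
"\n",
"@track\n",
"def reverse_text(text: str) -> str:\n",
"    # Each call is logged as a trace in the OPIK_PROJECT_NAME project,\n",
"    # with the function's inputs and outputs captured automatically.\n",
"    return text[::-1]\n",
"\n",
"reverse_text(\"hello\")\n",
"```"
]
},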
{
"cell_type": "markdown",
"source": [
"# Imports & Configuration"
],
"metadata": {
"id": "FBSBZpNlAczx"
}
},
{
"cell_type": "code",
"source": [
"%pip install opik openai litellm --quiet"
],
"metadata": {
"id": "RPHuX5YytFLz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HSIDXzU8rrTP"
},
"outputs": [],
"source": [
"import os\n",
"import IPython\n",
"import opik\n",
"from opik import track\n",
"from opik.integrations.openai import track_openai\n",
"from openai import OpenAI\n",
"import getpass\n",
"\n",
"# Define project name to enable tracing\n",
"os.environ[\"OPIK_PROJECT_NAME\"] = \"food_chatbot\""
]
},
{
"cell_type": "code",
"source": [
"# opik configs\n",
"if \"OPIK_API_KEY\" not in os.environ:\n",
" os.environ[\"OPIK_API_KEY\"] = getpass.getpass(\"Enter your Opik API key: \")\n",
"\n",
"opik.configure()"
],
"metadata": {
"id": "PMUAEgyEtMEO"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# openai configs\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")\n",
"\n",
"client = OpenAI()\n",
"openai_client = track_openai(client)"
],
"metadata": {
"id": "KilkqIqptWw0"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Prompts & Chain Steps"
],
"metadata": {
"id": "i4jOTa1ZAlTM"
}
},
{
"cell_type": "markdown",
"metadata": {
"id": "Rxjh1iRtrrTP"
},
"source": [
"Define menu items for the food chatbot:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "AUjZawVdrrTQ"
},
"outputs": [],
"source": [
"menu_items = \"\"\"\n",
"Menu: Kids Menu\n",
"Food Item: Mini Cheeseburger\n",
"Price: $6.99\n",
"Vegan: N\n",
"Popularity: 4/5\n",
"Included: Mini beef patty, cheese, lettuce, tomato, and fries.\n",
"\n",
"Menu: Appetizers\n",
"Food Item: Loaded Potato Skins\n",
"Price: $8.99\n",
"Vegan: N\n",
"Popularity: 3/5\n",
"Included: Crispy potato skins filled with cheese, bacon bits, and served with sour cream.\n",
"\n",
"Menu: Appetizers\n",
"Food Item: Bruschetta\n",
"Price: $7.99\n",
"Vegan: Y\n",
"Popularity: 4/5\n",
"Included: Toasted baguette slices topped with fresh tomatoes, basil, garlic, and balsamic glaze.\n",
"\n",
"Menu: Main Menu\n",
"Food Item: Grilled Chicken Caesar Salad\n",
"Price: $12.99\n",
"Vegan: N\n",
"Popularity: 4/5\n",
"Included: Grilled chicken breast, romaine lettuce, Parmesan cheese, croutons, and Caesar dressing.\n",
"\n",
"Menu: Main Menu\n",
"Food Item: Classic Cheese Pizza\n",
"Price: $10.99\n",
"Vegan: N\n",
"Popularity: 5/5\n",
"Included: Thin-crust pizza topped with tomato sauce, mozzarella cheese, and fresh basil.\n",
"\n",
"Menu: Main Menu\n",
"Food Item: Spaghetti Bolognese\n",
"Price: $14.99\n",
"Vegan: N\n",
"Popularity: 4/5\n",
"Included: Pasta tossed in a savory meat sauce made with ground beef, tomatoes, onions, and herbs.\n",
"\n",
"Menu: Vegan Options\n",
"Food Item: Veggie Wrap\n",
"Price: $9.99\n",
"Vegan: Y\n",
"Popularity: 3/5\n",
"Included: Grilled vegetables, hummus, mixed greens, and a wrap served with a side of sweet potato fries.\n",
"\n",
"Menu: Vegan Options\n",
"Food Item: Vegan Beyond Burger\n",
"Price: $11.99\n",
"Vegan: Y\n",
"Popularity: 4/5\n",
"Included: Plant-based patty, vegan cheese, lettuce, tomato, onion, and a choice of regular or sweet potato fries.\n",
"\n",
"Menu: Desserts\n",
"Food Item: Chocolate Lava Cake\n",
"Price: $6.99\n",
"Vegan: N\n",
"Popularity: 5/5\n",
"Included: Warm chocolate cake with a gooey molten center, served with vanilla ice cream.\n",
"\n",
"Menu: Desserts\n",
"Food Item: Fresh Berry Parfait\n",
"Price: $5.99\n",
"Vegan: Y\n",
"Popularity: 4/5\n",
"Included: Layers of mixed berries, granola, and vegan coconut yogurt.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zl-C3LLSrrTP"
},
"outputs": [],
"source": [
"# Simple little client class for using different LLM APIs (OpenAI or LiteLLM)\n",
"class LLMClient:\n",
" def __init__(self, client_type: str =\"openai\", model: str =\"gpt-4o-mini\"):\n",
" self.client_type = client_type\n",
" self.model = model\n",
"\n",
" if self.client_type == \"openai\":\n",
" self.client = track_openai(openai.OpenAI())\n",
"\n",
" else:\n",
" self.client = None\n",
"\n",
" # LiteLLM query function\n",
" def _get_litellm_response(self, query: str, system: str = \"You are a helpful assistant.\"):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system },\n",
" { \"role\": \"user\", \"content\": query }\n",
" ]\n",
"\n",
" response = litellm.completion(\n",
" model=self.model,\n",
" messages=messages\n",
" )\n",
"\n",
" return response.choices[0].message.content\n",
"\n",
" # OpenAI query function - use **kwargs to pass arguments like temperature\n",
" def _get_openai_response(self, query: str, system: str = \"You are a helpful assistant.\", **kwargs):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system },\n",
" { \"role\": \"user\", \"content\": query }\n",
" ]\n",
"\n",
" response = self.client.chat.completions.create(\n",
" model=self.model,\n",
" messages=messages,\n",
" **kwargs\n",
" )\n",
"\n",
" return response.choices[0].message.content\n",
"\n",
"\n",
" def query(self, query: str, system: str = \"You are a helpful assistant.\", **kwargs):\n",
" if self.client_type == 'openai':\n",
" return self._get_openai_response(query, system, **kwargs)\n",
"\n",
" else:\n",
" return self._get_litellm_response(query, system)\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"source": [
"# Initialize LLMClient\n",
"\n",
"llm_client = LLMClient()"
],
"metadata": {
"id": "1ZqVsuxut3zH"
},
"execution_count": null,
"outputs": []
},
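{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check, you can call `query` directly. Keyword arguments like `temperature` are forwarded to the underlying API via `**kwargs`; the question below is just an illustration."
]
},
{
"cell_type": "code",
"source": [
"# Optional sanity check: a one-off query, with temperature forwarded via **kwargs.\n",
"# The question is an arbitrary example and isn't part of the prompt chain.\n",
"print(llm_client.query(\"In one sentence, what is prompt chaining?\", temperature=0.2))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},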
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "S4Cs7tz7rrTQ"
},
"outputs": [],
"source": [
"@track\n",
"def reasoning_step(user_query, menu_items):\n",
" prompt_template = \"\"\"\n",
" Your task is to answer questions factually about a food menu, provided below and delimited by +++++. The user request is provided here: {request}\n",
"\n",
" Step 1: The first step is to check if the user is asking a question related to any type of food (even if that food item is not on the menu). If the question is about any type of food, we move on to Step 2 and ignore the rest of Step 1. If the question is not about food, then we send a response: \"Sorry! I cannot help with that. Please let me know if you have a question about our food menu.\"\n",
"\n",
" Step 2: In this step, we check that the user question is relevant to any of the items on the food menu. You should check that the food item exists in our menu first. If it doesn't exist then send a kind response to the user that the item doesn't exist in our menu and then include a list of available but similar food items without any other details (e.g., price). The food items available are provided below and delimited by +++++:\n",
"\n",
" +++++\n",
" {menu_items}\n",
" +++++\n",
"\n",
" Step 3: If the item exists in our food menu and the user is requesting specific information, provide that relevant information to the user using the food menu. Make sure to use a friendly tone and keep the response concise.\n",
"\n",
" Perform the following reasoning steps to send a response to the user:\n",
" Step 1: <Step 1 reasoning>\n",
" Step 2: <Step 2 reasoning>\n",
" Response to the user (only output the final response): <response to user>\n",
" \"\"\"\n",
"\n",
" prompt = prompt_template.format(request=user_query, menu_items=menu_items)\n",
" response = llm_client.query(prompt)\n",
" return response"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YNNuJgRqrrTQ"
},
"outputs": [],
"source": [
"@track\n",
"def extraction_step(reasoning):\n",
" prompt_template = \"\"\"\n",
" Extract the final response from delimited by ###.\n",
"\n",
" ###\n",
" {reasoning}.\n",
" ###\n",
"\n",
" Only output what comes after \"Response to the user:\".\n",
" \"\"\"\n",
"\n",
" prompt = prompt_template.format(reasoning=reasoning)\n",
" response = llm_client.query(prompt)\n",
" return response"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "hIa1nUixrrTQ"
},
"outputs": [],
"source": [
"@track\n",
"def refinement_step(final_response):\n",
" prompt_template = \"\"\"\n",
" Perform the following refinement steps on the final output delimited by ###.\n",
"\n",
" 1). Shorten the text to one sentence\n",
" 2). Use a friendly tone\n",
"\n",
" ###\n",
" {final_response}\n",
" ###\n",
" \"\"\"\n",
"\n",
" prompt = prompt_template.format(final_response=final_response)\n",
" response = llm_client.query(prompt)\n",
" return response"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5nFkQqGIrrTR"
},
"outputs": [],
"source": [
"@track\n",
"def verification_step(user_question, refined_response, menu_items):\n",
" prompt_template = \"\"\"\n",
" Your task is to check that the refined response (delimited by ###) is providing a factual response based on the user question (delimited by @@@) and the menu below (delimited by +++++). If yes, just output the refined response in its original form (without the delimiters). If no, then make the correction to the response and return the new response only.\n",
"\n",
" User question: @@@ {user_question} @@@\n",
"\n",
" Refined response: ### {refined_response} ###\n",
"\n",
" +++++\n",
" {menu_items}\n",
" +++++\n",
" \"\"\"\n",
"\n",
" prompt = prompt_template.format(user_question=user_question, refined_response=refined_response, menu_items=menu_items)\n",
" response = llm_client.query(prompt)\n",
" return response"
]
},
{
"cell_type": "markdown",
"source": [
"# Putting It All Together"
],
"metadata": {
"id": "YOzcx1rrvYUd"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8xOEoTqKrrTR"
},
"outputs": [],
"source": [
"@track\n",
"def generate_food_chatbot(user_query, menu_items):\n",
" reasoning = reasoning_step(user_query, menu_items)\n",
" extraction = extraction_step(reasoning)\n",
" refinement = refinement_step(extraction)\n",
" verification = verification_step(user_query, refinement, menu_items)\n",
" return verification\n",
"\n",
"generate_food_chatbot(\"What is the price of the cheeseburger?\", menu_items)"
]
}
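,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To exercise the chain's fallback branches (an off-menu item and a non-food question), you can try a couple more queries; the questions below are illustrative, and each call produces its own trace in Opik."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Two more illustrative queries that exercise the chain's fallback branches.\n",
"print(generate_food_chatbot(\"Do you have sushi?\", menu_items))\n",
"print(generate_food_chatbot(\"What's the capital of France?\", menu_items))"
]
}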
],
"metadata": {
"kernelspec": {
"display_name": "comet-eval",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.15"
},
"colab": {
"provenance": [],
"collapsed_sections": [
"FBSBZpNlAczx",
"i4jOTa1ZAlTM",
"YOzcx1rrvYUd"
],
"include_colab_link": true
}
},
"nbformat": 4,
"nbformat_minor": 0
}