@stepanogil
Created January 12, 2026 12:16
Function Calling Deep Dive.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyMWI00TsZK/c6EhBvSiUQgT",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/stepanogil/8086a2eefe6d5e09ea11cf100c12b0db/function-calling-deep-dive.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# Function Calling Deep Dive\n",
"\n",
"Stephen Bonifacio\n",
"\n",
"[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?logo=linkedin)](https://www.linkedin.com/in/stephenbonifacio/)\n",
"\n",
"[![X](https://img.shields.io/badge/X-Follow-black?logo=twitter)](https://x.com/Stepanogil)\n",
"\n",
"\n",
"\n"
],
"metadata": {
"id": "x8f1tFLcmxOL"
}
},
{
"cell_type": "markdown",
"source": [
"# Introduction\n",
"\n",
"Function calling lets AI systems (like ChatGPT) work with external tools, apps, or services by asking them to do specific tasks in a structured way. This means the AI can do things like look up information, set reminders, or run bits of code. In short, it allows AI to go beyond just chatting; it can take action and get real things done by using other tools.\n",
"\n",
"### *Scenario*\n",
"\n",
"You're building an AI Agent that acts as a sales assistant for an e-commerce site.\n",
"\n",
"The website sells the following t-shirts:\n",
"\n",
"**Name:** *“My AI is Smarter Than Your Honor Student”*  \n",
"**Available sizes:** S, M, L  \n",
"**Available colors:** Black, White, Light Blue\n",
"\n",
"\n",
"**Name:** *“Keep Calm and Trust the Neural Network”*  \n",
"**Available sizes:** S, M, L  \n",
"**Available colors:** Black, Pink\n",
"\n",
"**Name:** *“I'm Just Here for the Deep Learning”*  \n",
"**Available sizes:** S, M  \n",
"**Available colors:** White\n",
"\n",
"The agent interacts with the site's tools via function calling to assist customers."
],
"metadata": {
"id": "W7LlBCBp77F9"
}
},
{
"cell_type": "markdown",
"source": [
"### *Sample Interaction*\n",
"\n",
"**User**: \"I'd like to buy the 'My AI is Smarter Than Your Honor Student' t-shirt in size M and color White please.\"\n",
"\n",
"**Assistant**: \"The t-shirt \"My AI is Smarter Than Your Honor Student\" in size M and color White has been added to your order! 🎉👕\n",
"\n",
"Here's the current status of your cart:\n",
"\n",
"| Item | Quantity | Size | Color |\n",
"|------------|----------|------|-------|\n",
"| My AI is Smarter Than Your Honor Student | 1 | M | White |\n",
"\n",
"If you need anything else, just let me know! 😊\"\n"
],
"metadata": {
"id": "M95CgXgXy2Qi"
}
},
{
"cell_type": "markdown",
"source": [
"**Note**: The interaction above required the Agent to make 2 LLM calls.\n",
"\n",
"I've outlined what happens at each LLM call below."
],
"metadata": {
"id": "MD2IEYF1DvAW"
}
},
{
"cell_type": "markdown",
"source": [
"# First LLM Call\n",
"\n",
"The first LLM call is the 'Routing' step - i.e., the LLM routes the request to the appropriate tool for the given user input."
],
"metadata": {
"id": "110laVBqzmlK"
}
},
{
"cell_type": "markdown",
"source": [
"## The Prompt\n",
"\n",
"\n",
"### The User Input:\n",
"\n",
"**User**: `\"I'd like to buy the 'My AI is Smarter Than Your Honor Student' t-shirt in size M and color White please.\"`\n",
"\n",
"### The Tools List Input (i.e. the list of tools that the agent can use):\n",
"\n",
"The LLM is provided with 3 tools, namely:\n",
"\n",
"1. `get_t_shirts` - retrieves the list of available t-shirts\n",
"2. `add_to_order` - adds a t-shirt to the order/cart\n",
"3. `place_order` - checks out an order/cart\n",
"\n",
"For now, just note the `description` values - these are passed to the LLM and help guide it on what tool to use for any given scenario.\n",
"\n",
"\n",
"```\n",
"[\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_t_shirts\",\n",
" \"description\": \"Use this tool to get the list of available t-shirts\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {},\n",
" \"required\": []\n",
" }\n",
" }\n",
" },\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"add_to_order\",\n",
" \"description\": \"Use this tool when adding a t-shirt to a user's order\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"name\": {\"type\": \"string\", \"description\": \"The name of the t-shirt\"},\n",
" \"quantity\": {\"type\": \"integer\", \"description\": \"The quantity of the t-shirt\"},\n",
" \"size\": {\"type\": \"string\", \"description\": \"The size of the t-shirt\"},\n",
" \"color\": {\"type\": \"string\", \"description\": \"The color of the t-shirt\"}\n",
" },\n",
" \"required\": [\"name\", \"size\", \"color\", \"quantity\"]\n",
" }\n",
" }\n",
" },\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"place_order\",\n",
" \"description\": \"Use this tool to place an order for the t-shirts\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {},\n",
" \"required\": []\n",
" }\n",
" }\n",
" }\n",
"]\n",
"```"
],
"metadata": {
"id": "l6XK_6_D7hWr"
}
},
{
"cell_type": "markdown",
"source": [
"## Passing the Prompt to the LLM (via OpenAI API)\n",
"\n",
"(*This is the same as sending a message to ChatGPT, but programmatically via the API*)\n",
"\n",
"Note that both the `User Input` and the `Tools List` are passed to the LLM at the same time.\n",
"\n",
"While the code doesn’t explicitly show this, a useful mental model is to imagine a prompt structured like this being passed to the LLM:\n",
"\n",
"```\n",
"Here is the user input: <User Input>,\n",
"Use the appropriate tools from this list: <Tools List> that can assist the user.\n",
"\n",
"```\n",
"\n",
"In the code below, the `Tools List` is passed to the LLM via a separate parameter (called `tools`), while the `User Input` is passed via the normal `messages` parameter. The LLM has been trained to treat the functions passed via the `tools` parameter as callable depending on the user input, so explicit prompting for this is unnecessary."
],
"metadata": {
"id": "tYN8g9WK8jYV"
}
},
{
"cell_type": "code",
"source": [
"!pip install openai"
],
"metadata": {
"id": "S8bcjcF358Ly"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "HmyBKeDLmuq7",
"outputId": "3a744924-504d-4f26-876a-05e9098896ec"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"{\n",
" \"name\": \"add_to_order\",\n",
" \"parameters\": {\n",
" \"name\": \"My AI is Smarter Than Your Honor Student\",\n",
" \"quantity\": 1,\n",
" \"size\": \"M\",\n",
" \"color\": \"White\"\n",
" }\n",
"}\n"
]
}
],
"source": [
"# Passing the Input to the LLM (implemented in python code)\n",
"import json\n",
"from openai import OpenAI\n",
"\n",
"# Set up the client\n",
"api_key = \"sk-proj-...\" # the OpenAI API key. get yours from https://platform.openai.com/\n",
"client = OpenAI(api_key=api_key)\n",
"\n",
"# Sending the prompt to chat completions endpoint to get an LLM response\n",
"llm_response = client.chat.completions.create(\n",
"# We are using the 'gpt-4o-mini' model\n",
"model=\"gpt-4o-mini\",\n",
"messages=[\n",
"# The User Input\n",
" {\"role\": \"user\", \"content\": \"I'd like to order the 'My AI is Smarter Than Your Honor Student' t-shirt in size M and color White please.\"}\n",
" ],\n",
"# The Tool List input\n",
"tools=[\n",
" # Tool #1\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_t_shirts\",\n",
" \"description\": \"Use this tool to get the list of available t-shirts\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {},\n",
" \"required\": []\n",
" }\n",
" }\n",
" },\n",
" # Tool #2\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"add_to_order\",\n",
" \"description\": \"Use this tool when adding a t-shirt to a user's order\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"name\": {\"type\": \"string\", \"description\": \"The name of the t-shirt\"},\n",
" \"quantity\": {\"type\": \"integer\", \"description\": \"The quantity of the t-shirt\"},\n",
" \"size\": {\"type\": \"string\", \"description\": \"The size of the t-shirt\"},\n",
" \"color\": {\"type\": \"string\", \"description\": \"The color of the t-shirt\"}\n",
" },\n",
" \"required\": [\"name\", \"size\", \"color\", \"quantity\"]\n",
" }\n",
" }\n",
" },\n",
" # Tool #3\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"place_order\",\n",
" \"description\": \"Use this tool to place an order for the t-shirts\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {},\n",
" \"required\": []\n",
" }\n",
"\n",
" }\n",
" }\n",
" ]\n",
")\n",
"\n",
"# Helper code to properly display the output from the LLM\n",
"tool_call_dict = {\n",
" \"name\": llm_response.choices[0].message.tool_calls[0].function.name,\n",
" \"parameters\": json.loads(llm_response.choices[0].message.tool_calls[0].function.arguments)\n",
"}\n",
"print(json.dumps(tool_call_dict, indent=4))"
]
},
{
"cell_type": "markdown",
"source": [
"## The Output\n",
"\n",
"Normally, the LLM will output something in a 'chat' format (e.g. Q: \"Hello!\". A: \"Hello! How can I assist you today?\").\n",
"\n",
"However, when the LLM 'decides' that it needs to use a tool (like in our example) based on the given user input, it emits an output like the one below:\n",
"\n",
"**Note**: This output is in JSON - a data format that stores information as simple key-value pairs.\n",
"\n",
"```\n",
"{\n",
" \"name\": \"add_to_order\",\n",
" \"parameters\": {\n",
" \"name\": \"My AI is Smarter Than Your Honor Student\",\n",
" \"quantity\": 1,\n",
" \"size\": \"M\",\n",
" \"color\": \"White\"\n",
" }\n",
"}\n",
"```\n",
"\n",
"Let's break this down:\n",
"\n",
"Given the user input `\"I'd like to order the 'My AI is Smarter Than Your Honor Student' t-shirt in size M and color White please.\"`, the LLM decided to use the `'add_to_order'` tool.\n",
"\n",
"The LLM selected this tool based on the `description` we have assigned this tool: `Use this tool when adding a t-shirt to a user's order`\n",
"\n",
">Hint: All the `description` values are prompt engineer-able. 🚀\n",
"\n",
"Using this tool requires 4 parameters (based on the `required` field as defined by the specs below).\n",
"\n",
"These are `name`, `size`, `color` and `quantity`.\n",
"\n",
"```\n",
" {\n",
" \"name\": \"add_to_order\",\n",
" \"description\": \"Use this tool when adding a t-shirt to a user's order\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"name\": {\"type\": \"string\", \"description\": \"The name of the t-shirt\"},\n",
" \"quantity\": {\"type\": \"integer\", \"description\": \"The quantity of the t-shirt\"},\n",
" \"size\": {\"type\": \"string\", \"description\": \"The size of the t-shirt\"},\n",
" \"color\": {\"type\": \"string\", \"description\": \"The color of the t-shirt\"}\n",
" },\n",
" \"required\": [\"name\", \"size\", \"color\", \"quantity\"]\n",
" }\n",
"```\n",
"Given the `description` of what each parameter means, e.g. `name` is `The name of the t-shirt`, the LLM was able to correctly identify the appropriate parameter values to be used with this tool call. These are:\n",
"\n",
"```\n",
"\"name\": \"My AI is Smarter Than Your Honor Student\",\n",
"\"quantity\": 1,\n",
"\"size\": \"M\",\n",
"\"color\": \"White\"\n",
"```\n",
"Now, we can execute the tool selected by the LLM: `add_to_order` using the appropriate parameter values:\n",
"\n",
"```\n",
"add_to_order(name=\"My AI is Smarter Than Your Honor Student\", quantity=1, size=\"M\", color=\"White\")\n",
"```\n",
"\n"
],
"metadata": {
"id": "PIahGiv_TEES"
}
},
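{
"cell_type": "markdown",
"source": [
"As a hedged sketch of this execution step (the `TOOL_REGISTRY` dict and the stub tool body below are illustrative assumptions, not the site's real code), you can map each tool name to a local Python callable and unpack the LLM-provided parameters into the matching function:\n",
"\n",
"```python\n",
"# Hypothetical dispatch sketch: map tool names to local callables,\n",
"# then execute whichever tool the LLM selected.\n",
"TOOL_REGISTRY = {\n",
"    \"add_to_order\": lambda name, quantity, size, color: (\n",
"        f\"Added {quantity} x '{name}' (size {size}, color {color}) to the order.\"\n",
"    ),\n",
"}\n",
"\n",
"llm_tool_call = {\n",
"    \"name\": \"add_to_order\",\n",
"    \"parameters\": {\n",
"        \"name\": \"My AI is Smarter Than Your Honor Student\",\n",
"        \"quantity\": 1,\n",
"        \"size\": \"M\",\n",
"        \"color\": \"White\",\n",
"    },\n",
"}\n",
"\n",
"tool_fn = TOOL_REGISTRY[llm_tool_call[\"name\"]]\n",
"result = tool_fn(**llm_tool_call[\"parameters\"])\n",
"print(result)\n",
"```\n",
"\n",
"This keeps the routing decision (made by the LLM) cleanly separated from the execution (done by your code)."
],
"metadata": {}
},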
{
"cell_type": "markdown",
"source": [
"### But first, we need to create the tool...\n",
"\n",
"In this step, we're building the 'integration' between the LLM and our hypothetical e-commerce website.\n",
"\n",
"In a real, live system, executing `add_to_order` would add the selected t-shirt to a cart and save it to the database, a cache, memory, etc., of the e-commerce site via some API/function.\n",
"\n",
"If the save operation completes successfully, the API returns a 'Success' message.\n",
"\n",
"For this example, I'm mocking an API that adds the order to the database and returns a 'Success' message. The code implementation below isn't super important.\n",
"\n",
"What *is* important is to remember that this step is where you can integrate your external systems (e.g. SAP, Salesforce, Workday, ServiceNow, etc.) by creating functions and passing those function specs/signature to the LLM:\n",
"\n",
"`create_purchase_order` – create a PO in SAP using the relevant details.\n",
"\n",
"`create_sales_lead` – create a new sales lead in Salesforce.\n",
"\n",
"`initiate_job_requisition` – create a new job requisition in Workday.\n",
"\n",
"`create_incident` – log a new incident in ServiceNow.\n"
],
"metadata": {
"id": "DmyUJXCh08Ce"
}
},
{
"cell_type": "code",
"source": [
"# Mock add_to_order implementation\n",
"\n",
"def add_to_order(name, quantity, size, color):\n",
" \"\"\"\n",
" Simulates adding a t-shirt to a shopping order system.\n",
"    This is a mock (pretend) version of what a real e-commerce site would do.\n",
"\n",
" Args:\n",
"\n",
" name (str): The name of the t-shirt\n",
" quantity (int): The quantity of the t-shirt\n",
" size (str): The size of the t-shirt\n",
" color (str): The color of the t-shirt\n",
"\n",
" Returns:\n",
" str: The message to be displayed to the user.\n",
"\n",
" \"\"\"\n",
" # Mock API call to add a t-shirt to the order\n",
" def some_api_call_to_add_to_order():\n",
"\n",
" status = \"Success\"\n",
" cart_status = {\n",
" 'Item 1':\n",
" {\n",
" 'Name': name,\n",
" 'Quantity': quantity,\n",
" 'Size': size,\n",
" 'Color': color\n",
" }\n",
" }\n",
" return status, cart_status\n",
"\n",
" status, cart_status = some_api_call_to_add_to_order()\n",
"\n",
" if status == \"Success\":\n",
" # This is the output of the tool call that will be passed to the (2nd) LLM call\n",
" return f\"T-shirt added to order successfully. Cart status: {cart_status}. Inform the user that the t-shirt has been added to the order. Display the Cart status to user as markdown table. Use emojis\"\n",
" else:\n",
" return \"Inform the user the add t-shirt to order operation has failed. Ask them to try again.\""
],
"metadata": {
"id": "Amz6kkC_pN2n"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Note:** Function calling is essentially a classification task for the LLM: it analyzes the user input and selects the most appropriate tool or function to invoke.\n",
"\n",
"The LLM doesn't execute any functions or tools; it just returns the function name and the parameter values. It's then up to you (the developer) to decide whether to run it, do some checks first (like validation), or something else before moving forward."
],
"metadata": {
"id": "iWGB43uZOtmE"
}
},
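{
"cell_type": "markdown",
"source": [
"As an example of such a check, here is a minimal validation sketch (the `CATALOG` structure and the `validate_add_to_order` helper are illustrative assumptions, not part of the original tools):\n",
"\n",
"```python\n",
"# Hypothetical guardrail: validate the LLM-proposed arguments against\n",
"# the catalog before actually executing the tool.\n",
"CATALOG = {\n",
"    \"My AI is Smarter Than Your Honor Student\": {\n",
"        \"sizes\": {\"S\", \"M\", \"L\"},\n",
"        \"colors\": {\"Black\", \"White\", \"Light Blue\"},\n",
"    },\n",
"    \"Keep Calm and Trust the Neural Network\": {\n",
"        \"sizes\": {\"S\", \"M\", \"L\"},\n",
"        \"colors\": {\"Black\", \"Pink\"},\n",
"    },\n",
"    \"I'm Just Here for the Deep Learning\": {\n",
"        \"sizes\": {\"S\", \"M\"},\n",
"        \"colors\": {\"White\"},\n",
"    },\n",
"}\n",
"\n",
"def validate_add_to_order(name, quantity, size, color):\n",
"    \"\"\"Return a list of validation errors; an empty list means safe to execute.\"\"\"\n",
"    errors = []\n",
"    item = CATALOG.get(name)\n",
"    if item is None:\n",
"        errors.append(f\"Unknown t-shirt: {name!r}\")\n",
"    else:\n",
"        if size not in item[\"sizes\"]:\n",
"            errors.append(f\"Size {size!r} not available for {name!r}\")\n",
"        if color not in item[\"colors\"]:\n",
"            errors.append(f\"Color {color!r} not available for {name!r}\")\n",
"    if not isinstance(quantity, int) or quantity < 1:\n",
"        errors.append(\"Quantity must be a positive integer\")\n",
"    return errors\n",
"```\n",
"\n",
"If the list is empty, run the tool; otherwise, you could feed the errors back to the LLM so it can ask the user for a correction."
],
"metadata": {}
},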
{
"cell_type": "markdown",
"source": [
"### We execute the add_to_order tool...\n",
"\n",
"and it generates the 'Success' message as output."
],
"metadata": {
"id": "FZm9QQr91tp2"
}
},
{
"cell_type": "code",
"source": [
"add_to_order(name=\"My AI is Smarter Than Your Honor Student\", quantity=1, size=\"M\", color=\"White\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"id": "FZTJP1T4sSxk",
"outputId": "69140fba-582f-466a-b15f-75c2c3b56937"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"\"T-shirt added to order successfully. Cart status: {'Item 1': {'Name': 'My AI is Smarter Than Your Honor Student', 'Quantity': 1, 'Size': 'M', 'Color': 'White'}}. Inform the user that the t-shirt has been added to the order. Display the Cart status to user as markdown table. Use emojis\""
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 18
}
]
},
{
"cell_type": "markdown",
"source": [
"We get the output of the tool:\n",
"\n",
"`T-shirt added to order successfully. Cart status: {'Item 1': {'Name': 'My AI is Smarter Than Your Honor Student', 'Quantity': 1, 'Size': 'M', 'Color': 'White'}}. Inform the user that the t-shirt has been added to the order. Display the Cart status to user as markdown table. Use emojis`\n",
"\n",
"and we pass this to the 2nd LLM call as the `Tool Input`.\n",
"\n",
"Normally, the API will return some generic success message (e.g. 200 OK), so we applied some prompt engineering to the tool output that we'll be passing to the LLM in the 2nd LLM call.\n",
"\n",
"\n"
],
"metadata": {
"id": "tQkhRSd02DS5"
}
},
{
"cell_type": "markdown",
"source": [
"# Second LLM Call\n",
"\n",
"This is the 'Synthesis' step. It synthesizes the user input and tool output to come up with a reply to the user."
],
"metadata": {
"id": "n1em6Isu0Gky"
}
},
{
"cell_type": "markdown",
"source": [
"## Passing the Prompt to the LLM (for the 2nd time)\n",
"For this step, we just want to inform the user (via the LLM) that the 'add to order' command was successful. We don't want it to invoke any tool, so the `Tools List` was excluded from the prompt.\n",
"\n",
"Instead, we add the output of the called function as the `Tool Input` together with the `User Input`.\n",
"\n",
"The `User Input` provides context of the ongoing conversation to the LLM."
],
"metadata": {
"id": "EmhEL_8_xo9H"
}
},
{
"cell_type": "code",
"source": [
"# Passing the Input to the LLM (implemented in python code)\n",
"import json\n",
"from openai import OpenAI\n",
"from IPython.display import Markdown\n",
"\n",
"# Set up the client\n",
"api_key = \"sk-proj-...\" # the OpenAI API key. get yours from https://platform.openai.com/\n",
"client = OpenAI(api_key=api_key)\n",
"\n",
"# Sending the prompt to chat completions endpoint to get an LLM response\n",
"llm_response = client.chat.completions.create(\n",
"# We are using the 'gpt-4o-mini' model\n",
"model=\"gpt-4o-mini\",\n",
"messages=[\n",
"# The User Input\n",
" {\"role\": \"user\", \"content\": \"I'd like to order the 'My AI is Smarter Than Your Honor Student' t-shirt in size M and color White please.\"},\n",
"# The Tool Input (output of the tool call)\n",
" {\"role\": \"function\",\n",
" \"name\": \"add_to_order\",\n",
" \"content\": \"T-shirt added to order successfully. Cart status: {'Item 1': {'Name': 'My AI is Smarter Than Your Honor Student', 'Quantity': 1, 'Size': 'M', 'Color': 'White'}}. Inform the user that the t-shirt has been added to the order. Display the Cart status to user as markdown table. Use emojis\"\n",
" }\n",
"]\n",
")\n",
"# Display the response\n",
"display(Markdown(llm_response.choices[0].message.content))"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 163
},
"id": "dTMWlytis1TC",
"outputId": "a8202bca-bcef-41fa-b2c8-45feba38f357"
},
"execution_count": null,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.Markdown object>"
],
"text/markdown": "The t-shirt **\"My AI is Smarter Than Your Honor Student\"** in size **M** and color **White** has been added to your order! 🎉\n\nHere's your current cart status:\n\n| Item | Quantity | Size | Color |\n|-----------|----------|------|-------|\n| T-shirt | 1 | M | White |\n\nIf you need anything else, feel free to ask! 😊"
},
"metadata": {}
}
]
},
{
"cell_type": "markdown",
"source": [
"The 1st and 2nd LLM calls represent just one conversation turn with the user."
],
"metadata": {
"id": "tqRUO7nXMY3u"
}
},
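{
"cell_type": "markdown",
"source": [
"To extend this to multiple turns, the usual pattern is to keep appending messages to a growing history list and send the full list on each call. Here is a runnable sketch with a stubbed model call (the `fake_llm` function below is a stand-in for a real chat completions call, purely illustrative):\n",
"\n",
"```python\n",
"# Multi-turn sketch: the conversation history grows across turns, and the\n",
"# full list is sent to the model each time. 'fake_llm' is a stand-in for\n",
"# a real chat completions call.\n",
"def fake_llm(messages):\n",
"    return f\"(reply to: {messages[-1]['content']!r})\"\n",
"\n",
"messages = []\n",
"\n",
"def run_turn(user_input):\n",
"    messages.append({\"role\": \"user\", \"content\": user_input})\n",
"    reply = fake_llm(messages)\n",
"    messages.append({\"role\": \"assistant\", \"content\": reply})\n",
"    return reply\n",
"\n",
"run_turn(\"Show me the t-shirts.\")\n",
"run_turn(\"Add the first one in size M.\")\n",
"print(len(messages))  # two user entries + two assistant entries\n",
"```\n",
"\n",
"In the function-calling case, a single turn may also append a tool-call message and a tool-output message to the history before the final assistant reply."
],
"metadata": {}
},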
{
"cell_type": "markdown",
"source": [
"# BONUS: Implementing the 'get_t_shirts' tool..."
],
"metadata": {
"id": "zrRTZfX3ot5C"
}
},
{
"cell_type": "markdown",
"source": [
"## Mock the get_t_shirts tool"
],
"metadata": {
"id": "3qNF_qln4oFS"
}
},
{
"cell_type": "code",
"source": [
"from textwrap import dedent\n",
"\n",
"def get_t_shirts():\n",
" \"\"\"\n",
" Simulates getting the list of available t-shirts from a database.\n",
"\n",
" Args:\n",
" None\n",
"\n",
"    Returns:\n",
" str: The message to be displayed to the user.\n",
" \"\"\"\n",
" available_t_shirts = [\n",
" {\n",
" \"name\": \"My AI is Smarter Than Your Honor Student\",\n",
" \"available_sizes\": [\"S\", \"M\", \"L\"],\n",
" \"available_colors\": [\"Black\", \"White\", \"Light Blue\"]\n",
" },\n",
" {\n",
" \"name\": \"Keep Calm and Trust the Neural Network\",\n",
" \"available_sizes\": [\"S\", \"M\", \"L\"],\n",
" \"available_colors\": [\"Black\", \"Pink\"]\n",
" },\n",
" {\n",
" \"name\": \"I'm Just Here for the Deep Learning\",\n",
" \"available_sizes\": [\"S\", \"M\"],\n",
" \"available_colors\": [\"White\"]\n",
" }\n",
" ]\n",
"\n",
" return dedent(\n",
" f\"\"\"\n",
" Here are the available t-shirts:\n",
"\n",
" {available_t_shirts}\n",
"\n",
" Display the list as markdown table when displaying to the user.\n",
" Use emojis.\n",
" \"\"\"\n",
" )"
],
"metadata": {
"id": "OcGV8nE84s5Q"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Consolidating the 2-step LLM call as a single function: 'process_user_query'"
],
"metadata": {
"id": "4q4OkiTktBxz"
}
},
{
"cell_type": "code",
"source": [
"from openai import OpenAI\n",
"import json\n",
"from IPython.display import Markdown\n",
"\n",
"def process_user_query(user_query):\n",
" \"\"\"\n",
" Process a user query through a two-step LLM interaction.\n",
"\n",
" Args:\n",
" user_query (str): The user's input query\n",
"\n",
" Returns:\n",
" str: The final LLM response after processing tool output\n",
" \"\"\"\n",
" # Initialize OpenAI client\n",
" api_key = \"sk-proj-...\" # the OpenAI API key. get yours from https://platform.openai.com/\n",
" client = OpenAI(api_key=api_key)\n",
"\n",
" # Define the tools list\n",
" tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_t_shirts\",\n",
" \"description\": \"Use this tool to get the list of available t-shirts\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {},\n",
" \"required\": []\n",
" }\n",
" }\n",
" },\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"add_to_order\",\n",
" \"description\": \"Use this tool when adding a t-shirt to a user's order\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"name\": {\"type\": \"string\", \"description\": \"The name of the t-shirt\"},\n",
" \"quantity\": {\"type\": \"integer\", \"description\": \"The quantity of the t-shirt\"},\n",
" \"size\": {\"type\": \"string\", \"description\": \"The size of the t-shirt\"},\n",
" \"color\": {\"type\": \"string\", \"description\": \"The color of the t-shirt\"}\n",
" },\n",
" \"required\": [\"name\", \"size\", \"color\", \"quantity\"]\n",
" }\n",
" }\n",
" },\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"place_order\",\n",
" \"description\": \"Use this tool to place an order for the t-shirts\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {},\n",
" \"required\": []\n",
" }\n",
" }\n",
" }\n",
" ]\n",
"\n",
" # First LLM call to determine which tool to use\n",
" first_response = client.chat.completions.create(\n",
" model=\"gpt-4o-mini\",\n",
" messages=[{\"role\": \"user\", \"content\": user_query}],\n",
" tools=tools\n",
" )\n",
"\n",
" # Get tool call information\n",
" tool_call = first_response.choices[0].message.tool_calls[0]\n",
" tool_name = tool_call.function.name\n",
"\n",
" # If the tool called is get_t_shirts\n",
" if tool_name == \"get_t_shirts\":\n",
"\n",
"        # Execute the get_t_shirts tool\n",
" tool_output = get_t_shirts()\n",
"\n",
" # Second LLM call with the tool output\n",
" second_response = client.chat.completions.create(\n",
" model=\"gpt-4o-mini\",\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": user_query},\n",
" {\"role\": \"function\", \"name\": \"get_t_shirts\", \"content\": tool_output}\n",
" ]\n",
" )\n",
"\n",
" # Display the LLM response\n",
" display(Markdown(second_response.choices[0].message.content))\n"
],
"metadata": {
"id": "gzRl8M_PsfCE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Run the function"
],
"metadata": {
"id": "5lpfe0QqA5WK"
}
},
{
"cell_type": "code",
"source": [
"user_query = \"I would like to order the 'Neurons, not Morons' t-shirt please.\"\n",
"\n",
"process_user_query(user_query)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 202
},
"id": "bHbvpR1PB4iP",
"outputId": "1009677f-a85f-4db5-da01-fdd46844be5b"
},
"execution_count": null,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.Markdown object>"
],
"text/markdown": "It seems that the \"Neurons, not Morons?\" t-shirt is not available. However, here are some other t-shirt options you can consider:\n\n| T-Shirt Name | Available Sizes | Available Colors |\n|---------------------------------------------|-----------------|---------------------------------|\n| My AI is Smarter Than Your Honor Student | S, M, L | Black, White, Light Blue |\n| Keep Calm and Trust the Neural Network | S, M, L | Black, Pink |\n| I'm Just Here for the Deep Learning | S, M | White |\n\nIf you're interested in any of these options, please let me know! 😊"
},
"metadata": {}
}
]
},
{
"cell_type": "code",
"source": [
"user_query = \"What are the t-shirts you're selling?\"\n",
"\n",
"process_user_query(user_query)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 202
},
"id": "EG-R1BQaAmZi",
"outputId": "83ca38a4-ab58-4f23-c070-457bddaf341e"
},
"execution_count": null,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.Markdown object>"
],
"text/markdown": "Here are the t-shirts available for sale! 👕✨\n\n| T-Shirt Name | Available Sizes | Available Colors |\n|---------------------------------------------------|-----------------|-------------------------------|\n| My AI is Smarter Than Your Honor Student 🤖 | S, M, L | Black, White, Light Blue |\n| Keep Calm and Trust the Neural Network 💻 | S, M, L | Black, Pink |\n| I'm Just Here for the Deep Learning 📚 | S, M | White |\n\nLet me know if you'd like to order one! 😊"
},
"metadata": {}
}
]
},
{
"cell_type": "markdown",
"source": [
"### Note: Cells with code can be executed by clicking the ▶️ button.\n",
"Some cells require an OpenAI API key.\n",
"\n",
"Here's how to get one: https://www.youtube.com/watch?v=SzPE_AE0eEo\n",
"\n",
"Execute cells sequentially (from top to bottom) to avoid errors.\n",
"\n"
],
"metadata": {
"id": "nsZuhaib5SYg"
}
}
]
}