Created October 17, 2024 10:44
ChatGPTChatSequence.ipynb (OPG)
| { | |
| "nbformat": 4, | |
| "nbformat_minor": 0, | |
| "metadata": { | |
| "colab": { | |
| "provenance": [], | |
| "include_colab_link": true | |
| }, | |
| "kernelspec": { | |
| "name": "python3", | |
| "display_name": "Python 3" | |
| }, | |
| "language_info": { | |
| "name": "python" | |
| } | |
| }, | |
| "cells": [ | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "view-in-github", | |
| "colab_type": "text" | |
| }, | |
| "source": [ | |
| "<a href=\"https://colab.research.google.com/gist/NassimElH01/08c0cdeecd130e184a43ceab155b8b47/chatgptchatsequence-ipynb-opg.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "# Chat sequence" | |
| ], | |
| "metadata": { | |
| "id": "MgpnjDz_ARFo" | |
| } | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### Install OpenAI components" | |
| ], | |
| "metadata": { | |
| "id": "bmZomqAqUet4" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "id": "WMZtZ_fFULYL", | |
| "outputId": "b40f6f4e-c1d6-4349-d629-83e2651eec68" | |
| }, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "Collecting openai\n", | |
| " Downloading openai-1.51.2-py3-none-any.whl.metadata (24 kB)\n", | |
| "Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.10/dist-packages (from openai) (3.7.1)\n", | |
| "Requirement already satisfied: distro<2,>=1.7.0 in /usr/lib/python3/dist-packages (from openai) (1.7.0)\n", | |
| "Collecting httpx<1,>=0.23.0 (from openai)\n", | |
| " Downloading httpx-0.27.2-py3-none-any.whl.metadata (7.1 kB)\n", | |
| "Collecting jiter<1,>=0.4.0 (from openai)\n", | |
| " Downloading jiter-0.6.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.2 kB)\n", | |
| "Requirement already satisfied: pydantic<3,>=1.9.0 in /usr/local/lib/python3.10/dist-packages (from openai) (2.9.2)\n", | |
| "Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from openai) (1.3.1)\n", | |
| "Requirement already satisfied: tqdm>4 in /usr/local/lib/python3.10/dist-packages (from openai) (4.66.5)\n", | |
| "Requirement already satisfied: typing-extensions<5,>=4.11 in /usr/local/lib/python3.10/dist-packages (from openai) (4.12.2)\n", | |
| "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->openai) (3.10)\n", | |
| "Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->openai) (1.2.2)\n", | |
| "Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from httpx<1,>=0.23.0->openai) (2024.8.30)\n", | |
| "Collecting httpcore==1.* (from httpx<1,>=0.23.0->openai)\n", | |
| " Downloading httpcore-1.0.6-py3-none-any.whl.metadata (21 kB)\n", | |
| "Collecting h11<0.15,>=0.13 (from httpcore==1.*->httpx<1,>=0.23.0->openai)\n", | |
| " Downloading h11-0.14.0-py3-none-any.whl.metadata (8.2 kB)\n", | |
| "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.10/dist-packages (from pydantic<3,>=1.9.0->openai) (0.7.0)\n", | |
| "Requirement already satisfied: pydantic-core==2.23.4 in /usr/local/lib/python3.10/dist-packages (from pydantic<3,>=1.9.0->openai) (2.23.4)\n", | |
| "Downloading openai-1.51.2-py3-none-any.whl (383 kB)\n", | |
| "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m383.7/383.7 kB\u001b[0m \u001b[31m8.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", | |
| "\u001b[?25hDownloading httpx-0.27.2-py3-none-any.whl (76 kB)\n", | |
| "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.4/76.4 kB\u001b[0m \u001b[31m2.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", | |
| "\u001b[?25hDownloading httpcore-1.0.6-py3-none-any.whl (78 kB)\n", | |
| "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m78.0/78.0 kB\u001b[0m \u001b[31m3.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", | |
| "\u001b[?25hDownloading jiter-0.6.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (325 kB)\n", | |
| "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m325.2/325.2 kB\u001b[0m \u001b[31m14.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", | |
| "\u001b[?25hDownloading h11-0.14.0-py3-none-any.whl (58 kB)\n", | |
| "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m2.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", | |
| "\u001b[?25hInstalling collected packages: jiter, h11, httpcore, httpx, openai\n", | |
| "Successfully installed h11-0.14.0 httpcore-1.0.6 httpx-0.27.2 jiter-0.6.1 openai-1.51.2\n" | |
| ] | |
| } | |
| ], | |
| "source": [ | |
| "pip install openai\n" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "pip install python-dotenv" | |
| ], | |
| "metadata": { | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "id": "dtwUjrS2VbqJ", | |
| "outputId": "fdc04f63-ba4f-41a7-8e89-a7548ec41428" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "Collecting python-dotenv\n", | |
| " Downloading python_dotenv-1.0.1-py3-none-any.whl.metadata (23 kB)\n", | |
| "Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)\n", | |
| "Installing collected packages: python-dotenv\n", | |
| "Successfully installed python-dotenv-1.0.1\n" | |
| ] | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### Mount Google Drive" | |
| ], | |
| "metadata": { | |
| "id": "LiVhUlZfVsRP" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "from google.colab import drive\n", | |
| "drive.mount('/content/drive')" | |
| ], | |
| "metadata": { | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "id": "ERwSE0yEV4Rs", | |
| "outputId": "913032de-d09f-4378-9374-19fb0868e8f8" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "Mounted at /content/drive\n" | |
| ] | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### Load .env file" | |
| ], | |
| "metadata": { | |
| "id": "cc3ly5uAWIYc" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "from dotenv import load_dotenv\n", | |
| "load_dotenv('drive/My Drive/Colab Notebooks/env')" | |
| ], | |
| "metadata": { | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "id": "m5ycZHwQWNan", | |
| "outputId": "59a52ef6-50c7-4c15-c600-b3b3d5e212a0" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "execute_result", | |
| "data": { | |
| "text/plain": [ | |
| "True" | |
| ] | |
| }, | |
| "metadata": {}, | |
| "execution_count": 4 | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### Initialize OpenAI client" | |
| ], | |
| "metadata": { | |
| "id": "uWB5Q5NqXCYm" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "from openai import OpenAI\n", | |
| "import os\n", | |
| "\n", | |
| "# The client reads OPENAI_API_KEY from the environment by default;\n", | |
| "# passing it explicitly makes the dependency on the .env file clear.\n", | |
| "client = OpenAI(api_key=os.getenv(\"OPENAI_API_KEY\"))" | |
| ], | |
| "metadata": { | |
| "id": "NRygMPIVXDzp" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### Various variables and settings" | |
| ], | |
| "metadata": { | |
| "id": "rerQJqKxX_i0" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "gptModel=\"gpt-4o-mini\"\n", | |
| "chatCompletionChoices=1\n", | |
| "samplingTemperature=None\n", | |
| "maxCompletionTokens=100\n", | |
| "nucleusSampling=None" | |
| ], | |
| "metadata": { | |
| "id": "WmrMIHu-ZwNb" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### 1st prompt" | |
| ], | |
| "metadata": { | |
| "id": "A4nb_Yj2Z_1s" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "InitialSystemContent = \"You are a helpful teacher.\"\n", | |
| "#InitialSystemContent = \"You are an experienced sales copywriter. The name of my business is Future Mind Consulting, and we sell software development. Write a 200-word email that will persuade anyone who reads it to become a customer.\"\n" | |
| ], | |
| "metadata": { | |
| "id": "qGt6k0aqYxCt" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "FirstUserContent = \"Are there other measures than time complexity for an algorithm?\"\n", | |
| "#FirstUserContent = \"Serious\"" | |
| ], | |
| "metadata": { | |
| "id": "3Q13yuwTY4zt" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "chatMessages=[\n", | |
| " {\n", | |
| " \"role\": \"system\", \"content\": InitialSystemContent\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": FirstUserContent,\n", | |
| " },\n", | |
| "]" | |
| ], | |
| "metadata": { | |
| "id": "WL5KOl_KZUFB" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "print(chatMessages)" | |
| ], | |
| "metadata": { | |
| "id": "vgSDj-dyaVNe", | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "outputId": "43943cb9-e973-4831-b400-3a1db49e44bd" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "[{'role': 'system', 'content': 'You are a helpful teacher.'}, {'role': 'user', 'content': 'Are there other measures than time complexity for an algorithm?'}]\n" | |
| ] | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "response = client.chat.completions.create(model=gptModel, messages=chatMessages, n=chatCompletionChoices, temperature=samplingTemperature, max_completion_tokens=maxCompletionTokens, top_p=nucleusSampling)" | |
| ], | |
| "metadata": { | |
| "id": "EeCD7AUFZlrV" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "FirstAssistantContent = response.choices[0].message.content\n", | |
| "# Force assistant message to be something: FirstAssistantContent = \"Yes, there are other measures besides time complexity for an algorithm, such as space complexity.\"\n" | |
| ], | |
| "metadata": { | |
| "id": "aSHZU_1mbPJu" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "print(FirstAssistantContent)" | |
| ], | |
| "metadata": { | |
| "collapsed": true, | |
| "id": "7ppulUHVbb1W", | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "outputId": "fa13740e-f3bc-4cc9-f723-74eb20ab906f" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "Yes, in addition to time complexity, there are several other important measures and considerations for evaluating algorithms. Here are some of the key ones:\n", | |
| "\n", | |
| "1. **Space Complexity**: This measures the amount of memory an algorithm requires relative to the input size. It includes both the space needed for the input and additional memory used by the algorithm (e.g., for variables, function call stacks, and dynamically allocated memory).\n", | |
| "\n", | |
| "2. **Auxiliary Space Complexity**: This specifically measures the additional space used by the\n" | |
| ] | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### 2nd prompt" | |
| ], | |
| "metadata": { | |
| "id": "RFqIPlLuflNM" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "SecondUserContent = \"What is it?\"\n", | |
| "#SecondUserContent = \"It is for medical device industry\"" | |
| ], | |
| "metadata": { | |
| "id": "r_RsKgpPfqEq" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "chatMessages=[\n", | |
| " {\n", | |
| " \"role\": \"system\", \"content\": InitialSystemContent\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": FirstUserContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"assistant\",\n", | |
| " \"content\": FirstAssistantContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": SecondUserContent,\n", | |
| " },\n", | |
| "]" | |
| ], | |
| "metadata": { | |
| "id": "ZvDPMcZegS4h" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "print(chatMessages)" | |
| ], | |
| "metadata": { | |
| "id": "_MNGcNulhYNy", | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "outputId": "6f73c5fd-cceb-402a-b78a-67513698ae62" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "[{'role': 'system', 'content': 'You are a helpful teacher.'}, {'role': 'user', 'content': 'Are there other measures than time complexity for an algorithm?'}, {'role': 'assistant', 'content': 'Yes, in addition to time complexity, there are several other important measures and considerations for evaluating algorithms. Here are some of the key ones:\\n\\n1. **Space Complexity**: This measures the amount of memory an algorithm requires relative to the input size. It includes both the space needed for the input and additional memory used by the algorithm (e.g., for variables, function call stacks, and dynamically allocated memory).\\n\\n2. **Auxiliary Space Complexity**: This specifically measures the additional space used by the'}, {'role': 'user', 'content': 'What is it?'}]\n" | |
| ] | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "response = client.chat.completions.create(model=gptModel, messages=chatMessages, n=chatCompletionChoices, temperature=samplingTemperature, max_completion_tokens=maxCompletionTokens, top_p=nucleusSampling)" | |
| ], | |
| "metadata": { | |
| "id": "XCQaA_8chjk-" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "SecondAssistantContent = response.choices[0].message.content\n", | |
| "# Force assistant message to be something: SecondAssistantContent = \"<something>\"" | |
| ], | |
| "metadata": { | |
| "id": "io-1QG3jhq_l" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "print(SecondAssistantContent)" | |
| ], | |
| "metadata": { | |
| "id": "qJeh1LFthwQk", | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "outputId": "ea5d363a-82bd-43bf-c2db-86a012ee29de" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "Auxiliary space complexity refers to the extra space or temporary space needed by an algorithm to solve a problem, excluding the space taken by the input itself. It focuses on the additional space required beyond the input data. \n", | |
| "\n", | |
| "### Key Points:\n", | |
| "\n", | |
| "- **Definition**: It is the extra space an algorithm uses in addition to the input size.\n", | |
| "- **Why It Matters**: Understanding auxiliary space is important, especially for algorithms that might need extra data structures (like arrays, stacks, or queues) to hold\n" | |
| ] | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### 3rd prompt" | |
| ], | |
| "metadata": { | |
| "id": "n1OwZfAhkuBv" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "ThirdUserContent = \"<some content>\"" | |
| ], | |
| "metadata": { | |
| "id": "oJLEhjbSk8gE" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "chatMessages=[\n", | |
| " {\n", | |
| " \"role\": \"system\", \"content\": InitialSystemContent\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": FirstUserContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"assistant\",\n", | |
| " \"content\": FirstAssistantContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": SecondUserContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"assistant\",\n", | |
| " \"content\": SecondAssistantContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": ThirdUserContent,\n", | |
| " },\n", | |
| "]" | |
| ], | |
| "metadata": { | |
| "id": "IgcFKH1YlOu4" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "print(chatMessages)" | |
| ], | |
| "metadata": { | |
| "id": "hUY_FAiblngZ", | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "outputId": "5ff62d95-68f5-460c-aaf5-2234a3c775b4" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "[{'role': 'system', 'content': 'You are a helpful teacher.'}, {'role': 'user', 'content': 'Are there other measures than time complexity for an algorithm?'}, {'role': 'assistant', 'content': 'Yes, in addition to time complexity, there are several other important measures and considerations for evaluating algorithms. Here are some of the key ones:\\n\\n1. **Space Complexity**: This measures the amount of memory an algorithm requires relative to the input size. It includes both the space needed for the input and additional memory used by the algorithm (e.g., for variables, function call stacks, and dynamically allocated memory).\\n\\n2. **Auxiliary Space Complexity**: This specifically measures the additional space used by the'}, {'role': 'user', 'content': 'What is it?'}, {'role': 'assistant', 'content': 'Auxiliary space complexity refers to the extra space or temporary space needed by an algorithm to solve a problem, excluding the space taken by the input itself. It focuses on the additional space required beyond the input data. \\n\\n### Key Points:\\n\\n- **Definition**: It is the extra space an algorithm uses in addition to the input size.\\n- **Why It Matters**: Understanding auxiliary space is important, especially for algorithms that might need extra data structures (like arrays, stacks, or queues) to hold'}, {'role': 'user', 'content': '<some content>'}]\n" | |
| ] | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "response = client.chat.completions.create(model=gptModel, messages=chatMessages, n=chatCompletionChoices, temperature=samplingTemperature,max_completion_tokens=maxCompletionTokens, top_p=nucleusSampling)" | |
| ], | |
| "metadata": { | |
| "id": "dxvc-yxCl1--" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "ThirdAssistantContent = response.choices[0].message.content\n", | |
| "# Force assistant message to be something: ThirdAssistantContent = \"<something>\"" | |
| ], | |
| "metadata": { | |
| "id": "qrluUrmfmC7I" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "print(ThirdAssistantContent)" | |
| ], | |
| "metadata": { | |
| "id": "QFjLzydvmOmb", | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "outputId": "e5e6ab42-2136-400a-bad8-240b21c75385" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "It seems like your message might have been cut off. If you have additional content or a specific question in mind, please feel free to share! I'm here to help with any queries you have on algorithms, complexity measures, or any other topic.\n" | |
| ] | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "# Exercises" | |
| ], | |
| "metadata": { | |
| "id": "p1fwf3DB_lOx" | |
| } | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "When answering the questions below, you may benefit from these references:\n", | |
| "\n", | |
| "\n", | |
| "1. Chat completion parameters : https://platform.openai.com/docs/api-reference/chat/create\n", | |
| "2. ChatGPT Prompts Library: https://gptbot.io/chatgpt-prompts/\n", | |
| "3. Models overview: https://platform.openai.com/docs/models\n", | |
| "\n" | |
| ], | |
| "metadata": { | |
| "id": "XUSB9DnTBki3" | |
| } | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "1. Establish your own chat sequence with at most three prompts (unless you extend the notebook with additional prompts). You may start with a prompt from the 'ChatGPT Prompts Library' mentioned above.\n", | |
| "Answer: ?" | |
| ], | |
| "metadata": { | |
| "id": "bhE77NaXAwzS" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "# prompt: Establish your own chat sequence with max 3. prompts (unless you extend the notebook with additional prompts). You may start with a prompt from the 'ChatGPT Prompts Library' as mentioned above.\n", | |
| "# Answer: ?\n", | |
| "\n", | |
| "# ###Various variables and settings\n", | |
| "gptModel=\"gpt-4o-mini\"\n", | |
| "chatCompletionChoices=1\n", | |
| "samplingTemperature=None\n", | |
| "maxCompletionTokens=100\n", | |
| "nucleusSampling=None\n", | |
| "\n", | |
| "# ###1st prompt\n", | |
| "InitialSystemContent = \"You are a creative story writer.\"\n", | |
| "FirstUserContent = \"Write a short story about a talking cat who dreams of becoming a famous chef.\"\n", | |
| "chatMessages=[\n", | |
| " {\n", | |
| " \"role\": \"system\", \"content\": InitialSystemContent\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": FirstUserContent,\n", | |
| " },\n", | |
| "]\n", | |
| "print(chatMessages)\n", | |
| "response = client.chat.completions.create(model=gptModel, messages=chatMessages, n=chatCompletionChoices, temperature=samplingTemperature, max_completion_tokens=maxCompletionTokens, top_p=nucleusSampling)\n", | |
| "FirstAssistantContent = response.choices[0].message.content\n", | |
| "print(FirstAssistantContent)\n", | |
| "\n", | |
| "# ###2nd prompt\n", | |
| "SecondUserContent = \"What is the cat's name and what is his most famous dish?\"\n", | |
| "chatMessages=[\n", | |
| " {\n", | |
| " \"role\": \"system\", \"content\": InitialSystemContent\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": FirstUserContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"assistant\",\n", | |
| " \"content\": FirstAssistantContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": SecondUserContent,\n", | |
| " },\n", | |
| "]\n", | |
| "print(chatMessages)\n", | |
| "response = client.chat.completions.create(model=gptModel, messages=chatMessages, n=chatCompletionChoices, temperature=samplingTemperature, max_completion_tokens=maxCompletionTokens, top_p=nucleusSampling)\n", | |
| "SecondAssistantContent = response.choices[0].message.content\n", | |
| "print(SecondAssistantContent)\n", | |
| "\n", | |
| "# ###3rd prompt\n", | |
| "ThirdUserContent = \"Describe the cat's personality in detail.\"\n", | |
| "chatMessages=[\n", | |
| " {\n", | |
| " \"role\": \"system\", \"content\": InitialSystemContent\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": FirstUserContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"assistant\",\n", | |
| " \"content\": FirstAssistantContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": SecondUserContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"assistant\",\n", | |
| " \"content\": SecondAssistantContent,\n", | |
| " },\n", | |
| " {\n", | |
| " \"role\": \"user\",\n", | |
| " \"content\": ThirdUserContent,\n", | |
| " },\n", | |
| "]\n", | |
| "print(chatMessages)\n", | |
| "response = client.chat.completions.create(model=gptModel, messages=chatMessages, n=chatCompletionChoices, temperature=samplingTemperature,max_completion_tokens=maxCompletionTokens, top_p=nucleusSampling)\n", | |
| "ThirdAssistantContent = response.choices[0].message.content\n", | |
| "print(ThirdAssistantContent)\n" | |
| ], | |
| "metadata": { | |
| "colab": { | |
| "base_uri": "https://localhost:8080/" | |
| }, | |
| "id": "VqqRzfGJps6-", | |
| "outputId": "df2fbe4f-7236-48df-f2fd-f422abad7ee4" | |
| }, | |
| "execution_count": null, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "name": "stdout", | |
| "text": [ | |
| "[{'role': 'system', 'content': 'You are a creative story writer.'}, {'role': 'user', 'content': 'Write a short story about a talking cat who dreams of becoming a famous chef.'}]\n", | |
| "In the heart of a bustling city, tucked between the towering skyscrapers, lay a quaint little café called \"Paws & Pastries.\" It was known for its warm atmosphere and delicious treats, but unbeknownst to the patrons, it housed a rather extraordinary resident—a talking cat named Whiskers.\n", | |
| "\n", | |
| "Whiskers was no ordinary feline. With a coat as black as the midnight sky and piercing green eyes, he prided himself on his discerning palate. He spent his days lounging lazily on\n", | |
| "[{'role': 'system', 'content': 'You are a creative story writer.'}, {'role': 'user', 'content': 'Write a short story about a talking cat who dreams of becoming a famous chef.'}, {'role': 'assistant', 'content': 'In the heart of a bustling city, tucked between the towering skyscrapers, lay a quaint little café called \"Paws & Pastries.\" It was known for its warm atmosphere and delicious treats, but unbeknownst to the patrons, it housed a rather extraordinary resident—a talking cat named Whiskers.\\n\\nWhiskers was no ordinary feline. With a coat as black as the midnight sky and piercing green eyes, he prided himself on his discerning palate. He spent his days lounging lazily on'}, {'role': 'user', 'content': \"What is the cat's name and what is his most famous dish?\"}]\n", | |
| "The cat's name is Whiskers, and his most famous dish is \"Feline Florentine.\" This mouthwatering creation is a delicate blend of hand-rolled pasta, stuffed with a luscious mixture of fresh catnip, creamy ricotta cheese, and finely diced chicken, all smothered in a rich, velvety sauce infused with aromatic herbs. Topped with a sprinkle of parmesan and a garnish of edible flowers, the dish is a hit among humans and pets alike, drawing diners from\n", | |
| "[{'role': 'system', 'content': 'You are a creative story writer.'}, {'role': 'user', 'content': 'Write a short story about a talking cat who dreams of becoming a famous chef.'}, {'role': 'assistant', 'content': 'In the heart of a bustling city, tucked between the towering skyscrapers, lay a quaint little café called \"Paws & Pastries.\" It was known for its warm atmosphere and delicious treats, but unbeknownst to the patrons, it housed a rather extraordinary resident—a talking cat named Whiskers.\\n\\nWhiskers was no ordinary feline. With a coat as black as the midnight sky and piercing green eyes, he prided himself on his discerning palate. He spent his days lounging lazily on'}, {'role': 'user', 'content': \"What is the cat's name and what is his most famous dish?\"}, {'role': 'assistant', 'content': 'The cat\\'s name is Whiskers, and his most famous dish is \"Feline Florentine.\" This mouthwatering creation is a delicate blend of hand-rolled pasta, stuffed with a luscious mixture of fresh catnip, creamy ricotta cheese, and finely diced chicken, all smothered in a rich, velvety sauce infused with aromatic herbs. Topped with a sprinkle of parmesan and a garnish of edible flowers, the dish is a hit among humans and pets alike, drawing diners from'}, {'role': 'user', 'content': \"Describe the cat's personality in detail.\"}]\n", | |
| "Whiskers was a cat of many flavors, his personality a delightful blend of charisma, ambition, and each subtle spice that made him unique. At first glance, he appeared aloof, sitting regally atop the counter with an air of superiority. But beneath that calm exterior lay a vibrant spirit bursting with creativity and determination.\n", | |
| "\n", | |
| "He was exceptionally clever, always observing the world around him with those sparkling green eyes, ready to absorb ideas and inspiration. Whiskers had an innate curiosity, exploring every corner\n" | |
| ] | |
| } | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "2. Does it make any difference if you force the assistant's 'content' to be something other than what was received in the previous reply from ChatGPT? Technically, you can do this by replacing the assistant content with your own text at the commented-out assignments in the code above.\n", | |
| "\n", | |
| "Answer: Yes. You can technically override the assistant's reply simply by assigning the content you want to the variable. Because the overridden message is sent back as part of the conversation history, it steers the model's subsequent replies: you can force the assistant toward a particular type of answer or make it focus on different aspects of a question." | |
| ], | |
| "metadata": { | |
| "id": "mXU-PEtZA0Vt" | |
| } | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "3. Explain the 'model' parameter specified by the 'gptModel' variable.\n", | |
| "\n", | |
| "\n", | |
| "Answer: The 'model' parameter, set via the 'gptModel' variable, specifies which version or variant of the AI model is used to process the conversation or prompt. This is an important setting in systems such as OpenAI's API, because different models have different capabilities, costs, and performance characteristics. For example, OpenAI offers GPT-4, GPT-3.5, GPT-4o, and others, each with its own characteristics that make it suited to different tasks." | |
| ], | |
| "metadata": { | |
| "id": "m0fNJEjSG5Aq" | |
| } | |
| }, | |
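| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "As a minimal sketch of the 'model' parameter in practice, the cell below lists the model IDs available to the API key (it assumes the `client` initialized earlier in this notebook):" | |
| ], | |
| "metadata": {} | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "# List the model IDs this API key can use; gptModel must be one of them\n", | |
| "for m in client.models.list():\n", | |
| "    print(m.id)" | |
| ], | |
| "metadata": {}, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |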
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "4. Explain the 'messages' parameter specified by the 'chatMessages' variable. \n", | |
| "\n", | |
| "\n", | |
| "Answer:\n", | |
| "\n", | |
| "The 'messages' parameter, found in APIs such as OpenAI's chat models, structures the input as a conversation between a user and an AI assistant. It is an array of message objects that defines the context of the conversation and gives the assistant the information it needs to generate an appropriate reply.\n", | |
| "\n", | |
| "Each message in the messages array generally consists of the following key attributes:\n", | |
| "\n", | |
| "Role: Indicates the participant's role in the conversation. The typical values are:\n", | |
| "\n", | |
| "\"system\": Defines guidelines or instructions for how the assistant should behave, such as \"answer as concisely as possible\" or \"use a formal tone\".\n", | |
| "\n", | |
| "\"user\": Contains the user's input or question.\n", | |
| "\n", | |
| "\"assistant\": Represents the AI assistant's reply." | |
| ], | |
| "metadata": { | |
| "id": "MnyEfPT3HkRC" | |
| } | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "5. Explain the 'n' parameter specified by the 'chatCompletionChoices' variable. \n", | |
| "\n", | |
| "\n", | |
| "Answer: The 'n' parameter sets how many alternative completions the model returns for the same prompt, which is useful when you want several candidates for flexibility, creativity, or comparison. Note that requesting more completions can increase both the latency and the cost of the API call, depending on how many you ask for." | |
| ], | |
| "metadata": { | |
| "id": "W0ipFRpcH4-L" | |
| } | |
| }, | |
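| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "A minimal sketch of requesting several completions at once with 'n' (assumes the `client` and `gptModel` defined earlier in this notebook; the exact wording of the replies will vary):" | |
| ], | |
| "metadata": {} | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "# Ask for three alternative completions of the same prompt (n=3)\n", | |
| "response = client.chat.completions.create(\n", | |
| "    model=gptModel,\n", | |
| "    messages=[{\"role\": \"user\", \"content\": \"Name one measure of algorithm quality.\"}],\n", | |
| "    n=3,\n", | |
| "    max_completion_tokens=30,\n", | |
| ")\n", | |
| "# response.choices holds one entry per requested completion\n", | |
| "for i, choice in enumerate(response.choices):\n", | |
| "    print(i, choice.message.content)" | |
| ], | |
| "metadata": {}, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |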
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "6. Explain the 'temperature' parameter specified by the 'samplingTemperature' variable. \n", | |
| "\n", | |
| "\n", | |
| "Answer: The 'temperature' parameter, often called the sampling temperature, controls the degree of randomness and creativity in the model's replies. It affects how predictable or varied the output is by adjusting the probability of choosing different words or phrases during generation: low values make the output more deterministic, while high values make it more varied." | |
| ], | |
| "metadata": { | |
| "id": "Q2RxcWOzIVM3" | |
| } | |
| }, | |
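| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "A small sketch of the effect of 'temperature' (assumes the `client` and `gptModel` defined earlier in this notebook; the exact wording of the replies will vary between runs):" | |
| ], | |
| "metadata": {} | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "# temperature=0 is close to deterministic; temperature=1.2 is noticeably more varied\n", | |
| "for t in (0, 1.2):\n", | |
| "    r = client.chat.completions.create(\n", | |
| "        model=gptModel,\n", | |
| "        messages=[{\"role\": \"user\", \"content\": \"Give a one-line definition of an algorithm.\"}],\n", | |
| "        temperature=t,\n", | |
| "        max_completion_tokens=40,\n", | |
| "    )\n", | |
| "    print(t, \"->\", r.choices[0].message.content)" | |
| ], | |
| "metadata": {}, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |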
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "7. Explain the 'max_completion_tokens' parameter specified by the 'maxCompletionTokens' variable. \n", | |
| "\n", | |
| "\n", | |
| "Answer: The 'max_completion_tokens' parameter, set via the 'maxCompletionTokens' variable, is the maximum number of tokens the model may use to generate the reply (the completion). A token can be a word, part of a word, or a single character such as a period or a space. The token limit determines how long and detailed the reply can be; here it is set to 100, which is why several of the replies above are cut off mid-sentence." | |
| ], | |
| "metadata": { | |
| "id": "MwNggaKXIrFw" | |
| } | |
| }, | |
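| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "A sketch showing that a low 'max_completion_tokens' truncates the reply, which can be detected via `finish_reason` (assumes the `client` and `gptModel` defined earlier in this notebook):" | |
| ], | |
| "metadata": {} | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "# With only a handful of tokens allowed, the reply is cut off\n", | |
| "r = client.chat.completions.create(\n", | |
| "    model=gptModel,\n", | |
| "    messages=[{\"role\": \"user\", \"content\": \"Explain time complexity.\"}],\n", | |
| "    max_completion_tokens=10,\n", | |
| ")\n", | |
| "print(r.choices[0].message.content)\n", | |
| "# 'length' means the reply hit the token limit; 'stop' means it finished naturally\n", | |
| "print(r.choices[0].finish_reason)" | |
| ], | |
| "metadata": {}, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |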
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "8. Explain the 'top_p' parameter specified by the 'nucleusSampling' variable. \n", | |
| "\n", | |
| "\n", | |
| "Answer: The 'top_p' parameter, often referred to as nucleus sampling, controls how the model chooses words when generating a reply. Instead of sampling from the entire vocabulary, the model restricts its choices to the smallest set of candidate tokens whose cumulative probability reaches the top_p threshold, which tends to produce more natural and varied output." | |
| ], | |
| "metadata": { | |
| "id": "QW7wpSdKI_ed" | |
| } | |
| } | |
| ] | |
| } |