Sends a chat prompt to the OpenAI API using streaming mode and reconstructs the full response by concatenating incoming chunks.
from openai import OpenAI

client = OpenAI()

# Request a streamed chat completion so the response arrives in chunks.
stream = client.chat.completions.create(
    model="gpt-4-1106-preview",  # or "gpt-4.1" if that's valid in your setup
    messages=[
        {
            "role": "user",
            "content": "Say 'double bubble bath' ten times fast.",
        },
    ],
    stream=True,
)

# Accumulate each streamed content delta into the full response string.
full_response = ""
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        full_response += chunk.choices[0].delta.content

print(full_response)

# Sample output
#
# I'm an AI text-based model, so I can't physically say things aloud. However, you're welcome to give it a try! Rapid repetition of "Double bubble bath" can be quite a tongue twister.
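
If you want output to appear as it is generated rather than after the whole response has been buffered, the same stream can be printed incrementally. This is a minimal sketch, assuming the same openai Python client (v1+) and the same model name as above:

from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4-1106-preview",  # assumed; substitute whichever model your account supports
    messages=[{"role": "user", "content": "Say 'double bubble bath' ten times fast."}],
    stream=True,
)

# Print each content delta as soon as it arrives instead of concatenating first.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()

The trade-off is simply display latency: buffering (as in the gist) gives you the complete string to reuse, while incremental printing shows tokens live but leaves you to collect them yourself if you also need the full text.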