{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ur8xi4C7S06n"
},
"outputs": [],
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JAPoU8Sm5E6e"
},
"source": [
"# Get started with Chirp 3: Instant custom voice\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_chirp3_instant_custom_voice.ipynb\">\n",
" <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Faudio%2Fspeech%2Fgetting-started%2Fget_started_with_chirp3_instant_custom_voice.ipynb\">\n",
" <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/audio/speech/getting-started/get_started_with_chirp3_instant_custom_voice.ipynb\">\n",
" <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_chirp3_instant_custom_voice.ipynb\">\n",
" <img width=\"32px\" src=\"https://www.svgrepo.com/download/217753/github.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>\n",
"\n",
"<div style=\"clear: both;\"></div>\n",
"\n",
"<b>Share to:</b>\n",
"\n",
"<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_chirp3_instant_custom_voice.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_chirp3_instant_custom_voice.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_chirp3_instant_custom_voice.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_chirp3_instant_custom_voice.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_chirp3_instant_custom_voice.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
"</a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "84f0f73a0f76"
},
"source": [
"| Author(s) |\n",
"| --- |\n",
"| [Ivan Nardini](https://github.com/inardini) |\n",
"| [Gary Chien](https://github.com/goldenchest) |"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tvgnzT1CKxrO"
},
"source": [
"## Overview\n",
"\n",
"This notebook introduces [Chirp 3 Instant Custom Voice](https://cloud.google.com/text-to-speech/docs/chirp3-hd), a powerful feature of Google Cloud's Text-to-Speech (TTS) API that allows you to create personalized voice models.\n",
"\n",
"With Instant Custom Voice, you can generate unique, custom voices by training a model with your own high-quality audio recordings. This enables the rapid generation of personal voices that can then be used to synthesize audio using the Cloud TTS API, supporting both streaming and long-form text output. Instant Custom Voice creation and synthesis is supported in more than 25 language.\n",
"\n",
"In this tutorial, you will learn how to:\n",
"\n",
"- Create an Instant Custom Voice.\n",
"- Synthesize text using your custom voice both in real-time and streaming.\n",
"- Build a simple Gradio app to use your custom voice."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7qzYMYf2Hlvv"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ Due to safety considerations, access to this voice cloning capability is restricted to allow-listed users. To access this feature, contact a member of the Google Cloud team to be added to the allow list. ⚠️</b>\n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "61RBz8LLbxCR"
},
"source": [
"## Get started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1filYzhuK_MK"
},
"source": [
"### Install required packages\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-_O1r5MNLFu-"
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet gradio"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SxJiB0GXALQ2"
},
"source": [
"### Set Google Cloud project information\n",
"\n",
"To get started using the Text-to-Speech API, you must have an existing Google Cloud project and [enable the API](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com).\n",
"\n",
"Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment).\n",
"\n",
"Please note the **available regions** for Chirp 3 Instant Custom voice, see [documentation](https://cloud.google.com/text-to-speech/docs/chirp3-instant-custom-voice#regional_availability)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WIQyBhAn_9tK"
},
"outputs": [],
"source": [
"# Use the environment variable if the user doesn't provide Project ID.\n",
"import os\n",
"\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
"TTS_LOCATION = \"global\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PYVzRl1bmgjk"
},
"outputs": [],
"source": [
"! gcloud config set project {PROJECT_ID}\n",
"! gcloud auth application-default set-quota-project {PROJECT_ID}\n",
"! gcloud auth application-default login -q"
]
},
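{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the Text-to-Speech API is not yet enabled on your project, you can also enable it from the notebook. This assumes your account has permission to enable services (for example, the `roles/serviceusage.serviceUsageAdmin` role); otherwise, use the console link above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Enable the Text-to-Speech API (a no-op if it is already enabled).\n",
"! gcloud services enable texttospeech.googleapis.com --project {PROJECT_ID}"
]
},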
{
"cell_type": "markdown",
"metadata": {
"id": "JR7BVajtKyGh"
},
"source": [
"### Get Google Credentials\n",
"\n",
"Use the `google.auth` library to automatically find and load your credentials.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ChWHE3jIDREc"
},
"outputs": [],
"source": [
"import google.auth\n",
"import google.auth.transport.requests\n",
"\n",
"credentials, _ = google.auth.default()\n",
"authentication = google.auth.transport.requests.Request()\n",
"credentials.refresh(authentication)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5-X8IbjiAtM3"
},
"source": [
"### Import libraries\n",
"\n",
"Import Python tools you'll need."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qqm0OQpAYCph"
},
"outputs": [],
"source": [
"import base64\n",
"import json\n",
"import os\n",
"\n",
"from IPython.display import Audio, display\n",
"import gradio as gr\n",
"import numpy as np\n",
"import requests"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sP8GBj3tBAC1"
},
"source": [
"### Set constants\n",
"\n",
"Initiate the API endpoint and the text to speech client.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rXTVeU1uBBqY"
},
"outputs": [],
"source": [
"API_ENDPOINT = (\n",
" f\"{TTS_LOCATION}-texttospeech.googleapis.com\"\n",
" if TTS_LOCATION != \"global\"\n",
" else \"texttospeech.googleapis.com\"\n",
")\n",
"\n",
"ACCESS_TOKEN = credentials.token"
]
},
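{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `ACCESS_TOKEN` is a short-lived OAuth bearer token (it typically expires after about an hour). If you keep this notebook open for a long session and API calls start failing with `401` errors, refresh it. The helper below is a small sketch that reuses the `credentials` object loaded earlier."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def refresh_access_token() -> str:\n",
"    \"\"\"Refresh the application default credentials and return a new bearer token.\"\"\"\n",
"    credentials.refresh(google.auth.transport.requests.Request())\n",
"    return credentials.token\n",
"\n",
"\n",
"# Re-run this line if API calls start returning 401 errors:\n",
"# ACCESS_TOKEN = refresh_access_token()"
]
},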
{
"cell_type": "markdown",
"metadata": {
"id": "M7WQQFp_RvGH"
},
"source": [
"### Helpers\n",
"\n",
"To keep our main logic clean and readable, you define several helper functions here. These encapsulate tasks like making API calls, processing audio data, and interacting with the Gradio interface components.\n",
"\n",
"* **`create_instant_custom_voice_key(reference_audio_bytes, consent_audio_bytes)`** to create a temporary custom voice key.\n",
"\n",
"* **`create_voice_with_masking(reference_audio, consent_audio)`** to mask the custom voice key.\n",
"\n",
"* **`synthesize_text_with_cloned_voice(voice_key, text)`** to create your custom voice using the `voice_cloning_key` and the desired text as input.\n",
"\n",
"* **`wav_to_base64(file_path)`** to read a WAV audio file from a local path, encode its binary content into a base64 string (which is how audio data is sent in the JSON payload), and return the string.\n",
"\n",
"* **`create_voice(reference_audio, consent_audio, progress)`** to create the custom voice in the Gradio app using \"Create Voice\" button.\n",
"\n",
"* **`generate_speech(voice_key, text, progress)`**: to synthesize any text with the custom voice using \"Generate Speech\" button in the Gradio app.\n",
"\n",
"* **`reset_interface()`** to effectively reset the Gradio UI."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pWrNWdV_RxGn"
},
"outputs": [],
"source": [
"def create_instant_custom_voice_key(\n",
" reference_audio_bytes: bytes, consent_audio_bytes: bytes\n",
") -> str:\n",
" \"\"\"Creates a temporary custom voice key\"\"\"\n",
"\n",
" url = f\"https://{API_ENDPOINT}/v1beta1/voices:generateVoiceCloningKey\"\n",
"\n",
" request_body = {\n",
" \"reference_audio\": {\n",
" \"audio_config\": {\"audio_encoding\": \"LINEAR16\", \"sample_rate_hertz\": 24000},\n",
" \"content\": reference_audio_bytes,\n",
" },\n",
" \"voice_talent_consent\": {\n",
" \"audio_config\": {\"audio_encoding\": \"LINEAR16\", \"sample_rate_hertz\": 24000},\n",
" \"content\": consent_audio_bytes,\n",
" },\n",
" \"consent_script\": \"I am the owner of this voice and I consent to Google using this voice to create a synthetic voice model.\",\n",
" \"language_code\": \"en-US\",\n",
" }\n",
"\n",
" response = None\n",
" try:\n",
" headers = {\n",
" \"Authorization\": f\"Bearer {ACCESS_TOKEN}\",\n",
" \"x-goog-user-project\": PROJECT_ID,\n",
" \"Content-Type\": \"application/json; charset=utf-8\",\n",
" }\n",
"\n",
" response = requests.post(url, headers=headers, json=request_body)\n",
" response.raise_for_status()\n",
"\n",
" response_json = response.json()\n",
" return response_json.get(\"voiceCloningKey\")\n",
"\n",
" except requests.exceptions.RequestException as e:\n",
" print(f\"Error making API request: {e}\")\n",
" if response is not None and response.text:\n",
" print(\"Response error message:\")\n",
" print(response.text)\n",
" except json.JSONDecodeError as e:\n",
" print(f\"Error decoding JSON response: {e}\")\n",
" except Exception as e:\n",
" print(f\"An unexpected error occurred: {e}\")\n",
"\n",
"\n",
"def synthesize_text_with_cloned_voice(voice_key: str, text: str) -> None:\n",
" \"\"\"Synthesizes text with the cloned voice\"\"\"\n",
"\n",
" url = f\"https://{API_ENDPOINT}/v1beta1/text:synthesize\"\n",
"\n",
" request_body = {\n",
" \"input\": {\"text\": text},\n",
" \"voice\": {\n",
" \"language_code\": \"en-US\",\n",
" \"voice_clone\": {\n",
" \"voice_cloning_key\": voice_key,\n",
" },\n",
" },\n",
" \"audioConfig\": {\"audioEncoding\": \"LINEAR16\", \"sample_rate_hertz\": 24000},\n",
" }\n",
"\n",
" try:\n",
" headers = {\n",
" \"Authorization\": f\"Bearer {ACCESS_TOKEN}\",\n",
" \"x-goog-user-project\": PROJECT_ID,\n",
" \"Content-Type\": \"application/json; charset=utf-8\",\n",
" }\n",
"\n",
" response = requests.post(url, headers=headers, json=request_body)\n",
" response.raise_for_status()\n",
"\n",
" response_json = response.json()\n",
" audio_content = response_json.get(\"audioContent\")\n",
"\n",
" if audio_content:\n",
" display(Audio(base64.b64decode(audio_content), rate=24000))\n",
" else:\n",
" print(\"Error: Audio content not found in the response.\")\n",
" print(response_json)\n",
"\n",
" except requests.exceptions.RequestException as e:\n",
" print(f\"Error making API request: {e}\")\n",
" except json.JSONDecodeError as e:\n",
" print(f\"Error decoding JSON response: {e}\")\n",
" except Exception as e:\n",
" print(f\"An unexpected error occurred: {e}\")\n",
"\n",
"\n",
"def wav_to_base64(file_path: str) -> str:\n",
" \"\"\"Convert a WAV file to base64 encoded string\"\"\"\n",
" try:\n",
" with open(file_path, \"rb\") as wav_file:\n",
" encoded_string = base64.b64encode(wav_file.read()).decode(\"utf-8\")\n",
" return encoded_string\n",
" except FileNotFoundError:\n",
" print(f\"Error: File not found at {file_path}\")\n",
" return None\n",
" except Exception as e:\n",
" print(f\"An error occurred: {e}\")\n",
" return None\n",
"\n",
"\n",
"def create_voice(\n",
" reference_audio: gr.Audio,\n",
" consent_audio: gr.Audio,\n",
" progress: gr.Progress | None = None,\n",
") -> str:\n",
" \"\"\"Create a custom voice using reference and consent audio\"\"\"\n",
" if reference_audio is None or consent_audio is None:\n",
" return \"Please upload both reference and consent audio files.\"\n",
"\n",
" if not progress:\n",
" progress = gr.Progress()\n",
"\n",
" progress(0.2, desc=\"Processing audio files...\")\n",
" reference_audio_b64 = wav_to_base64(reference_audio)\n",
" consent_audio_b64 = wav_to_base64(consent_audio)\n",
"\n",
" if reference_audio_b64 is None or consent_audio_b64 is None:\n",
" return \"Error processing audio files.\"\n",
"\n",
" progress(0.5, desc=\"Creating voice clone...\")\n",
" voice_key = create_instant_custom_voice_key(reference_audio_b64, consent_audio_b64)\n",
"\n",
" if voice_key:\n",
" progress(1.0, desc=\"Voice created successfully!\")\n",
" return voice_key\n",
" else:\n",
" return \"Failed to create voice. Check the logs for details.\"\n",
"\n",
"\n",
"def create_voice_with_masking(\n",
" reference_audio: gr.Audio, consent_audio: gr.Audio\n",
") -> tuple:\n",
" \"\"\"A wrapper function for create_voice to handle masking\"\"\"\n",
" key = create_voice(reference_audio, consent_audio)\n",
" if key and len(key) > 8:\n",
" masked_key = key[:5] + \"...\"\n",
" else:\n",
" masked_key = key\n",
" return key, masked_key\n",
"\n",
"\n",
"def generate_speech(\n",
" voice_key: str, text: str, progress: gr.Progress | None = None\n",
") -> tuple:\n",
" \"\"\"Generate speech using the cloned voice\"\"\"\n",
" if not voice_key or not text:\n",
" return None, \"Please create a voice key and enter text to synthesize.\"\n",
"\n",
" if not progress:\n",
" progress = gr.Progress()\n",
"\n",
" progress(0.3, desc=\"Generating speech...\")\n",
"\n",
" try:\n",
" url = f\"https://{API_ENDPOINT}/v1beta1/text:synthesize\"\n",
"\n",
" request_body = {\n",
" \"input\": {\"text\": text},\n",
" \"voice\": {\n",
" \"language_code\": \"en-US\",\n",
" \"voice_clone\": {\n",
" \"voice_cloning_key\": voice_key,\n",
" },\n",
" },\n",
" \"audioConfig\": {\"audioEncoding\": \"LINEAR16\", \"sample_rate_hertz\": 24000},\n",
" }\n",
"\n",
" headers = {\n",
" \"Authorization\": f\"Bearer {ACCESS_TOKEN}\",\n",
" \"x-goog-user-project\": PROJECT_ID,\n",
" \"Content-Type\": \"application/json; charset=utf-8\",\n",
" }\n",
"\n",
" progress(0.6, desc=\"Processing audio...\")\n",
" response = requests.post(url, headers=headers, json=request_body)\n",
" response.raise_for_status()\n",
"\n",
" response_json = response.json()\n",
" audio_content = response_json.get(\"audioContent\")\n",
"\n",
" if audio_content:\n",
" progress(1.0, desc=\"Speech generated!\")\n",
" audio_bytes = base64.b64decode(audio_content)\n",
" audio_array = np.frombuffer(audio_bytes, dtype=np.int16)\n",
" return (24000, audio_array), \"Speech generated successfully!\"\n",
" else:\n",
" return None, \"Error: Audio content not found in the response.\"\n",
"\n",
" except Exception as e:\n",
" return None, f\"Error generating speech: {str(e)}\"\n",
"\n",
"\n",
"def reset_interface() -> tuple:\n",
" return None, None, \"\", \"\", None, None, \"Interface reset.\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EdvJRUWRNGHE"
},
"source": [
"# Create an Instant Custom Voice\n",
"\n",
"Let's start by creating the custom voice directly.\n",
"\n",
"You would first define the file paths to your pre-recorded reference audio (`.wav` recommended, ideally a few seconds clear speech) and the consent audio (where the speaker explicitly states the consent script).\n",
"\n",
"Then you use `wav_to_base64` to read these files and encode them into the base64 format required by the API and you create your custom voice using the `create_instant_custom_voice_key` helper function, passing in the base64-encoded audio data.\n",
"\n",
"If the request is successful and your project is allow-listed, the API returns a `voice_cloning_key` which acts as a temporary identifier for your custom voice.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "S_J-TGJQNuKq"
},
"outputs": [],
"source": [
"reference_audio_path = \"[your-reference-audio-path]\" # @param {type: \"string\", placeholder: \"[your-reference-audio-path]\", isTemplate: true}\n",
"consent_audio_path = \"[your-consent-audio-path]\" # @param {type: \"string\", placeholder: \"[your-consent-audio-path]\", isTemplate: true}"
]
},
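{
"cell_type": "markdown",
"metadata": {},
"source": [
"The request body in `create_instant_custom_voice_key` declares the audio as 16-bit PCM (`LINEAR16`) at 24,000 Hz, so it is worth confirming that your recordings match before uploading. The check below is a minimal sketch using Python's standard-library `wave` module, and it assumes uncompressed PCM WAV files."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import wave\n",
"\n",
"\n",
"def check_wav_format(file_path: str, expected_rate: int = 24000) -> None:\n",
"    \"\"\"Print basic WAV header info and flag mismatches with the request config.\"\"\"\n",
"    with wave.open(file_path, \"rb\") as wav_file:\n",
"        rate = wav_file.getframerate()\n",
"        width = wav_file.getsampwidth()  # bytes per sample; 2 == 16-bit PCM\n",
"        channels = wav_file.getnchannels()\n",
"        duration = wav_file.getnframes() / rate\n",
"    print(f\"{file_path}: {rate} Hz, {8 * width}-bit, {channels} channel(s), {duration:.1f}s\")\n",
"    if rate != expected_rate or width != 2:\n",
"        print(f\"  Warning: expected 16-bit PCM at {expected_rate} Hz; consider resampling.\")\n",
"\n",
"\n",
"check_wav_format(reference_audio_path)\n",
"check_wav_format(consent_audio_path)"
]
},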
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "SCdVWsGOA3qW"
},
"outputs": [],
"source": [
"reference_audio_bytes = wav_to_base64(reference_audio_path)\n",
"consent_audio_bytes = wav_to_base64(consent_audio_path)\n",
"\n",
"voice_key = create_instant_custom_voice_key(reference_audio_bytes, consent_audio_bytes)"
]
},
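{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the call succeeds, `voice_key` holds the cloning key; if it is `None`, check the error output above (a `403` response often means the project is not allow-listed). You can print a masked prefix to confirm success without exposing the full key."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if voice_key:\n",
"    print(f\"Voice key created: {voice_key[:5]}...\")\n",
"else:\n",
"    print(\"No voice key returned. Check the error output above.\")"
]
},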
{
"cell_type": "markdown",
"metadata": {
"id": "rPqteN879IL_"
},
"source": [
"# Use Instant Custom Voice\n",
"\n",
"Now that you (theoretically) have a `voice_key`, let's use it to synthesize speech."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JGAZb4Zf9Me-"
},
"source": [
"## Perform a Sync Request\n",
"\n",
"Define the text you want your custom voice to say in the `text_to_synthesize` variable. Try different sentences!\n",
"\n",
"Then, call the `synthesize_text_with_cloned_voice` helper function, providing the `voice_key` you obtained earlier and the text. This function sends the request to the `text:synthesize` endpoint, specifying your custom voice.\n",
"\n",
"If successful, it gets the audio data back, decodes it, and should play it directly in the notebook output below the cell using an embedded audio player."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "djLao152cR13"
},
"outputs": [],
"source": [
"text_to_synthesize = \"\"\"\n",
"Breaking news! Chirp 3, Google Cloud's audio model, now has Instant Custom Voice.\n",
"\n",
"With Instant Custom Voice, you can generate custom voices with just 10 seconds of audio to empower your AI narration.\n",
"Chirp 3 Instant Custom Voice is now available in preview with allowlist. Check out the link below.\n",
"\n",
"And yes, this voice is generated using Chirp 3 Instant Custom Voice!\n",
"\"\"\"\n",
"synthesize_text_with_cloned_voice(voice_key, text_to_synthesize)"
]
},
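{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perform a Streaming Request\n",
"\n",
"The sync endpoint returns the whole utterance at once. For lower-latency use cases, audio can also be streamed back in chunks as it is generated, via the bidirectional `StreamingSynthesize` API, which is exposed through the `google-cloud-texttospeech` client library rather than REST. The cell below is a minimal sketch: it assumes a recent library version that supports `streaming_synthesize` and `VoiceCloneParams` (install it with `%pip install --upgrade --quiet google-cloud-texttospeech` first), and that the stream returns raw 16-bit PCM at 24,000 Hz. Check the library reference for the exact request shapes in your version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.cloud import texttospeech_v1beta1 as texttospeech\n",
"\n",
"client = texttospeech.TextToSpeechClient()\n",
"\n",
"# The first request on the stream carries only the config (voice selection);\n",
"# each following request carries a chunk of text to synthesize.\n",
"streaming_config = texttospeech.StreamingSynthesizeConfig(\n",
"    voice=texttospeech.VoiceSelectionParams(\n",
"        language_code=\"en-US\",\n",
"        voice_clone=texttospeech.VoiceCloneParams(voice_cloning_key=voice_key),\n",
"    )\n",
")\n",
"\n",
"\n",
"def request_generator():\n",
"    yield texttospeech.StreamingSynthesizeRequest(streaming_config=streaming_config)\n",
"    yield texttospeech.StreamingSynthesizeRequest(\n",
"        input=texttospeech.StreamingSynthesisInput(text=text_to_synthesize)\n",
"    )\n",
"\n",
"\n",
"# Collect the PCM chunks as they arrive; a real app would play them incrementally.\n",
"audio_bytes = b\"\".join(\n",
"    response.audio_content\n",
"    for response in client.streaming_synthesize(requests=request_generator())\n",
")\n",
"display(Audio(np.frombuffer(audio_bytes, dtype=np.int16), rate=24000))"
]
},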
{
"cell_type": "markdown",
"metadata": {
"id": "3X0qY28fLSfz"
},
"source": [
"# Build a simple Instant custom voice app\n",
"\n",
"While calling the API directly works, it's often more convenient to have an interactive interface. Let's build a simple web app using Gradio to easily upload audio, create a voice, and synthesize text."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uXoJqKduNhzq"
},
"source": [
"### Define the app\n",
"\n",
"Here, you define the structure and components of our Gradio user interface."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jMISBY0fSDFy"
},
"outputs": [],
"source": [
"with gr.Blocks(\n",
" theme=gr.themes.Default(\n",
" primary_hue=\"blue\", secondary_hue=\"blue\", neutral_hue=\"gray\"\n",
" )\n",
") as app:\n",
"\n",
" # Create a state variable to store the full voice key\n",
" full_voice_key = gr.State(\"\")\n",
"\n",
" # Define title\n",
" gr.Markdown(\n",
" \"\"\"\n",
" # Chirp 3 - Instant custom voice demo\n",
" Upload reference and consent audio files to create a custom voice, then synthesize speech.\n",
" \"\"\"\n",
" )\n",
"\n",
" # Define input and output components\n",
" with gr.Row():\n",
" with gr.Column(scale=1):\n",
" reference_audio = gr.Audio(\n",
" label=\"Reference Voice\", type=\"filepath\", elem_id=\"reference_audio\"\n",
" )\n",
" consent_audio = gr.Audio(\n",
" label=\"Consent Audio\", type=\"filepath\", elem_id=\"consent_audio\"\n",
" )\n",
"\n",
" create_btn = gr.Button(\"Create Voice\", variant=\"primary\")\n",
" voice_key_output = gr.Textbox(label=\"Voice Key\", elem_id=\"voice_key\")\n",
"\n",
" with gr.Column(scale=1):\n",
" text_input = gr.Textbox(\n",
" label=\"Text to Synthesize\",\n",
" placeholder=\"Enter the text you want the voice to say...\",\n",
" lines=5,\n",
" elem_id=\"text_input\",\n",
" )\n",
" generate_btn = gr.Button(\"Generate Speech\", variant=\"primary\")\n",
" audio_output = gr.Audio(label=\"Generated Audio\", elem_id=\"audio_output\")\n",
" status_output = gr.Textbox(label=\"Status\", elem_id=\"status_output\")\n",
"\n",
" with gr.Row():\n",
" clear_btn = gr.Button(\"Clear All\", variant=\"secondary\")\n",
"\n",
" # Set up event handlers\n",
" create_btn.click(\n",
" create_voice_with_masking,\n",
" inputs=[reference_audio, consent_audio],\n",
" outputs=[full_voice_key, voice_key_output],\n",
" )\n",
"\n",
" generate_btn.click(\n",
" generate_speech,\n",
" inputs=[full_voice_key, text_input],\n",
" outputs=[audio_output, status_output],\n",
" )\n",
"\n",
" clear_btn.click(\n",
" reset_interface,\n",
" inputs=[],\n",
" outputs=[\n",
" reference_audio,\n",
" consent_audio,\n",
" voice_key_output,\n",
" full_voice_key,\n",
" text_input,\n",
" audio_output,\n",
" status_output,\n",
" ],\n",
" )\n",
"\n",
" # Apply custom CSS for Google styling\n",
" gr.Markdown(\n",
" \"\"\"\n",
" <style>\n",
" .gradio-container {\n",
" font-family: 'Google Sans', 'Roboto', sans-serif !important;\n",
" }\n",
" .gr-button-primary {\n",
" background-color: #4285F4 !important;\n",
" }\n",
" .gr-button-secondary {\n",
" border-color: #4285F4 !important;\n",
" color: #4285F4 !important;\n",
" }\n",
" h1 {\n",
" font-family: 'Google Sans', 'Roboto', sans-serif !important;\n",
" color: #202124 !important;\n",
" }\n",
" </style>\n",
" \"\"\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GpCcdY4ANkRl"
},
"source": [
"### Launch the app\n",
"\n",
"Showtime! This cell launches the Gradio app we just defined."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YJ2-accJL_tn"
},
"outputs": [],
"source": [
"app.launch(share=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xV-Nj4F7mPNF"
},
"source": [
"Close the app once you finish to play with it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UTPbraYfNeJk"
},
"outputs": [],
"source": [
"app.close()"
]
}
],
"metadata": {
"colab": {
"name": "get_started_with_chirp3_instant_custom_voice.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}