Better Auth

Better Auth is a comprehensive, framework-agnostic authentication and authorization library for TypeScript that provides enterprise-grade features through a flexible plugin architecture. It handles everything from basic email/password authentication to advanced features: multi-tenancy, two-factor authentication, passkeys/WebAuthn, phone number and username authentication, anonymous sessions, API keys, JWT tokens, Sign-In with Ethereum (SIWE), OAuth proxying, SSO/SAML, SCIM (System for Cross-domain Identity Management) provisioning, Stripe integration, OpenID Connect (OIDC) provider capabilities, Model Context Protocol (MCP) OAuth authentication, captcha verification, Google One Tap, OpenAPI documentation generation, and last login method tracking. The library supports multiple database adapters (Prisma, Drizzle, MongoDB, Memory); integrates seamlessly with Next.js, SvelteKit, Solid Start, React Start (TanStack Start), and Expo/React Native; and provides type-safe client libraries for React, Vue, Svelte, Solid, and Lynx.

The architecture centers on a server-side betterAuth() instance that handles authentication logic and exposes REST API endpoints, paired with framework-specific client libraries that provide type-safe methods and hooks for managing authentication state. Plugins extend the core by adding endpoints, database schemas, hooks, and middleware, so each of the features above, from organizations and two-factor authentication to the OIDC provider, Stripe billing, and SCIM provisioning, can be enabled with minimal configuration. The system is designed for production use, with built-in rate limiting, CSRF protection, secure cookie handling, password breach checking via HaveIBeenPwned integration, and comprehensive error handling.

Initialize Server Authentication

Creates a Better Auth server instance with database, social providers, and plugins.

import { betterAuth } from "better-auth";
import { drizzleAdapter } from "better-auth/adapters/drizzle";
import { organization, twoFactor } from "better-auth/plugins";
import { passkey } from "@better-auth/passkey";
import { db } from "./db";
import * as schema from "./schema"; // Drizzle schema, passed to the adapter below
import { sendEmail } from "./email"; // your own mailer, used in the callbacks below

export const auth = betterAuth({
  appName: "My Application",
  baseURL: process.env.BETTER_AUTH_URL || "http://localhost:3000",
  secret: process.env.BETTER_AUTH_SECRET,

  database: drizzleAdapter(db, {
    provider: "pg",
    schema: schema,
  }),

  emailAndPassword: {
    enabled: true,
    minPasswordLength: 8,
    async sendResetPassword({ user, url, token }) {
      await sendEmail({
        to: user.email,
        subject: "Reset your password",
        html: `Click <a href="${url}">here</a> to reset your password`,
      });
    },
  },

  emailVerification: {
    async sendVerificationEmail({ user, url, token }) {
      await sendEmail({
        to: user.email,
        subject: "Verify your email",
        html: `Click <a href="${url}">here</a> to verify`,
      });
    },
    sendOnSignUp: true,
    autoSignInAfterVerification: true,
  },

  socialProviders: {
    github: {
      clientId: process.env.GITHUB_CLIENT_ID!,
      clientSecret: process.env.GITHUB_CLIENT_SECRET!,
    },
    google: {
      clientId: process.env.GOOGLE_CLIENT_ID!,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
      accessType: "offline",
    },
  },

  session: {
    expiresIn: 60 * 60 * 24 * 7, // 7 days
    updateAge: 60 * 60 * 24, // 1 day
    freshAge: 60 * 10, // 10 minutes
  },

  rateLimit: {
    enabled: true,
    window: 60,
    max: 10,
  },

  plugins: [
    organization(),
    twoFactor({
      issuer: "My App",
    }),
    passkey(),
  ],
});

export type Session = typeof auth.$Infer.Session;
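
The inferred Session type can be reused anywhere session data is passed around. A minimal sketch (greetUser is an illustrative helper, not part of the library):

import type { Session } from "./auth";

// Accepts the full inferred shape: session.user and session.session.
function greetUser(session: Session) {
  return `Welcome back, ${session.user.name}!`;
}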

Create Authentication API Routes

Exposes Better Auth REST endpoints in your application framework.

// Next.js App Router: app/api/auth/[...all]/route.ts
import { auth } from "@/lib/auth";
import { toNextJsHandler } from "better-auth/next-js";

export const { GET, POST } = toNextJsHandler(auth);

// SvelteKit: src/routes/api/auth/[...all]/+server.ts
import { auth } from "$lib/auth";
import { svelteKitHandler } from "better-auth/svelte-kit";

export const { GET, POST } = svelteKitHandler(auth);

// Solid Start: src/routes/api/auth/[...all].ts
import { auth } from "~/lib/auth";
import { solidStartHandler } from "better-auth/solid-start";

export const { GET, POST } = solidStartHandler(auth);

// React Start (TanStack Start)
import { auth } from "~/lib/auth";
import { reactStartHandler } from "better-auth/react-start";

export const { GET, POST } = reactStartHandler(auth);

// Express.js
import express from "express";
import { auth } from "./auth";

const app = express();

// Parse JSON bodies so req.body is populated for the re-serialization below.
app.use(express.json());

app.all("/api/auth/*", async (req, res) => {
  const response = await auth.handler(
    new Request(`${req.protocol}://${req.get("host")}${req.originalUrl}`, {
      method: req.method,
      headers: req.headers as HeadersInit,
      body: req.method !== "GET" ? JSON.stringify(req.body) : undefined,
    })
  );

  res.status(response.status);
  response.headers.forEach((value, key) => res.setHeader(key, value));
  res.send(await response.text());
});
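
For Express routes outside Better Auth's handler, the same auth.api.getSession call shown in later sections works too. A minimal sketch, assuming cookie-based sessions (the /api/me route is illustrative):

// Protect a plain Express route by validating the session cookie.
app.get("/api/me", async (req, res) => {
  const headers = new Headers();
  if (req.headers.cookie) headers.set("cookie", req.headers.cookie);

  const session = await auth.api.getSession({ headers });
  if (!session) return res.status(401).json({ error: "Unauthorized" });

  res.json({ user: session.user });
});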

Initialize Client SDK

Creates a type-safe client for authentication operations with framework-specific hooks.

// React Client
import { createAuthClient } from "better-auth/react";
import {
  organizationClient,
  twoFactorClient,
} from "better-auth/client/plugins";
import { passkeyClient } from "@better-auth/passkey/client";

export const authClient = createAuthClient({
  baseURL: process.env.NEXT_PUBLIC_APP_URL,

  plugins: [
    organizationClient(),
    twoFactorClient({
      twoFactorPage: "/two-factor",
      onTwoFactorRedirect() {
        window.location.href = "/two-factor";
      },
    }),
    passkeyClient(),
  ],

  fetchOptions: {
    onError(error) {
      if (error.error.status === 429) {
        toast.error("Too many requests. Please try again later.");
      } else if (error.error.status === 401) {
        console.error("Unauthorized");
      }
    },
    onSuccess(data) {
      console.log("Auth action successful:", data);
    },
  },
});

export const {
  signIn,
  signUp,
  signOut,
  useSession,
  organization,
  twoFactor,
  passkey,
} = authClient;

// Vue Client
import { createAuthClient } from "better-auth/vue";
export const authClient = createAuthClient({ /* config */ });

// Svelte Client
import { createAuthClient } from "better-auth/svelte";
export const authClient = createAuthClient({ /* config */ });

// Solid Client
import { createAuthClient } from "better-auth/solid";
export const authClient = createAuthClient({ /* config */ });

// Lynx Client
import { createAuthClient } from "better-auth/lynx";
export const authClient = createAuthClient({ /* config */ });

User Registration and Sign In

Handles email/password authentication with validation and session management.

import { authClient } from "./auth-client";

// Email/Password Sign Up
try {
  const { data, error } = await authClient.signUp.email({
    name: "John Doe",
    email: "john@example.com",
    password: "SecurePassword123!",
    image: "https://example.com/avatar.jpg", // optional
    callbackURL: "/dashboard", // redirect after sign up
  });

  if (error) {
    console.error("Sign up failed:", error.message);
    return;
  }

  console.log("User created:", data.user);
  console.log("Session created:", data.session);
} catch (err) {
  console.error("Unexpected error:", err);
}

// Email/Password Sign In
const { data, error } = await authClient.signIn.email({
  email: "john@example.com",
  password: "SecurePassword123!",
  rememberMe: true, // extends session duration
  callbackURL: "/dashboard",
});

// Sign Out
await authClient.signOut({
  fetchOptions: {
    onSuccess() {
      window.location.href = "/";
    },
  },
});

// Sign Out from All Devices
await authClient.signOut({
  revokeAllSessions: true,
});

Social OAuth Authentication

Initiates OAuth flows with third-party providers for passwordless sign-in.

import { authClient } from "./auth-client";

// GitHub OAuth Sign In
await authClient.signIn.social({
  provider: "github",
  callbackURL: "/dashboard",
});

// Google OAuth Sign In
await authClient.signIn.social({
  provider: "google",
  callbackURL: "/dashboard",
});

// Discord, Microsoft, Apple, etc.
await authClient.signIn.social({
  provider: "discord",
  callbackURL: "/dashboard",
});

// Link Additional Social Account
await authClient.linkSocial({
  provider: "github",
  callbackURL: "/settings/accounts",
});

// Unlink Social Account
await authClient.unlinkAccount({
  accountId: "acc_123456",
});

// List All Linked Accounts
const { data } = await authClient.listAccounts();
data?.forEach(account => {
  console.log(`${account.provider}: ${account.providerId}`);
});

Session Management with Hooks

Retrieves and monitors authentication state in React components.

"use client";

import { useSession } from "./auth-client";
import { Loader } from "./components/loader";

export function ProfilePage() {
  const { data: session, isPending, error } = useSession();

  if (isPending) {
    return <Loader />;
  }

  if (error) {
    return <div>Error loading session: {error.message}</div>;
  }

  if (!session) {
    return <div>Not authenticated. Please sign in.</div>;
  }

  return (
    <div>
      <h1>Welcome, {session.user.name}!</h1>
      <p>Email: {session.user.email}</p>
      <p>Email verified: {session.user.emailVerified ? "Yes" : "No"}</p>
      {session.user.image && <img src={session.user.image} alt="Profile" />}

      <div>
        <h2>Session Info</h2>
        <p>Session ID: {session.session.id}</p>
        <p>Expires: {new Date(session.session.expiresAt).toLocaleString()}</p>
        <p>IP Address: {session.session.ipAddress}</p>
        <p>User Agent: {session.session.userAgent}</p>
      </div>
    </div>
  );
}

// Update User Profile
import { authClient } from "./auth-client";

async function updateProfile() {
  const { data, error } = await authClient.updateUser({
    name: "John Smith",
    image: "https://example.com/new-avatar.jpg",
  });

  if (!error) {
    console.log("Profile updated:", data);
  }
}

// Change Email
await authClient.changeEmail({
  newEmail: "newemail@example.com",
  callbackURL: "/verify-new-email",
});

// Change Password
await authClient.changePassword({
  currentPassword: "OldPassword123!",
  newPassword: "NewPassword456!",
  revokeOtherSessions: true,
});

Server-Side Session Access

Validates sessions in server components, API routes, and middleware.

// Next.js App Router Server Component
import { auth } from "@/lib/auth";
import { headers } from "next/headers";
import { redirect } from "next/navigation";

export default async function ProtectedPage() {
  const session = await auth.api.getSession({
    headers: await headers(),
  });

  if (!session) {
    redirect("/sign-in");
  }

  return (
    <div>
      <h1>Protected Content</h1>
      <p>Welcome, {session.user.name}!</p>
    </div>
  );
}

// Next.js API Route
import { auth } from "@/lib/auth";

export async function GET(req: Request) {
  const session = await auth.api.getSession({
    headers: req.headers,
  });

  if (!session) {
    return new Response("Unauthorized", { status: 401 });
  }

  // Perform authenticated operation
  const data = await fetchUserData(session.user.id);

  return Response.json({ data });
}

// SvelteKit Load Function
import { auth } from "$lib/auth";
import { redirect } from "@sveltejs/kit";

export async function load({ request }) {
  const session = await auth.api.getSession({
    headers: request.headers,
  });

  if (!session) {
    throw redirect(302, "/sign-in");
  }

  return { session };
}

// Middleware for Route Protection
import { auth } from "@/lib/auth";

export async function middleware(request: Request) {
  const session = await auth.api.getSession({
    headers: request.headers,
  });

  const isProtectedRoute = request.url.includes("/dashboard");

  if (isProtectedRoute && !session) {
    return Response.redirect(new URL("/sign-in", request.url));
  }
}
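
If this middleware runs in Next.js, exporting a matcher config restricts it to the protected paths instead of string-matching request.url on every request. A short sketch (the paths are examples):

// Next.js middleware config: only run on these routes.
export const config = {
  matcher: ["/dashboard/:path*", "/settings/:path*"],
};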

Database Adapter Configuration

Connects Better Auth to your database using various ORMs and drivers.

// Drizzle ORM Adapter (PostgreSQL, MySQL, SQLite)
import { drizzleAdapter } from "better-auth/adapters/drizzle";
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
import * as schema from "./schema";

const client = postgres(process.env.DATABASE_URL!);
const db = drizzle(client, { schema });

export const auth = betterAuth({
  database: drizzleAdapter(db, {
    provider: "pg", // "pg" | "mysql" | "sqlite"
    schema: schema,
    usePlural: false, // Use singular table names
    camelCase: false, // Use snake_case column names
  }),
});

// Prisma Adapter
import { prismaAdapter } from "better-auth/adapters/prisma";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export const auth = betterAuth({
  database: prismaAdapter(prisma, {
    provider: "postgresql",
    usePlural: false,
  }),
});

// MongoDB Adapter
import { mongodbAdapter } from "better-auth/adapters/mongodb";
import { MongoClient } from "mongodb";

const mongoClient = new MongoClient(process.env.MONGODB_URL!);
await mongoClient.connect();

export const auth = betterAuth({
  database: mongodbAdapter(mongoClient.db("auth")),
});

// Memory Adapter (Testing Only)
import { memoryAdapter } from "better-auth/adapters/memory";

export const auth = betterAuth({
  database: memoryAdapter(),
});
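
Whichever adapter you choose, the database needs Better Auth's tables. The Better Auth CLI can generate the schema for your ORM or apply it directly; verify the exact commands and flags against the current CLI docs:

// npx @better-auth/cli generate   -> emit schema/migration files for your adapter
// npx @better-auth/cli migrate    -> apply the schema directly (Kysely-based setups)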

Organization Plugin for Multi-Tenancy

Enables team/workspace management with roles, permissions, and invitations.

import { betterAuth } from "better-auth";
import { organization } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    organization({
      async sendInvitationEmail({ email, organization, inviter, url }) {
        await sendEmail({
          to: email,
          subject: `Join ${organization.name}`,
          html: `${inviter.user.name} invited you to join ${organization.name}. <a href="${url}">Accept invitation</a>`,
        });
      },

      allowUserToCreateOrganization: true,
      organizationLimit: 10,

      roles: {
        owner: ["create", "read", "update", "delete", "invite", "remove"],
        admin: ["read", "update", "invite"],
        member: ["read"],
      },
    }),
  ],
});

// Client Usage
import { organizationClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [organizationClient()],
});

// Create Organization
const { data } = await authClient.organization.create({
  name: "Acme Corporation",
  slug: "acme-corp",
  logo: "https://example.com/logo.png",
  metadata: { industry: "Technology" },
});

// Invite Member
await authClient.organization.inviteMember({
  email: "member@example.com",
  role: "admin",
  organizationId: data.id,
});

// List Members
const { data: members } = await authClient.organization.listMembers({
  organizationId: "org_123",
});

// Update Member Role
await authClient.organization.updateMemberRole({
  memberId: "mem_456",
  role: "owner",
  organizationId: "org_123",
});

// Remove Member
await authClient.organization.removeMember({
  memberId: "mem_456",
  organizationId: "org_123",
});

// Set Active Organization
await authClient.organization.setActive({
  organizationId: "org_123",
});

// Check Permissions
const hasPermission = await authClient.organization.hasPermission({
  permissions: {
    post: ["create", "delete"],
    user: ["invite"],
  },
});

Two-Factor Authentication Plugin

Adds TOTP authenticator apps, backup codes, and OTP support for enhanced security.

import { betterAuth } from "better-auth";
import { twoFactor } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    twoFactor({
      issuer: "My Application",

      otpOptions: {
        async sendOTP({ user, otp }) {
          await sendEmail({
            to: user.email,
            subject: "Your verification code",
            html: `Your code is: ${otp}`,
          });
        },

        period: 300, // 5 minutes
        length: 6,
      },

      backupCodeLength: 10,
      backupCodeCount: 10,
    }),
  ],
});

// Client Usage
import { twoFactorClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [
    twoFactorClient({
      twoFactorPage: "/two-factor",
      onTwoFactorRedirect() {
        router.push("/two-factor");
      },
    }),
  ],
});

// Enable 2FA
const { data } = await authClient.twoFactor.enable({
  password: "userPassword123",
});

console.log("TOTP URI:", data.totpURI); // For QR code generation
console.log("Backup codes:", data.backupCodes);

// Verify TOTP Setup
await authClient.twoFactor.verifyTotp({
  code: "123456",
});

// Send OTP via Email
await authClient.twoFactor.sendOtp();

// Verify OTP
await authClient.twoFactor.verifyOtp({
  code: "123456",
});

// Use Backup Code
await authClient.twoFactor.useBackupCode({
  code: "ABCD-1234-EFGH",
});

// Generate New Backup Codes
const { data: newCodes } = await authClient.twoFactor.regenerateBackupCodes({
  password: "userPassword123",
});

// Disable 2FA
await authClient.twoFactor.disable({
  password: "userPassword123",
});
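
End to end: when 2FA is enabled, a password sign-in returns a two-factor challenge instead of a session, which triggers the client plugin's onTwoFactorRedirect. A sketch of the resulting flow, using only the calls shown above:

// 1. Password sign-in; the twoFactorClient plugin redirects to /two-factor
//    when a second factor is required.
await authClient.signIn.email({
  email: "john@example.com",
  password: "SecurePassword123!",
});

// 2. On the /two-factor page, verify the code from the authenticator app.
const { error } = await authClient.twoFactor.verifyTotp({ code: "123456" });

if (!error) {
  window.location.href = "/dashboard"; // session is now fully established
}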

Passkey/WebAuthn Plugin (Separate Package)

Implements passwordless authentication using biometric or hardware security keys.

// Install the passkey package
// npm install @better-auth/passkey

import { betterAuth } from "better-auth";
import { passkey } from "@better-auth/passkey";

export const auth = betterAuth({
  plugins: [
    passkey({
      rpName: "My Application",
      rpID: "example.com",
      origin: "https://example.com",
    }),
  ],
});

// Client Usage
import { passkeyClient } from "@better-auth/passkey/client";

const authClient = createAuthClient({
  plugins: [passkeyClient()],
});

// Register Passkey (must be signed in)
try {
  const { data } = await authClient.passkey.addPasskey({
    name: "MacBook Pro Touch ID",
  });

  console.log("Passkey registered:", data.id);
} catch (error) {
  console.error("Passkey registration failed:", error);
}

// Sign In with Passkey
await authClient.passkey.signIn({
  callbackURL: "/dashboard",
});

// List User Passkeys
const { data: passkeys } = await authClient.passkey.listPasskeys();

passkeys?.forEach(key => {
  console.log(`${key.name}: ${key.createdAt}`);
});

// Delete Passkey
await authClient.passkey.deletePasskey({
  id: "passkey_123",
});

// Check Passkey Support
if (authClient.passkey.isSupported()) {
  console.log("Passkeys are supported in this browser");
}

Magic Link Authentication Plugin

Sends one-time sign-in links via email for passwordless authentication.

import { betterAuth } from "better-auth";
import { magicLink } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    magicLink({
      async sendMagicLink({ email, url, token }) {
        await sendEmail({
          to: email,
          subject: "Sign in to your account",
          html: `
            <h1>Sign in to My Application</h1>
            <p>Click the link below to sign in:</p>
            <a href="${url}">Sign in</a>
            <p>This link expires in 5 minutes.</p>
          `,
        });
      },

      expiresIn: 300, // 5 minutes
      disableSignUp: false,
    }),
  ],
});

// Client Usage
import { magicLinkClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [magicLinkClient()],
});

// Send Magic Link
const { data, error } = await authClient.magicLink.sendMagicLink({
  email: "user@example.com",
  callbackURL: "/dashboard",
});

if (!error) {
  console.log("Magic link sent! Check your email.");
}

// The verification happens automatically when user clicks the link
// The URL will be: /api/auth/magic-link/verify?token=xxx

Email OTP Plugin

Sends one-time password codes via email for verification and sign-in.

import { betterAuth } from "better-auth";
import { emailOTP } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    emailOTP({
      async sendVerificationOTP({ email, otp, type }) {
        await sendEmail({
          to: email,
          subject: type === "sign-in" ? "Sign in code" : "Verification code",
          html: `Your verification code is: <strong>${otp}</strong>`,
        });
      },

      otpLength: 6,
      expiresIn: 300, // 5 minutes
      sendOnSignUp: true,
      disableSignUp: false,
    }),
  ],
});

// Client Usage
import { emailOTPClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [emailOTPClient()],
});

// Send OTP
await authClient.emailOTP.sendVerificationOtp({
  email: "user@example.com",
  type: "sign-in",
});

// Verify OTP
const { data, error } = await authClient.emailOTP.verifyEmail({
  email: "user@example.com",
  otp: "123456",
});

if (!error) {
  console.log("Signed in successfully:", data.user);
}

Phone Number Authentication Plugin

Enables phone number registration and SMS-based OTP verification.

import { betterAuth } from "better-auth";
import { phoneNumber } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    phoneNumber({
      async sendOTP({ phoneNumber, otp }) {
        await sendSMS({
          to: phoneNumber,
          message: `Your verification code is: ${otp}`,
        });
      },

      otpLength: 6,
      expiresIn: 300, // 5 minutes
    }),
  ],
});

// Client Usage
import { phoneNumberClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [phoneNumberClient()],
});

// Send Phone OTP
await authClient.phoneNumber.sendOtp({
  phoneNumber: "+1234567890",
});

// Verify Phone OTP and Sign In
const { data, error } = await authClient.phoneNumber.verifyOtp({
  phoneNumber: "+1234567890",
  otp: "123456",
});

if (!error) {
  console.log("Signed in with phone:", data.user);
}

Username Authentication Plugin

Adds username-based authentication alongside email/password.

import { betterAuth } from "better-auth";
import { username } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    username({
      minLength: 3,
      maxLength: 20,
      allowedCharacters: /^[a-zA-Z0-9_-]+$/,
    }),
  ],
});

// Client Usage
import { usernameClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [usernameClient()],
});

// Sign Up with Username
await authClient.signUp.username({
  username: "johndoe",
  password: "SecurePassword123!",
  name: "John Doe",
  email: "john@example.com", // optional
});

// Sign In with Username
await authClient.signIn.username({
  username: "johndoe",
  password: "SecurePassword123!",
});

Anonymous Authentication Plugin

Allows users to create temporary anonymous sessions without credentials.

import { betterAuth } from "better-auth";
import { anonymous } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    anonymous({
      sessionDuration: 60 * 60 * 24 * 30, // 30 days
    }),
  ],
});

// Client Usage
import { anonymousClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [anonymousClient()],
});

// Create Anonymous Session
const { data } = await authClient.anonymous.signIn();

console.log("Anonymous session:", data.session);

// Link Anonymous Account to Email
await authClient.anonymous.linkEmail({
  email: "user@example.com",
  password: "SecurePassword123!",
});

Admin Plugin for User Management

Provides administrative endpoints for managing users, sessions, and system operations.

import { betterAuth } from "better-auth";
import { admin } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    admin({
      // Optional: customize admin role check
      async isAdmin(user) {
        return user.role === "admin" || user.email.endsWith("@company.com");
      },
    }),
  ],
});

// Client Usage
import { adminClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [adminClient()],
});

// List All Users
const { data: users } = await authClient.admin.listUsers({
  limit: 50,
  offset: 0,
  sortBy: "createdAt",
  sortDirection: "desc",
  filterBy: "email",
  filterValue: "@company.com",
});

// Create User (Admin)
await authClient.admin.createUser({
  email: "newuser@example.com",
  password: "TempPassword123!",
  name: "New User",
  role: "user",
  emailVerified: true,
});

// Update User
await authClient.admin.updateUser({
  userId: "user_123",
  data: {
    name: "Updated Name",
    role: "admin",
    emailVerified: true,
  },
});

// Ban User
await authClient.admin.banUser({
  userId: "user_123",
  reason: "Violation of terms",
  banUntil: new Date("2025-12-31"),
});

// Unban User
await authClient.admin.unbanUser({
  userId: "user_123",
});

// Impersonate User
await authClient.admin.impersonateUser({
  userId: "user_123",
});

// List User Sessions
const { data: sessions } = await authClient.admin.listUserSessions({
  userId: "user_123",
});

// Revoke User Session
await authClient.admin.revokeUserSession({
  sessionId: "session_456",
});

// Delete User
await authClient.admin.deleteUser({
  userId: "user_123",
});

Bearer Token Plugin

Enables stateless authentication using bearer tokens for API access.

import { betterAuth } from "better-auth";
import { bearer } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    bearer({
      expiresIn: 60 * 60 * 24 * 7, // 7 days
    }),
  ],
});

// Client Usage
import { bearerClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [bearerClient()],
});

// Generate Bearer Token
const { data } = await authClient.bearer.generate();

console.log("Access token:", data.accessToken);

// Use Bearer Token in API Calls
fetch("/api/protected", {
  headers: {
    Authorization: `Bearer ${data.accessToken}`,
  },
});

// Server-side Bearer Token Validation
const session = await auth.api.getSession({
  headers: {
    authorization: `Bearer ${token}`,
  },
});

API Key Plugin

Provides long-lived API keys for server-to-server authentication.

import { betterAuth } from "better-auth";
import { apiKey } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    apiKey({
      // Optional: API key headers to check (default: "x-api-key")
      apiKeyHeaders: "x-api-key", // or ["x-api-key", "authorization"]

      // Optional: default key length (default: 64)
      defaultKeyLength: 64,

      // Optional: key expiration settings
      keyExpiration: {
        defaultExpiresIn: 365, // days
        minExpiresIn: 1,
        maxExpiresIn: 365,
        disableCustomExpiresTime: false,
      },

      // Optional: enable metadata
      enableMetadata: true,

      // Optional: disable key hashing (not recommended)
      disableKeyHashing: false,

      // Optional: require name for API keys
      requireName: true,

      // Optional: enable sessions for API keys
      enableSessionForAPIKeys: true,

      // Optional: rate limiting
      rateLimit: {
        enabled: true,
        timeWindow: 60 * 60 * 24 * 1000, // 24 hours
        maxRequests: 10,
      },
    }),
  ],
});

// Client Usage
import { apiKeyClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [apiKeyClient()],
});

// Create API Key
const { data } = await authClient.apiKey.create({
  name: "Production API Key",
  expiresIn: 60 * 60 * 24 * 365, // 1 year (in seconds)
  metadata: { environment: "production" }, // optional
});

console.log("API Key:", data.key);
console.log("Starts with:", data.startsWith); // First 6 characters

// Get API Key
const { data: apiKey } = await authClient.apiKey.get({
  id: "key_123",
});

// Update API Key
await authClient.apiKey.update({
  id: "key_123",
  name: "Updated API Key Name",
  enabled: false, // disable the key
  metadata: { updated: true },
});

// List API Keys
const { data: keys } = await authClient.apiKey.list();

keys?.forEach(key => {
  console.log(`${key.name}: ${key.startsWith}... (expires: ${key.expiresAt})`);
});

// Delete API Key
await authClient.apiKey.delete({
  id: "key_123",
});

// Server-side: Delete All Expired API Keys
await auth.api.deleteAllExpiredApiKeys();

// Server-side API Key Validation
const session = await auth.api.getSession({
  headers: {
    "x-api-key": apiKey,
  },
});

// Server-side: Verify API Key
const { data: verified } = await auth.api.verifyApiKey({
  key: apiKey,
});

if (verified) {
  console.log("Valid API key for user:", verified.userId);
}

JWT Plugin

Generates and validates JSON Web Tokens for session management.

import { betterAuth } from "better-auth";
import { jwt } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    jwt({
      algorithm: "HS256",
      expiresIn: "7d",
      issuer: "https://example.com",
      audience: ["https://api.example.com"],
    }),
  ],
});

// Client Usage
import { jwtClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [jwtClient()],
});

// Generate JWT
const { data } = await authClient.jwt.generate();

console.log("JWT:", data.token);

// Decode JWT
const decoded = await authClient.jwt.decode({
  token: data.token,
});

console.log("JWT payload:", decoded);

Multi-Session Plugin

Allows users to maintain multiple concurrent sessions across devices.

import { betterAuth } from "better-auth";
import { multiSession } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    multiSession({
      maximumSessions: 5, // Max sessions per user
    }),
  ],
});

// Client Usage
import { multiSessionClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [multiSessionClient()],
});

// List Active Sessions
const { data: sessions } = await authClient.multiSession.listSessions();

sessions?.forEach(session => {
  console.log(`Device: ${session.userAgent}, Last active: ${session.updatedAt}`);
});

// Revoke Specific Session
await authClient.multiSession.revokeSession({
  sessionId: "session_456",
});

// Revoke All Other Sessions
await authClient.multiSession.revokeOtherSessions();

HaveIBeenPwned Plugin

Checks passwords against known data breaches for enhanced security.

import { betterAuth } from "better-auth";
import { haveibeenpwned } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    haveibeenpwned({
      // Automatically check on sign-up and password change
      checkOnSignUp: true,
      checkOnPasswordChange: true,

      // Optional: block compromised passwords (set to false to warn instead)
      blockCompromisedPasswords: true,
    }),
  ],
});

// The plugin automatically checks passwords against the HIBP database
// and rejects compromised passwords during sign-up and password changes
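
Client code does not change; a compromised password simply surfaces as an error on sign-up or password change. A minimal sketch:

const { error } = await authClient.signUp.email({
  name: "John Doe",
  email: "john@example.com",
  password: "password123", // a widely breached password
});

if (error) {
  // Surface the rejection so the user picks a stronger password.
  console.error("Sign up rejected:", error.message);
}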

Sign-In with Ethereum (SIWE) Plugin

Enables Web3 authentication using Ethereum wallet signatures.

import { betterAuth } from "better-auth";
import { siwe } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    siwe({
      domain: "example.com",
      uri: "https://example.com",
      statement: "Sign in with your Ethereum account",
    }),
  ],
});

// Client Usage
import { siweClient } from "better-auth/client/plugins";
import { ethers } from "ethers";

const authClient = createAuthClient({
  plugins: [siweClient()],
});

// Get Nonce
const { data: nonce } = await authClient.siwe.getNonce();

// Create and Sign Message
const provider = new ethers.BrowserProvider(window.ethereum);
const signer = await provider.getSigner();
const address = await signer.getAddress();

const message = await authClient.siwe.prepareMessage({
  address,
  nonce: nonce.nonce,
});

const signature = await signer.signMessage(message);

// Verify and Sign In
await authClient.siwe.signIn({
  message,
  signature,
});

OIDC Provider Plugin

Turns Better Auth into a full OpenID Connect (OIDC) provider, allowing your application to act as an OAuth 2.0 authorization server for other applications.

import { betterAuth } from "better-auth";
import { oidcProvider } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    oidcProvider({
      // OIDC configuration
      codeExpiresIn: 600, // 10 minutes
      accessTokenExpiresIn: 3600, // 1 hour
      refreshTokenExpiresIn: 604800, // 7 days

      // Supported scopes
      scopes: ["openid", "profile", "email", "offline_access"],

      // Optional: use JWT plugin for signing tokens
      useJWTPlugin: true,

      // Optional: trusted clients (no consent screen)
      trustedClients: [
        {
          clientId: "my-trusted-app",
          clientSecret: "secret",
          name: "My Trusted App",
          redirectUrls: ["https://app.example.com/callback"],
          type: "web",
        },
      ],

      // Optional: allow dynamic client registration
      allowDynamicClientRegistration: true,

      // Optional: require PKCE for all clients
      requirePKCE: true,

      // Optional: add custom claims to tokens
      async getAdditionalUserInfoClaim(user, scopes, client) {
        return {
          organization: user.metadata?.organization,
          role: user.role,
        };
      },
    }),
  ],
});

// The plugin exposes the following endpoints:
// - GET /.well-known/openid-configuration (OIDC discovery)
// - GET /oauth2/authorize (authorization endpoint)
// - POST /oauth2/consent (consent handling)
// - POST /oauth2/token (token exchange)
// - GET /oauth2/userinfo (user info endpoint)
// - POST /oauth2/register (dynamic client registration)
// - GET /oauth2/client/:id (get client info)
// - GET/POST /oauth2/endsession (logout endpoint)

// Client Registration Example
const response = await fetch("/api/auth/oauth2/register", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    redirect_uris: ["https://client.example.com/callback"],
    client_name: "My OAuth Client",
    token_endpoint_auth_method: "client_secret_post",
    grant_types: ["authorization_code", "refresh_token"],
  }),
});

const { client_id, client_secret } = await response.json();

// Authorization Code Flow Example
// 1. Redirect user to authorization endpoint
const authURL = new URL("/api/auth/oauth2/authorize", "https://auth.example.com");
authURL.searchParams.set("client_id", client_id);
authURL.searchParams.set("redirect_uri", "https://client.example.com/callback");
authURL.searchParams.set("response_type", "code");
authURL.searchParams.set("scope", "openid profile email");
authURL.searchParams.set("state", "random-state");
window.location.href = authURL.toString();

// 2. Exchange code for tokens
const tokenResponse = await fetch("/api/auth/oauth2/token", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "authorization_code",
    code: authorizationCode,
    redirect_uri: "https://client.example.com/callback",
    client_id: client_id,
    client_secret: client_secret,
  }),
});

const { access_token, refresh_token, id_token } = await tokenResponse.json();

// 3. Get user info
const userInfo = await fetch("/api/auth/oauth2/userinfo", {
  headers: { Authorization: `Bearer ${access_token}` },
}).then(res => res.json());

console.log("User info:", userInfo);

OAuth Proxy Plugin

Enables OAuth authentication in development environments by proxying OAuth callbacks through your production server, solving redirect URI mismatch issues.

import { betterAuth } from "better-auth";
import { oAuthProxy } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    oAuthProxy({
      // Optional: specify current URL (auto-detected by default)
      currentURL: process.env.CURRENT_URL,

      // Optional: production URL (defaults to BETTER_AUTH_URL)
      productionURL: process.env.BETTER_AUTH_URL,
    }),
  ],
});

// How it works:
// 1. In development (localhost:3000), initiate OAuth sign-in
// 2. Plugin detects non-production environment
// 3. Modifies callback URL to point to production server
// 4. Production server receives OAuth callback
// 5. Production server encrypts session cookies
// 6. Redirects back to development with encrypted cookies
// 7. Development server decrypts and sets cookies locally

// No client-side changes needed - plugin handles everything automatically
await authClient.signIn.social({
  provider: "github",
  callbackURL: "/dashboard",
});

// The plugin exposes: /oauth-proxy-callback endpoint
// This endpoint handles the proxy callback and cookie forwarding

MCP (Model Context Protocol) Plugin

Enables OAuth authentication for AI agents and Model Context Protocol clients, allowing AI tools to securely authenticate with your Better Auth instance.

import { betterAuth } from "better-auth";
import { mcp } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    mcp({
      // Login page where users will be redirected to authenticate
      loginPage: "/sign-in",

      // Optional: resource identifier (defaults to origin)
      resource: "https://example.com",

      // Optional: customize OIDC configuration
      oidcConfig: {
        // OIDC provider options
        scopes: ["openid", "profile", "email"],
        accessTokenExpiresIn: 3600, // 1 hour
      },
    }),
  ],
});

// The MCP plugin exposes OAuth 2.0 endpoints for AI agents:
// - GET /.well-known/oauth-authorization-server (OAuth server metadata)
// - GET /.well-known/oauth-protected-resource (Protected resource metadata)
// - GET /mcp/authorize (Authorization endpoint)
// - POST /mcp/token (Token endpoint)
// - GET /mcp/userinfo (User info endpoint)
// - GET /mcp/jwks (JSON Web Key Set)
// - POST /mcp/register (Client registration)

// AI agents can use these endpoints to:
// 1. Discover authentication capabilities
// 2. Initiate OAuth flows
// 3. Exchange authorization codes for tokens
// 4. Access user information

// Example: AI agent authentication flow
// 1. Agent discovers OAuth endpoints from /.well-known/oauth-authorization-server
// 2. Agent redirects user to /mcp/authorize with client credentials
// 3. User authenticates through loginPage
// 4. Agent receives authorization code
// 5. Agent exchanges code for access token at /mcp/token
// 6. Agent uses access token to access protected resources

// Server-side: Verify MCP OAuth tokens
const session = await auth.api.getSession({
  headers: {
    authorization: `Bearer ${accessToken}`,
  },
});

if (session) {
  console.log("Authenticated AI agent for user:", session.user);
}

Device Authorization Plugin

Implements OAuth 2.0 device authorization flow for limited-input devices.

import { betterAuth } from "better-auth";
import { deviceAuthorization } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    deviceAuthorization({
      deviceCodeExpiresIn: 600, // 10 minutes
      userCodeLength: 8,
      pollingInterval: 5, // seconds
    }),
  ],
});

// Client Usage (Device)
import { deviceAuthorizationClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [deviceAuthorizationClient()],
});

// Request Device Code
const { data } = await authClient.deviceAuthorization.requestCode();

console.log("User code:", data.userCode);
console.log("Verification URL:", data.verificationUri);

// Poll for Authorization
const result = await authClient.deviceAuthorization.pollForAuthorization({
  deviceCode: data.deviceCode,
});

// User visits verification URL and enters user code to authorize

Access Control Plugin

Provides fine-grained permission and role-based access control.

import { betterAuth } from "better-auth";
import { access } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    access({
      permissions: {
        post: ["create", "read", "update", "delete"],
        comment: ["create", "read", "delete"],
        user: ["read", "update"],
      },

      roles: {
        admin: {
          post: ["create", "read", "update", "delete"],
          comment: ["create", "read", "delete"],
          user: ["read", "update"],
        },
        editor: {
          post: ["create", "read", "update"],
          comment: ["create", "read"],
        },
        viewer: {
          post: ["read"],
          comment: ["read"],
        },
      },
    }),
  ],
});

// Client Usage
import { accessClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [accessClient()],
});

// Check Permission
const canEdit = await authClient.access.hasPermission({
  resource: "post",
  action: "update",
});

// Get User Permissions
const { data: permissions } = await authClient.access.getPermissions();

console.log("User permissions:", permissions);

One-Time Token Plugin

Generates single-use tokens for secure one-time operations.

import { betterAuth } from "better-auth";
import { oneTimeToken } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    oneTimeToken({
      expiresIn: 60 * 15, // 15 minutes
    }),
  ],
});

// Client Usage
import { oneTimeTokenClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [oneTimeTokenClient()],
});

// Generate One-Time Token
const { data } = await authClient.oneTimeToken.generate({
  action: "download-report",
  metadata: { reportId: "123" },
});

console.log("Token:", data.token);

// Verify and Consume Token (server-side)
const result = await auth.api.oneTimeToken.verify({
  token: data.token,
});

if (result.valid) {
  console.log("Token metadata:", result.metadata);
  // Perform action and token is automatically consumed
}
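
The consuming side is typically an API route that gates the action behind the token. A sketch using the verify call above (the route path and response shape are assumptions):

// Hypothetical download route; adjust the path and payload to your app.
export async function GET(req: Request) {
  const token = new URL(req.url).searchParams.get("token");
  if (!token) return new Response("Missing token", { status: 400 });

  const result = await auth.api.oneTimeToken.verify({ token });
  if (!result.valid) return new Response("Invalid or expired token", { status: 403 });

  // The token is consumed on verification; serve the report it referenced.
  return Response.json({ reportId: result.metadata?.reportId });
}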

Generic OAuth Plugin

Adds support for custom OAuth 2.0 providers not built into Better Auth.

import { betterAuth } from "better-auth";
import { genericOAuth } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    genericOAuth({
      config: [
        {
          providerId: "custom-provider",
          clientId: process.env.CUSTOM_CLIENT_ID!,
          clientSecret: process.env.CUSTOM_CLIENT_SECRET!,
          authorizationUrl: "https://oauth.example.com/authorize",
          tokenUrl: "https://oauth.example.com/token",
          userInfoUrl: "https://oauth.example.com/userinfo",
          scopes: ["openid", "profile", "email"],
        },
      ],
    }),
  ],
});

// Client Usage - Same as other OAuth providers
await authClient.signIn.social({
  provider: "custom-provider",
  callbackURL: "/dashboard",
});

Custom Plugin Development

Creates reusable plugins with endpoints, schemas, hooks, and middleware.

import { createAuthEndpoint, sessionMiddleware } from "better-auth/api";
import type { BetterAuthPlugin } from "better-auth";
import { z } from "zod";

export const customSubscriptionPlugin = (options?: {
  plans?: string[];
}) => {
  return {
    id: "subscription",

    // Add database schema
    schema: {
      subscription: {
        fields: {
          userId: {
            type: "string",
            required: true,
            references: { model: "user", field: "id" },
          },
          plan: { type: "string", required: true },
          status: { type: "string", required: true },
          currentPeriodEnd: { type: "date", required: true },
          cancelAtPeriodEnd: { type: "boolean", required: true },
        },
      },
    },

    // Add custom endpoints
    endpoints: {
      getSubscription: createAuthEndpoint(
        "/subscription/get",
        {
          method: "GET",
          use: [sessionMiddleware],
        },
        async (ctx) => {
          const subscription = await ctx.context.adapter.findOne({
            model: "subscription",
            where: [{ field: "userId", value: ctx.context.session.user.id }],
          });

          return { subscription };
        }
      ),

      updateSubscription: createAuthEndpoint(
        "/subscription/update",
        {
          method: "POST",
          body: z.object({
            plan: z.enum(["free", "pro", "enterprise"]),
          }),
          use: [sessionMiddleware],
        },
        async (ctx) => {
          const updated = await ctx.context.adapter.update({
            model: "subscription",
            where: [{ field: "userId", value: ctx.context.session.user.id }],
            update: { plan: ctx.body.plan },
          });

          return { subscription: updated };
        }
      ),
    },

    // Add hooks
    hooks: {
      after: [
        {
          matcher: (context) => context.path === "/sign-up/email",
          handler: async (ctx) => {
            // Create free subscription on sign up
            await ctx.context.adapter.create({
              model: "subscription",
              data: {
                userId: ctx.context.session.user.id,
                plan: "free",
                status: "active",
                currentPeriodEnd: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000),
                cancelAtPeriodEnd: false,
              },
            });
          },
        },
      ],
    },

    // Add rate limiting
    rateLimit: [
      {
        pathMatcher: (path) => path.startsWith("/subscription/"),
        window: 60,
        max: 10,
      },
    ],

    // Type inference for client
    $Infer: {
      Subscription: {} as {
        plan: string;
        status: string;
        currentPeriodEnd: Date;
      },
    },
  } satisfies BetterAuthPlugin;
};

// Usage
export const auth = betterAuth({
  plugins: [
    customSubscriptionPlugin({
      plans: ["free", "pro", "enterprise"],
    }),
  ],
});
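
For end-to-end types, a server plugin is usually paired with a small client plugin that infers its endpoints. A hedged sketch following Better Auth's client-plugin pattern (subscriptionClient and the server plugin's import path are illustrative):

import type { BetterAuthClientPlugin } from "better-auth/client";
import type { customSubscriptionPlugin } from "./subscription-plugin";

export const subscriptionClient = () => {
  return {
    id: "subscription",
    // Infer endpoint types from the server plugin for type-safe client calls.
    $InferServerPlugin: {} as ReturnType<typeof customSubscriptionPlugin>,
  } satisfies BetterAuthClientPlugin;
};

// Usage: createAuthClient({ plugins: [subscriptionClient()] })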

SSO/SAML Plugin (Separate Package)

Enables enterprise Single Sign-On with SAML 2.0 and OIDC support for corporate authentication.

// Install the SSO package
// npm install @better-auth/sso

import { betterAuth } from "better-auth";
import { sso } from "@better-auth/sso";

export const auth = betterAuth({
  plugins: [
    sso({
      // SAML Configuration
      saml: {
        providers: [
          {
            providerId: "okta",
            entityId: "https://your-app.com",
            assertionConsumerServiceURL: "https://your-app.com/api/auth/saml/acs",
            singleSignOnServiceURL: process.env.OKTA_SSO_URL!,
            x509Certificate: process.env.OKTA_CERT!,
          },
        ],
      },

      // OIDC Configuration
      oidc: {
        providers: [
          {
            providerId: "azure-ad",
            clientId: process.env.AZURE_CLIENT_ID!,
            clientSecret: process.env.AZURE_CLIENT_SECRET!,
            issuer: "https://login.microsoftonline.com/tenant-id/v2.0",
          },
        ],
      },
    }),
  ],
});

// Client Usage
import { ssoClient } from "@better-auth/sso/client";

const authClient = createAuthClient({
  plugins: [ssoClient()],
});

// Initiate SAML SSO
await authClient.sso.signIn({
  providerId: "okta",
  callbackURL: "/dashboard",
});

// Initiate OIDC SSO
await authClient.sso.signIn({
  providerId: "azure-ad",
  callbackURL: "/dashboard",
});

SCIM Plugin (Separate Package)

Enables enterprise user provisioning through SCIM 2.0 (System for Cross-domain Identity Management) for automated user lifecycle management with identity providers like Okta, Azure AD, and OneLogin.

// Install the SCIM package
// npm install @better-auth/scim

import { betterAuth } from "better-auth";
import { scim } from "@better-auth/scim";

export const auth = betterAuth({
  plugins: [
    scim({
      // Optional: customize token storage (default: "plain")
      storeSCIMToken: "hashed", // "plain" | "hashed"

      // Optional: hook before token generation
      async beforeSCIMTokenGenerated({ user, member, scimToken }) {
        console.log("Generating SCIM token for:", user.email);
      },

      // Optional: hook after token generation
      async afterSCIMTokenGenerated({ user, member, scimToken, scimProvider }) {
        // Send notification or log the event
        await notifyAdmin(`SCIM token created for ${scimProvider.providerId}`);
      },
    }),
  ],
});

// The plugin automatically exposes SCIM 2.0 endpoints:
// - POST /api/auth/scim/generate-token (Generate SCIM token)
// - GET /api/auth/scim/v2/ServiceProviderConfig (SCIM metadata)
// - GET /api/auth/scim/v2/Schemas (Supported schemas)
// - GET /api/auth/scim/v2/ResourceTypes (Supported resource types)
// - POST /api/auth/scim/v2/Users (Create user)
// - GET /api/auth/scim/v2/Users (List users)
// - GET /api/auth/scim/v2/Users/:userId (Get user)
// - PUT /api/auth/scim/v2/Users/:userId (Update user)
// - PATCH /api/auth/scim/v2/Users/:userId (Patch user)
// - DELETE /api/auth/scim/v2/Users/:userId (Delete user)

// Client Usage
import { scimClient } from "@better-auth/scim/client";

const authClient = createAuthClient({
  plugins: [scimClient()],
});

// Generate SCIM Token (requires authenticated user)
const { data } = await authClient.scim.generateToken({
  providerId: "okta",
  organizationId: "org_123", // Optional: restrict to organization
});

console.log("SCIM Token:", data.scimToken);
// Share this token with your identity provider

// Configure in your identity provider (e.g., Okta):
// - SCIM Base URL: https://your-app.com/api/auth/scim/v2
// - Authentication: Bearer Token
// - Bearer Token: [scimToken from above]

// Identity provider can now:
// 1. Create users automatically when assigned to the app
// 2. Update user information when changed in the IdP
// 3. Deactivate/delete users when unassigned from the app
// 4. Sync user attributes (name, email, etc.)

// Example: Okta creates a user via SCIM
// POST /api/auth/scim/v2/Users
// Authorization: Bearer [scimToken]
// {
//   "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
//   "userName": "user@example.com",
//   "name": {
//     "givenName": "John",
//     "familyName": "Doe"
//   },
//   "emails": [
//     { "primary": true, "value": "user@example.com" }
//   ],
//   "active": true
// }

// Server-side: Query SCIM-provisioned users
const scimUsers = await auth.api.listUsers({
  headers: request.headers,
  query: {
    filter: 'emails[type eq "work" and value co "@example.com"]'
  },
});

// SCIM supports filtering with standard operators:
// - eq (equal): userName eq "john@example.com"
// - co (contains): name.givenName co "John"
// - sw (starts with): userName sw "john"
// - pr (present): emails pr
// - and/or: emails pr and userName eq "john@example.com"

Expo/React Native Integration (Separate Package)

Provides native mobile authentication for Expo and React Native applications.

// Install the Expo package
// npm install @better-auth/expo expo-web-browser expo-secure-store expo-constants expo-linking expo-crypto

// Server setup (same as web)
import { betterAuth } from "better-auth";

export const auth = betterAuth({
  // ... your config
});

// Expo API Route: app/api/auth/[...route]+api.ts
import { auth } from "~/lib/auth";

export async function GET(request: Request) {
  return auth.handler(request);
}

export async function POST(request: Request) {
  return auth.handler(request);
}

// Client Setup
import { createAuthClient } from "@better-auth/expo/client";

export const authClient = createAuthClient({
  baseURL: "https://your-api.com",
  plugins: [
    // Add your plugins
  ],
});

// Usage in React Native
import { useSession } from "@better-auth/expo/client";

function App() {
  const { data: session, isPending } = useSession();

  if (isPending) {
    return <ActivityIndicator />;
  }

  return (
    <View>
      {session ? (
        <Text>Welcome, {session.user.name}!</Text>
      ) : (
        <Button
          title="Sign In"
          onPress={async () => {
            await authClient.signIn.email({
              email: "user@example.com",
              password: "password",
            });
          }}
        />
      )}
    </View>
  );
}

// OAuth with Expo
await authClient.signIn.social({
  provider: "google",
  callbackURL: "myapp://auth/callback",
});

Stripe Plugin (Separate Package)

Integrates Stripe for subscription and payment management with automatic user-to-customer linking.

// Install the Stripe package
// npm install @better-auth/stripe stripe

import { betterAuth } from "better-auth";
import { stripe } from "@better-auth/stripe";
import Stripe from "stripe";

const stripeClient = new Stripe(process.env.STRIPE_SECRET_KEY!);

export const auth = betterAuth({
  plugins: [
    stripe({
      stripe: stripeClient,

      // Optional: customize customer creation
      async createCustomer(user) {
        return {
          email: user.email,
          name: user.name,
          metadata: {
            userId: user.id,
          },
        };
      },

      // Optional: handle webhook events
      async onWebhook(event) {
        if (event.type === "customer.subscription.created") {
          const subscription = event.data.object;
          console.log("New subscription:", subscription.id);
        }
      },
    }),
  ],
});

// Client Usage
import { stripeClient } from "@better-auth/stripe/client";

const authClient = createAuthClient({
  plugins: [stripeClient()],
});

// Get or Create Stripe Customer
const { data: customer } = await authClient.stripe.getCustomer();

console.log("Stripe customer ID:", customer.id);

// Create Checkout Session
const { data: session } = await authClient.stripe.createCheckoutSession({
  priceId: "price_1234567890",
  successUrl: "https://example.com/success",
  cancelUrl: "https://example.com/cancel",
});

// Redirect to Stripe checkout
window.location.href = session.url;

// Create Customer Portal Session
const { data: portal } = await authClient.stripe.createPortalSession({
  returnUrl: "https://example.com/account",
});

// Redirect to customer portal
window.location.href = portal.url;

// Get Customer Subscriptions
const { data: subscriptions } = await authClient.stripe.getSubscriptions();

subscriptions?.forEach(sub => {
  console.log(`Subscription: ${sub.id}, Status: ${sub.status}`);
});

// Server-side: Handle Stripe webhooks
// app/api/stripe/webhook/route.ts
import { auth } from "@/lib/auth";

export async function POST(req: Request) {
  const signature = req.headers.get("stripe-signature");
  const body = await req.text();

  // Better Auth automatically handles webhook verification
  // and calls the onWebhook handler configured in the plugin
  return auth.handler(req);
}

Custom Session Plugin

Allows extending session data with custom fields and logic for application-specific needs.

import { betterAuth } from "better-auth";
import { customSession } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    customSession({
      // Extend session schema
      schema: {
        session: {
          fields: {
            ipAddress: {
              type: "string",
              required: false,
            },
            userAgent: {
              type: "string",
              required: false,
            },
            metadata: {
              type: "json",
              required: false,
            },
          },
        },
      },

      // Customize session creation
      async onSessionCreate(session, context) {
        return {
          ...session,
          ipAddress: context.request.headers.get("x-forwarded-for"),
          userAgent: context.request.headers.get("user-agent"),
          metadata: { loginMethod: "email" },
        };
      },

      // Customize session retrieval
      async onSessionGet(session, context) {
        // Add runtime data to session
        return {
          ...session,
          activeDevices: await getActiveDevices(session.userId),
        };
      },
    }),
  ],
});

// Type-safe access to custom session data
const session = await auth.api.getSession({ headers });
console.log("IP Address:", session?.ipAddress);
console.log("Metadata:", session?.metadata);

Additional Fields Plugin

Extends user and session objects with custom fields for application-specific data.

import { betterAuth } from "better-auth";

export const auth = betterAuth({
  user: {
    additionalFields: {
      role: {
        type: "string",
        required: false,
        defaultValue: "user",
      },
      bio: {
        type: "string",
        required: false,
      },
      dateOfBirth: {
        type: "date",
        required: false,
      },
      isVerified: {
        type: "boolean",
        required: false,
        defaultValue: false,
      },
    },
  },
  session: {
    additionalFields: {
      deviceName: {
        type: "string",
        required: false,
      },
    },
  },
});

// Client Usage
import { createAuthClient } from "better-auth/client";
import { inferAdditionalFields } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [
    inferAdditionalFields({
      user: {
        role: {
          type: "string",
          required: false,
        },
        bio: {
          type: "string",
          required: false,
        },
        dateOfBirth: {
          type: "date",
          required: false,
        },
        isVerified: {
          type: "boolean",
          required: false,
        },
      },
      session: {
        deviceName: {
          type: "string",
          required: false,
        },
      },
    }),
  ],
});

// Access additional fields in session
const { data: session } = useSession();
console.log("User role:", session?.user.role);
console.log("User bio:", session?.user.bio);
console.log("Device name:", session?.session.deviceName);

// Update user with additional fields
// (caution: letting clients set fields like role or isVerified is usually
// undesirable; see the input: false sketch below for locking them down)
await authClient.updateUser({
  role: "admin",
  bio: "Software developer",
  isVerified: true,
});
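
To keep privileged fields out of client input entirely, additionalFields supports an input flag. A minimal sketch:

export const auth = betterAuth({
  user: {
    additionalFields: {
      role: {
        type: "string",
        defaultValue: "user",
        // input: false means the field is ignored in client-supplied input
        // and can only be set through server-side logic
        input: false,
      },
    },
  },
});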

Captcha Plugin

Adds bot protection to authentication endpoints using Google reCAPTCHA or Cloudflare Turnstile.

import { betterAuth } from "better-auth";
import { captcha } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    captcha({
      // Google reCAPTCHA (v2 or v3)
      provider: "google-recaptcha", // "google-recaptcha" | "cloudflare-turnstile" | "hcaptcha"
      secretKey: process.env.RECAPTCHA_SECRET_KEY!,

      // Optional: override the site verify URL
      siteVerifyURLOverride: "https://custom-verify-url.com",

      // Optional: specify which endpoints to protect (defaults to common auth endpoints)
      endpoints: [
        "/sign-in/email",
        "/sign-up/email",
        "/forget-password",
      ],
    }),

    // Alternatively, use Cloudflare Turnstile:
    // captcha({
    //   provider: "cloudflare-turnstile",
    //   secretKey: process.env.TURNSTILE_SECRET_KEY!,
    // }),
  ],
});

// Client Usage - Add captcha response header
// For Google reCAPTCHA v3 (pass your site key and an action)
const recaptchaToken = await grecaptcha.execute("YOUR_RECAPTCHA_SITE_KEY", {
  action: "signin",
});

await authClient.signIn.email({
  email: "user@example.com",
  password: "password",
  fetchOptions: {
    headers: {
      "x-captcha-response": recaptchaToken,
    },
  },
});

// For Cloudflare Turnstile (getResponse is synchronous)
const turnstileToken = turnstile.getResponse();

await authClient.signUp.email({
  email: "user@example.com",
  password: "password",
  name: "User",
  fetchOptions: {
    headers: {
      "x-captcha-response": turnstileToken,
    },
  },
});

// The plugin automatically verifies the captcha token before processing the request
// If verification fails, the request is rejected with a 400 error
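
The grecaptcha global used above assumes the reCAPTCHA script has been loaded on the page with your site key (v3 loader shown):

<script src="https://www.google.com/recaptcha/api.js?render=YOUR_RECAPTCHA_SITE_KEY"></script>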

Google One Tap Plugin

Enables passwordless authentication using Google One Tap for seamless sign-in.

import { betterAuth } from "better-auth";
import { oneTap } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    oneTap({
      // Optional: disable signup via One Tap
      disableSignup: false,

      // Optional: specify Google Client ID (uses socialProviders.google.clientId if not provided)
      clientId: process.env.GOOGLE_CLIENT_ID,
    }),
  ],
});

// Client Usage
import { createAuthClient } from "better-auth/client";
import { oneTapClient } from "better-auth/client/plugins";

const authClient = createAuthClient({
  plugins: [oneTapClient({ clientId: "YOUR_GOOGLE_CLIENT_ID" })],
});

// Initialize Google One Tap on your page
google.accounts.id.initialize({
  client_id: "YOUR_GOOGLE_CLIENT_ID",
  callback: async (response) => {
    // Send the ID token to Better Auth
    const { data, error } = await authClient.oneTap.signIn({
      idToken: response.credential,
    });

    if (!error) {
      console.log("Signed in with Google One Tap:", data.user);
    }
  },
});

// Display the One Tap prompt
google.accounts.id.prompt();

// Or render the One Tap button
google.accounts.id.renderButton(
  document.getElementById("oneTapButton"),
  {
    theme: "outline",
    size: "large",
  }
);
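
The google.accounts.id calls above assume the Google Identity Services client script is loaded on the page:

<script src="https://accounts.google.com/gsi/client" async defer></script>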

Last Login Method Plugin

Tracks the last authentication method used by each user for analytics and UX improvements.

import { betterAuth } from "better-auth";
import { lastLoginMethod } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    lastLoginMethod({
      // Optional: custom cookie name
      cookieName: "better-auth.last_used_login_method",

      // Optional: cookie expiration (default: 30 days)
      maxAge: 60 * 60 * 24 * 30,

      // Optional: store in database instead of cookie only
      storeInDatabase: true,

      // Optional: custom method to resolve login method
      customResolveMethod(ctx) {
        // Extract login method from context
        if (ctx.path.includes("/sign-in/social")) {
          return "oauth";
        }
        if (ctx.path.includes("/passkey/")) {
          return "passkey";
        }
        return "email";
      },
    }),
  ],
});

// When storeInDatabase is enabled, the plugin adds a lastLoginMethod field to the user table

// Access last login method on the client
const { data: session } = useSession();
if (session) {
  console.log("Last login method:", session.user.lastLoginMethod);
}

// Server-side access
const session = await auth.api.getSession({ headers });
if (session) {
  console.log("User's last login method:", session.user.lastLoginMethod);
}

// The plugin automatically tracks:
// - "email" for email/password authentication
// - "oauth" for social authentication
// - "passkey" for passkey authentication
// - "magic-link" for magic link authentication
// - "phone" for phone number authentication
// - Custom values from customResolveMethod

OpenAPI Plugin

Automatically generates OpenAPI 3.0 documentation for all Better Auth endpoints with interactive API explorer.

import { betterAuth } from "better-auth";
import { openAPI } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    openAPI({
      // Optional: customize the path (default: "/reference")
      path: "/api-docs",

      // Optional: customize Scalar theme
      theme: "purple", // "default" | "alternate" | "moon" | "purple" | "solarized" | etc.

      // Optional: add custom security nonce for CSP
      nonce: process.env.CSP_NONCE,
    }),
  ],
});

// The plugin automatically exposes an interactive API reference at:
// http://localhost:3000/api/auth/api-docs

// Access OpenAPI specification JSON:
// http://localhost:3000/api/auth/api-docs/openapi.json

// The documentation includes:
// - All authentication endpoints with request/response schemas
// - Security schemes and authentication requirements
// - Interactive try-it-out functionality
// - Example requests and responses
// - Type definitions and validation rules
// - Rate limiting information

// You can also use the OpenAPI spec with other tools:
// - Import into Postman, Insomnia, or other API clients
// - Generate client SDKs in multiple languages
// - Set up API testing and monitoring
// - Create custom documentation sites
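
The spec can also be generated programmatically, e.g. to write it to disk at build time. A minimal sketch using the generateOpenAPISchema helper the plugin adds to auth.api:

import { writeFile } from "node:fs/promises";
import { auth } from "./auth";

// Generate the OpenAPI document and persist it for external tooling
const schema = await auth.api.generateOpenAPISchema();
await writeFile("openapi.json", JSON.stringify(schema, null, 2));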

Rate Limiting Configuration

Controls request frequency to prevent abuse and protect endpoints from attacks.

import { betterAuth } from "better-auth";

export const auth = betterAuth({
  // Global rate limiting
  rateLimit: {
    enabled: true,
    window: 60, // seconds
    max: 10, // requests per window
    storage: "memory", // "memory" | "database"

    // Custom per-path rules, keyed by endpoint path
    customRules: {
      "/sign-in/email": {
        window: 60,
        max: 5, // more restrictive for sign-in
      },
      "/sign-up/email": {
        window: 60,
        max: 3, // even more restrictive for sign-up
      },
      "/forget-password": {
        window: 300, // 5 minutes
        max: 3,
      },
    },
  },
});

// Plugin-specific rate limiting
import { twoFactor } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    twoFactor({
      // Plugin adds its own rate limits
      rateLimit: {
        window: 60,
        max: 3, // Only 3 2FA attempts per minute
      },
    }),
  ],
});
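
When a limit is exceeded, the request is rejected with HTTP 429. A minimal client-side sketch for surfacing this (the retry-after header name is an assumption; verify it against your deployment's responses):

await authClient.signIn.email({
  email: "user@example.com",
  password: "password",
  fetchOptions: {
    onError(ctx) {
      if (ctx.response.status === 429) {
        // Header name assumed; inspect the actual response headers
        const retryAfter = ctx.response.headers.get("x-retry-after");
        console.warn(`Rate limited. Retry after ${retryAfter}s`);
      }
    },
  },
});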

Better Auth provides a production-ready authentication system that scales from simple email/password flows to complex multi-tenant applications. Advanced capabilities include enterprise SSO/SAML, SCIM user provisioning, Web3 (SIWE) authentication, device authorization, OIDC provider support for building OAuth 2.0 authorization servers, an OAuth proxy for development workflows, MCP OAuth integration for AI agents, Stripe payments and subscriptions, captcha protection, Google One Tap, automatic OpenAPI documentation, last-login-method tracking, and native mobile support via Expo/React Native. The plugin architecture enables rapid feature development while maintaining type safety across the entire stack, and database adapters abstract away ORM-specific details, allowing seamless switching between Prisma, Drizzle, MongoDB, or in-memory storage. Framework integrations handle the nuances of Next.js App Router, SvelteKit, Solid Start, React Start (TanStack Start), Expo, and other modern frameworks, ensuring cookies, headers, and redirects work correctly in server components, API routes, edge runtimes, and native mobile apps.

The client libraries provide a unified API across React, Vue, Svelte, Solid, Lynx, and React Native, with framework-specific primitives (hooks, stores, signals) for reactive session management. Each plugin ships a complete feature with minimal configuration, including database schemas, API endpoints, hooks, rate limiting, and client methods, while the user/session additionalFields configuration covers custom data requirements. Separate packages such as @better-auth/passkey, @better-auth/sso, @better-auth/scim, @better-auth/expo, @better-auth/stripe, and @better-auth/oauth-provider provide specialized functionality with their own dependencies. Security defaults include automatic CSRF protection, secure cookie handling with httpOnly and sameSite flags, rate limiting, password breach checking via HaveIBeenPwned integration, and session validation on every request, making the system suitable for production use without extensive security hardening.

Install BMasterAI with All Features

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Installs BMasterAI along with all optional dependencies, enabling the full suite of features.

pip install bmasterai[all]

Install BMasterAI

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Installs the BMasterAI library using pip. This is the basic installation for getting started.

pip install bmasterai

Install BMasterAI for Development

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Clones the BMasterAI repository and installs it in editable mode with development dependencies. Useful for contributing to the project.

git clone https://github.com/travis-burmaster/bmasterai.git
cd bmasterai
pip install -e .[dev]

Install RAG Dependencies and Set Up Environment

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Installs necessary dependencies for Retrieval-Augmented Generation (RAG) using Qdrant and sets up required environment variables for API keys and connection details.

pip install -r examples/minimal-rag/requirements_qdrant.txt

export QDRANT_URL="https://your-cluster.qdrant.io"
export QDRANT_API_KEY="your-qdrant-api-key"
export OPENAI_API_KEY="your-openai-api-key"

Install Gradio and Anthropic for Web Interface

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Installs the Gradio library for building web interfaces and the Anthropic library for accessing Claude models. This is required for the web interface example.

pip install gradio anthropic

Create and Run a Basic BMasterAI Agent

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Demonstrates how to create a simple AI agent using BMasterAI's logging and monitoring utilities. It includes initializing the agent, executing tasks, and tracking performance. The example shows how to configure logging, start monitoring, log events like agent start and task completion/errors, and track task durations.

from bmasterai.logging import configure_logging, get_logger, LogLevel, EventType
from bmasterai.monitoring import get_monitor
import time

# Configure logging
configure_logging(log_level=LogLevel.INFO)

# Start monitoring
monitor = get_monitor()
monitor.start_monitoring()

class MyFirstAgent:
    def __init__(self, agent_id: str, name: str):
        self.agent_id = agent_id
        self.name = name
        self.logger = get_logger()
        self.monitor = monitor
        
        # Log agent creation
        self.logger.log_event(
            self.agent_id,
            EventType.AGENT_START,
            f"Agent {self.name} initialized",
            metadata={"name": self.name}
        )
    
    def execute_task(self, task_name: str, task_data: dict = None):
        start_time = time.time()
        
        try:
            # Log task start
            self.logger.log_event(
                self.agent_id,
                EventType.TASK_START,
                f"Starting task: {task_name}",
                metadata={"task_data": task_data or {}}
            )
            
            # Simulate task execution
            time.sleep(1)  # Your actual task logic here
            result = {"status": "completed", "task": task_name}
            
            # Calculate duration and track performance
            duration_ms = (time.time() - start_time) * 1000
            self.monitor.track_task_duration(self.agent_id, task_name, duration_ms)
            
            # Log task completion
            self.logger.log_event(
                self.agent_id,
                EventType.TASK_COMPLETE,
                f"Completed task: {task_name}",
                duration_ms=duration_ms,
                metadata={"result": result}
            )
            
            return result
            
        except Exception as e:
            # Log error
            self.logger.log_event(
                self.agent_id,
                EventType.TASK_ERROR,
                f"Task failed: {task_name} - {str(e)}",
                level=LogLevel.ERROR
            )
            
            self.monitor.track_error(self.agent_id, "task_execution")
            return {"status": "failed", "error": str(e)}

# Create and run your agent
if __name__ == "__main__":
    agent = MyFirstAgent("my-agent-001", "MyFirstAgent")
    
    # Execute some tasks
    result1 = agent.execute_task("data_processing", {"input": "sample_data"})
    result2 = agent.execute_task("analysis", {"type": "basic"})
    
    print(f"Task 1 result: {result1}")
    print(f"Task 2 result: {result2}")
    
    # Get performance dashboard
    dashboard = monitor.get_agent_dashboard("my-agent-001")
    print(f"Agent performance: {dashboard}")

Run Python Agent Script

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Executes the Python script containing the BMasterAI agent example. This command starts the agent and its tasks.

python my_first_agent.py

Run All Examples

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Navigates to the examples directory and executes a script to run all provided examples.

cd examples
python run_examples.py

BMasterAI CLI: Start Monitoring

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Starts the BMasterAI monitoring service. This command enables the collection and display of performance metrics and logs.

bmasterai monitor

Test RAG Connection and Launch Interface

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Tests the connection to the Qdrant vector database and launches a Gradio-based web interface for interacting with the RAG system. This allows users to test the setup and use the RAG functionality interactively.

python examples/minimal-rag/test_qdrant_connection.py
python examples/minimal-rag/gradio_qdrant_rag.py

Local Development Setup

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/gradio-anthropic/README.md

Steps to set up the Gradio Anthropic example locally. This involves navigating to the example directory, installing Python dependencies, configuring environment variables, and running the application.

cd examples/gradio-anthropic
pip install -r requirements.txt
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY
python gradio-app-bmasterai.py

Clone Repository and Navigate

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/rag-qdrant/README.md

Clones the BMasterAI repository from GitHub and changes the directory to the RAG example setup. This is the initial step to get the project code.

git clone https://github.com/travis-burmaster/bmasterai.git
cd bmasterai/examples/rag-qdrant

Development Workflow Example

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/cli/overview.md

A practical example demonstrating a typical development workflow using the BMasterAI CLI, starting with project initialization and then launching the monitoring service in the background.

# 1. Create new project
bmasterai init my-ai-app
cd my-ai-app

# 2. Start monitoring in background
bmasterai monitor &

Install BMasterAI Qdrant RAG Dependencies

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/minimal-rag/README_qdrant_cloud.md

Installs necessary Python packages for the BMasterAI Qdrant Cloud RAG example using pip. This includes the BMasterAI framework, Qdrant client, OpenAI library, and sentence transformers.

pip install -r requirements_qdrant.txt

# Or install individually
pip install bmasterai qdrant-client openai sentence-transformers numpy

Run Individual Examples

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Executes specific example functionalities using Python's -c flag. Demonstrates stateful agents, multi-agent coordination, and advanced monitoring.

python -c "from examples.enhanced_examples import example_stateful_agent_with_logging; example_stateful_agent_with_logging()"
python -c "from examples.enhanced_examples import example_multi_agent_coordination_with_logging; example_multi_agent_coordination_with_logging()"
python -c "from examples.enhanced_examples import example_advanced_monitoring_and_alerts; example_advanced_monitoring_and_alerts()"

BMasterAI CLI: Initialize Project

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Initializes a new BMasterAI project with a specified name. This command sets up the basic directory structure and configuration files for a new AI project.

bmasterai init my-ai-project
cd my-ai-project

Run BMasterAI Qdrant RAG Example

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/minimal-rag/README_qdrant_cloud.md

Executes the main Python script to start the BMasterAI Qdrant Cloud RAG system, initiating document processing, vector storage, and query handling.

python bmasterai_rag_qdrant_cloud.py

Kubernetes Backup: Velero Setup and Backup Creation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Illustrates how to set up Velero for Kubernetes backups: installing the Velero CLI, creating a manual backup, and scheduling recurring backups for the 'bmasterai' namespace.

Kubernetes Backup with Velero:

  Install the Velero CLI (example for v1.12.0):
    curl -fsSL -o velero.tar.gz https://github.com/vmware-tanzu/velero/releases/download/v1.12.0/velero-v1.12.0-linux-amd64.tar.gz
    tar -xzf velero.tar.gz && sudo mv velero-v1.12.0-linux-amd64/velero /usr/local/bin/
    - Downloads and extracts the Velero CLI. Run 'velero install' with your provider-specific flags (object-storage plugin, bucket, credentials) to deploy the server components before creating backups.

  Create a backup:
    velero backup create bmasterai-backup --include-namespaces bmasterai
    - Initiates a backup of all resources within the 'bmasterai' namespace, naming it 'bmasterai-backup'.

  Schedule regular backups:
    velero schedule create bmasterai-daily --schedule="0 2 * * *" --include-namespaces bmasterai
    - Configures a daily backup job (at 2 AM UTC) for the 'bmasterai' namespace.
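
  Restore from a backup (for completeness; the restore name is illustrative):
    velero restore create bmasterai-restore --from-backup bmasterai-backup
    - Recreates the resources captured in 'bmasterai-backup' in the cluster.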

Install ArgoCD for GitOps Automation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes. This involves creating the ArgoCD namespace and applying the official installation manifest.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Basic Python Installation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README.md

Installs the core BMasterAI framework using pip. This is the recommended starting point for most users.

pip install bmasterai

Verify AWS CLI and eksctl Installation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Verifies that the command-line tools required for working with Amazon EKS, the AWS CLI and eksctl, are installed and on the PATH.

aws --version
eksctl version

Install and View kube-bench Results

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs the kube-bench tool to run CIS Kubernetes Benchmarks and then retrieves the job logs to view the results. Ensure you have kubectl configured to connect to your cluster.

# Install kube-bench
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

# View results
kubectl logs job/kube-bench

Install Kubecost for Cost Monitoring

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs Kubecost, a tool for monitoring and optimizing Kubernetes costs, using Helm. This command adds the Kubecost Helm repository and installs the chart into a dedicated namespace.

helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm install kubecost kubecost/cost-analyzer \
  --namespace kubecost \
  --create-namespace

Basic Agent Setup and Execution

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README.md

Demonstrates how to configure logging, monitoring, integrations, and run a basic agent with a task execution example.

from bmasterai.logging import configure_logging, LogLevel
from bmasterai.monitoring import get_monitor
from bmasterai.integrations import get_integration_manager, SlackConnector

# Configure logging and monitoring
logger = configure_logging(log_level=LogLevel.INFO)
monitor = get_monitor()
monitor.start_monitoring()

# Setup integrations
integration_manager = get_integration_manager()
slack = SlackConnector(webhook_url="YOUR_SLACK_WEBHOOK")
integration_manager.add_connector("slack", slack)

# Create and run an agent
from bmasterai.examples import EnhancedAgent

agent = EnhancedAgent("agent-001", "DataProcessor")
agent.start()

# Execute tasks with full monitoring
result = agent.execute_task("data_analysis", {"dataset": "sales.csv"})
print(f"Task result: {result}")

# Get performance dashboard
dashboard = monitor.get_agent_dashboard("agent-001")
print(f"Agent performance: {dashboard}")

agent.stop()

Run Test Suite

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/minimal-rag/README_qdrant_cloud.md

Provides the bash commands necessary to install testing dependencies and execute the test suite for the RAG system. This ensures code quality and functionality.

# Install test dependencies
pip install pytest pytest-asyncio

# Run tests
pytest test_qdrant_rag.py -v

Install Project Dependencies

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/examples/rag-tutorials.md

Installs the necessary Python libraries for building RAG systems, including BMasterAI, Qdrant client, OpenAI, and sentence transformers.

pip install bmasterai qdrant-client openai sentence-transformers numpy

Configuration Setup

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Copies the default configuration file and sets environment variables for sensitive data like API keys or webhook URLs.

cp config/config.yaml ./config.yaml
export SLACK_WEBHOOK_URL="your-slack-webhook"
export EMAIL_USERNAME="your-email@domain.com"
export EMAIL_PASSWORD="your-app-password"

Install and Access BMasterAI CLI

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/cli/overview.md

Demonstrates how to install the BMasterAI CLI using pip and how to access its help information to verify the installation.

pip install bmasterai
bmasterai --help

Install Trivy Operator and Scan Image

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs the Trivy operator into the cluster with kubectl, then scans the 'bmasterai:latest' image for vulnerabilities using the Trivy CLI. Requires cluster access via kubectl and the Trivy CLI installed locally.

# Install Trivy operator
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/trivy-operator/main/deploy/static/trivy-operator.yaml

# Scan BMasterAI image
trivy image bmasterai:latest

Verify Azure CLI Installation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Verifies that the Azure Command-Line Interface, used for managing Azure resources including AKS clusters, is installed.

az version

Install Fluent Bit with Elasticsearch Output

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs Fluent Bit using Helm, configuring it to send logs to an Elasticsearch cluster. Requires Helm and a running Elasticsearch instance.

helm repo add fluent https://fluent.github.io/helm-charts
helm install fluent-bit fluent/fluent-bit \
  --namespace logging \
  --set config.outputs='[OUTPUT]\n    Name es\n    Match *\n    Host elasticsearch-master\n    Port 9200'

Run Simple RAG System

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/examples/rag-tutorials.md

Command to execute the Python script for the RAG system demo. This initiates the system, indexes documents, and processes example queries.

python basic_rag.py

Kubernetes Quick Start Scripts

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README.md

Utilizes provided shell scripts for a quick setup and deployment of BMasterAI on Kubernetes, specifically targeting AWS EKS.

git clone https://github.com/travis-burmaster/bmasterai.git
cd bmasterai
./eks/setup-scripts/01-create-cluster.sh
./eks/setup-scripts/02-deploy-bmasterai.sh

Set Anthropic API Key and Launch Chat Interface

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Sets the Anthropic API key as an environment variable and launches a Gradio chat interface that integrates with BMasterAI and Anthropic Claude. This enables conversational AI interactions.

export ANTHROPIC_API_KEY="your-anthropic-api-key"
python examples/gradio-anthropic/gradio-app-bmasterai.py

Install Jaeger for Distributed Tracing

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs the Jaeger tracing system using its official Helm chart. This enables distributed tracing capabilities for services within the cluster.

helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm install jaeger jaegertracing/jaeger -n tracing --create-namespace

BMasterAI CLI: Check System Status

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/getting-started.md

Checks the current status of the BMasterAI system and its components. This command is useful for verifying that the environment is set up correctly.

bmasterai status

BMasterAI CLI Commands

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Essential commands for managing and interacting with the BMasterAI system. These include starting monitoring services and running integration tests.

BMasterAI CLI:
  monitor
    Starts the BMasterAI monitoring mode.

  test-integrations
    Executes integration tests for the BMasterAI system.

Initialize New Project

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Creates a new BMasterAI project directory with a standard file structure, including agent, config, and logs directories.

bmasterai init my-ai-project
cd my-ai-project

Install Gradio for Web Interface

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/examples/rag-tutorials.md

Command to install the Gradio library, typically used for creating web interfaces for machine learning models. This is a prerequisite for building a web UI for the RAG system.

pip install gradio

Quick Installation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Installs the BMasterAI package in editable mode. This is the standard installation for using the framework.

cd bmasterai
pip install -e .

Bash Development Setup and Dependencies

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README.md

Provides essential bash commands for setting up the development environment. This includes cloning the repository, installing project dependencies with development extras, and configuring pre-commit hooks for code quality.

git clone https://github.com/travis-burmaster/bmasterai.git
cd bmasterai
pip install -e .[dev]
pre-commit install

Install and Initialize BMasterAI Locally

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README.md

Installs the BMasterAI package using pip and initializes a new project. This is the recommended approach for local development and testing.

pip install bmasterai
bmasterai init my-project

Verify gcloud SDK Installation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Verifies that the Google Cloud SDK, required for managing Google Cloud resources including GKE clusters, is installed.

gcloud version

Install Vault Helm Chart

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs HashiCorp Vault using its official Helm chart. Vault is a tool for securely accessing secrets, encrypting data, and managing identities.

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault -n vault --create-namespace

Execute Advanced RAG Script

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/examples/rag-tutorials.md

This command executes the advanced_rag.py Python script, which is assumed to perform more complex RAG operations or setup. It requires Python to be installed and the script to be present in the current directory.

python advanced_rag.py

Install Prometheus Stack with Helm

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Deploys the kube-prometheus-stack Helm chart, which includes Prometheus, Alertmanager, and Grafana. This provides a comprehensive monitoring solution for Kubernetes.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword=admin123

Deploy BMasterAI with Custom Helm Values

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs the BMasterAI Helm chart using a custom values file for production configurations.

# Add custom values
cat > values-production.yaml << EOF
replicaCount: 3

image:
  repository: bmasterai
  tag: "latest"

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10

persistence:
  enabled: true
  size: 10Gi

secrets:
  openaiApiKey: "eW91ci1hcGkta2V5"  # base64 encoded
  anthropicApiKey: "eW91ci1hbnRocm9waWMta2V5"
EOF

# Deploy with custom values
helm install bmasterai ./helm/bmasterai \
  --namespace bmasterai \
  --values values-production.yaml

Install OPA Gatekeeper

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs the Open Policy Agent (OPA) Gatekeeper, a Kubernetes admission controller, using its official deployment manifest. This is a prerequisite for enforcing custom policies.

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.14/deploy/gatekeeper.yaml

Setup Local Qdrant with Docker

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/rag-qdrant/README.md

Starts a local Qdrant vector database instance using Docker. This command maps the default Qdrant port (6333) to the host machine, making it accessible for the RAG system.

docker run -p 6333:6333 qdrant/qdrant
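
To persist vectors across container restarts, mount a host directory at Qdrant's storage path (the host directory name is illustrative):

docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant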

Run RAG System with Queries

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/examples/rag-tutorials.md

This Python script demonstrates how to initialize and use a RAGSystem to process a list of queries. It iterates through predefined questions, sends them to the RAG system for answers, and prints the results. Ensure the RAGSystem class and its dependencies are correctly installed.

from rag_system import RAGSystem
import os

def main():
    # Load environment variables if needed
    # For example, if your RAGSystem needs API keys from .env
    # from dotenv import load_dotenv
    # load_dotenv()

    # Initialize RAG system
    # Assuming RAGSystem takes a configuration path or object
    # For this example, let's assume it loads default config or requires minimal setup
    rag = RAGSystem()

    queries = [
        "What are the core components of a RAG system?",
        "How does retrieval augmentation improve LLM responses?",
        "Explain the benefits of using RAG for AI applications"
    ]

    for query in queries:
        print(f"\n{'='*60}")
        print(f"Query: {query}")
        answer = rag.query(query)
        print(f"Answer: {answer}")
        print(f"{'='*60}")

if __name__ == "__main__":
    main()

Install Vertical Pod Autoscaler (VPA)

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs the Vertical Pod Autoscaler (VPA) components into the Kubernetes cluster. This enables automatic adjustment of pod resource requests and limits.

git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/vertical-pod-autoscaler
./hack/vpa-install.sh

Install AWS Load Balancer Controller on EKS

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs the AWS Load Balancer Controller using Helm to manage AWS Application Load Balancers (ALBs) and Network Load Balancers (NLBs) for Kubernetes services.

helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=bmasterai-cluster

Deploy BMasterAI using Helm

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Deploys the BMasterAI application using a Helm chart, creating a namespace and setting the OpenAI API key.

# Create namespace
kubectl create namespace bmasterai

# Install BMasterAI
helm install bmasterai ./helm/bmasterai \
  --namespace bmasterai \
  --set secrets.openaiApiKey=$(echo -n "your-api-key" | base64)

Development Installation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README.md

Installs BMasterAI in editable mode with development dependencies, suitable for contributing to the framework.

git clone https://github.com/travis-burmaster/bmasterai.git
cd bmasterai
pip install -e .[dev]

Verify Installation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Checks if the BMasterAI installation was successful by running a status command.

bmasterai status

Clone BMasterAI Repository

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Clones the BMasterAI project repository from GitHub and navigates into the project directory.

git clone https://github.com/travis-burmaster/bmasterai.git
cd bmasterai

Running Tests

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Commands for installing test dependencies and running the test suite using pytest, with options for coverage and specific files.

pip install -e .[dev]
pytest
pytest --cov=bmasterai
pytest tests/test_enhanced_functionality.py -v

Basic RAG System Initialization

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/examples/rag-tutorials.md

Initializes a Python script for a basic Retrieval-Augmented Generation (RAG) system using BMasterAI and Qdrant Cloud. It includes logging and monitoring setup.

#!/usr/bin/env python3
"""Basic RAG system with BMasterAI and Qdrant Cloud"""

import os
import time
from typing import List, Dict, Any

# BMasterAI imports
from bmasterai.logging import configure_logging, get_logger, LogLevel, EventType
from bmasterai.monitoring import get_monitor

Start the BMasterAI RAG Application

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/rag-qdrant/README.md

Executes the main Python script to launch the BMasterAI RAG system with a Gradio interface. This command starts the server, making the application accessible via a web browser.

python bmasterai-gradio-rag.py

Configure Qdrant and OpenAI Credentials

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/minimal-rag/README_qdrant_cloud.md

Sets up environment variables for Qdrant Cloud URL, API key, and OpenAI API key, which are essential for the RAG system to connect to these services.

# Qdrant Cloud configuration
export QDRANT_URL="https://your-cluster.qdrant.io"
export QDRANT_API_KEY="your-qdrant-api-key"

# OpenAI configuration
export OPENAI_API_KEY="your-openai-api-key"
# Or place the same values in a .env file:
QDRANT_URL=https://your-cluster.qdrant.io
QDRANT_API_KEY=your-qdrant-api-key
OPENAI_API_KEY=your-openai-api-key

Development Installation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Installs the BMasterAI package with development and integration dependencies. Recommended for contributing to the project or using all features.

pip install -e .[dev,integrations]

Full Integration Python Installation

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README.md

Installs BMasterAI along with all optional integrations and dependencies, enabling the full suite of features.

pip install bmasterai[all]

Deploy ELK Stack (Elasticsearch & Kibana) with Helm

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs Elasticsearch and Kibana using their respective Helm charts. This sets up a basic ELK stack for centralized logging and analysis.

helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch -n logging --create-namespace
helm install kibana elastic/kibana -n logging

Install EBS CSI Driver on EKS

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs the Amazon Elastic Block Store (EBS) Container Storage Interface (CSI) driver on the EKS cluster for persistent volume management.

eksctl create addon --name aws-ebs-csi-driver --cluster bmasterai-cluster

Deploy BMasterAI using kubectl

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Applies all Kubernetes manifests from the k8s directory to deploy BMasterAI.

# Apply all manifests
kubectl apply -f k8s/

Install External Secrets Operator

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Installs the External Secrets Operator (ESO) using Helm. ESO allows Kubernetes to securely fetch secrets from external sources like cloud provider secret managers.

helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets -n external-secrets-system --create-namespace

Docker Compose Deployment

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/gradio-anthropic/README.md

Configuration and command to deploy the application using Docker Compose. This method simplifies multi-container setups and requires setting the API key in a local .env file.

# Set your API key in .env file
echo "ANTHROPIC_API_KEY=your-api-key" > .env

# Start the application
docker-compose up -d

CI/CD Workflow for BMasterAI Health Check

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/cli/overview.md

An example GitHub Actions workflow to check BMasterAI integrations on code push or pull requests. It installs BMasterAI and runs integration tests.

# .github/workflows/bmasterai-check.yml
name: BMasterAI Health Check

on: [push, pull_request]

jobs:
  health-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install BMasterAI
        run: pip install bmasterai
      - name: Test integrations
        run: bmasterai test-integrations
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

Configure EKS Managed Node Group with Spot Instances

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

An example configuration using eksctl to create an EKS cluster with managed node groups that utilize EC2 Spot Instances. This helps optimize costs for worker nodes.

# EKS managed node group with spot
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: bmasterai-cluster
  region: us-west-2

managedNodeGroups:
- name: spot-workers
  instanceTypes:
  - m5.large
  - m5.xlarge
  spot: true
  minSize: 1
  maxSize: 10
  desiredCapacity: 3

Kubernetes Troubleshooting: Storage Issues

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Guides on diagnosing storage-related problems by checking Persistent Volumes (PVs) and Persistent Volume Claims (PVCs), describing storage classes, and examining disk usage within pods.

Kubernetes Storage Issue Diagnosis:

  Check Persistent Volumes and Claims:
    kubectl get pv,pvc -n bmasterai
    - Lists all Persistent Volumes and Persistent Volume Claims in the 'bmasterai' namespace to check their status and binding.

  Describe storage class:
    kubectl describe storageclass
    - Shows details of available StorageClasses, which define how storage is provisioned.

  Check disk usage within a pod:
    kubectl exec -it <pod-name> -n bmasterai -- df -h
    - Executes the 'df -h' command inside a pod to check mounted filesystems and disk space usage.

Install Project Dependencies

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/rag-qdrant/README.md

Installs all necessary Python packages listed in the requirements.txt file. This ensures the project has all its dependencies met for execution.

pip install -r requirements.txt

BMasterAI Configuration File Format

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/cli/overview.md

An example of a BMasterAI configuration file in YAML format, detailing settings for logging, monitoring, agents, and various integrations like Slack and email.

# config.yaml
logging:
  level: INFO
  enable_console: true
  enable_file: true
  enable_json: true

monitoring:
  collection_interval: 30

agents:
  default_timeout: 300
  max_retries: 3

integrations:
  slack:
    enabled: true
    webhook_url: "${SLACK_WEBHOOK_URL}"
  
  email:
    enabled: true
    smtp_server: "smtp.gmail.com"
    smtp_port: 587
    username: "${EMAIL_USERNAME}"
    password: "${EMAIL_PASSWORD}"

BMasterAI CLI Commands

Source: https://github.com/travis-burmaster/bmasterai/blob/main/INSTALL.md

Common commands for interacting with the BMasterAI Command Line Interface, including project initialization and status checks.

bmasterai init my-project
bmasterai status

Install General Utilities

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/rag-qdrant/requirements.txt

Includes utility libraries for displaying progress bars, creating command-line interfaces, rich text formatting, and human-readable time/size conversions.

tqdm>=4.65.0
click>=8.1.0
rich>=13.0.0
humanize>=4.7.0

Example Production Requirements Pinning

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/rag-qdrant/requirements.txt

An example of how to pin specific versions of dependencies for production deployments to ensure reproducibility and stability.

# Example production requirements.txt:
# gradio==4.7.1
# requests==2.31.0
# qdrant-client==1.7.0
# sentence-transformers==2.2.2
# etc.

Docker Build and Run

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/gradio-anthropic/README.md

Instructions for building a Docker image for the Gradio application and running it as a container. It includes setting the Anthropic API key as an environment variable.

docker build -t bmasterai-gradio .
docker run -p 7860:7860 -e ANTHROPIC_API_KEY=your-key bmasterai-gradio

Build Docker Image for RAG System (Dockerfile)

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/examples/rag-tutorials.md

This Dockerfile defines the environment for the RAG system. It uses a slim Python 3.9 image, sets the working directory, installs dependencies from requirements.txt, copies the application code, exposes the application port, and specifies the command to run the Python script.

FROM python:3.9-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy application
COPY . .

# Expose port
EXPOSE 7860

# Run application
CMD ["python", "production_rag.py"]

Kubernetes Troubleshooting: Service Discovery

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Focuses on verifying service discovery mechanisms, including checking service endpoints, using port-forwarding for direct testing, and inspecting ingress configurations.

Kubernetes Service Discovery Checks:

  Test service endpoints:
    kubectl get endpoints -n bmasterai
    - Lists the IP addresses and ports of pods backing each service in the 'bmasterai' namespace.

  Port forward for testing:
    kubectl port-forward svc/bmasterai-service 8080:80 -n bmasterai
    - Forwards local port 8080 to port 80 of the 'bmasterai-service', allowing direct testing.

  Check ingress configuration:
    kubectl describe ingress -n bmasterai
    - Displays details of Ingress resources in the 'bmasterai' namespace, which manage external access to services.

Install/Upgrade BMasterAI CLI

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/cli/overview.md

Install or upgrade the BMasterAI command-line interface. This is crucial if the 'bmasterai' command is not found or if you need the latest features.

# If bmasterai command is not found
pip install --upgrade bmasterai

Install Optional Production Deployment Tools

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/rag-qdrant/requirements.txt

Optional packages for production deployment, including WSGI/ASGI servers, Docker integration, and Kubernetes client libraries.

# gunicorn>=21.0.0  # WSGI server
# uvicorn>=0.23.0  # ASGI server
# docker>=6.1.0  # Docker integration
# kubernetes>=27.0.0  # Kubernetes deployment

Get Agent Dashboard and System Health

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/minimal-rag/README_qdrant_cloud.md

Retrieves the dashboard specific to an agent and checks the overall system health status. These functions are part of the monitoring capabilities of the BMasterAI framework.

# Get agent-specific dashboard
dashboard = monitor.get_agent_dashboard("qdrant-rag-agent")

# Get system health
health = monitor.get_system_health()

Customizing System Prompt

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/gradio-anthropic/README.md

Python code snippet showing how to set a custom system prompt for the AI model by passing it to the BMasterAIConfig constructor.

config = BMasterAIConfig(
    # ...
    system_prompt="You are a specialized AI assistant for [your use case]."
)

Deploy BMasterAI using Helm

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README.md

Installs BMasterAI on a Kubernetes cluster using Helm. This involves adding the BMasterAI Helm repository and then performing the installation.

helm repo add bmasterai https://travis-burmaster.github.io/bmasterai
helm install bmasterai bmasterai/bmasterai

Deploy BMasterAI with Kustomize

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Applies the Kustomize configuration to deploy the BMasterAI application. This command processes the kustomization.yaml file and applies the resulting manifests to the Kubernetes cluster.

kubectl apply -k .

Create and Navigate Project Directory

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/examples/rag-tutorials.md

Initializes a new project directory for the RAG system and changes the current working directory into it.

mkdir my-rag-system
cd my-rag-system

Helm Values Configuration Example

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README-k8s.md

Customize BMasterAI deployment settings by modifying the Helm values file. This example shows how to configure replica counts, resource requests/limits, and autoscaling parameters.

# values-production.yaml
replicaCount: 5

resources:
  limits:
    cpu: 2000m
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 20

secrets:
  openaiApiKey: "eW91ci1hcGkta2V5"  # base64 encoded

Initialize SimpleRAGSystem

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/examples/rag-tutorials.md

Initializes the RAG system by configuring logging, setting up the agent ID, and initializing Qdrant, OpenAI, and the embedding model.

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct
from sentence_transformers import SentenceTransformer
import openai
import hashlib
import os
from typing import List, Dict, Any

# Assuming configure_logging, get_logger, get_monitor, EventType, LogLevel are defined elsewhere
# For demonstration, let's mock them:
class MockLogger:
    def log_event(self, agent_id, event_type, message, **kwargs):
        print(f"LOG: Agent='{agent_id}', Event='{event_type}', Msg='{message}', Kwargs={kwargs}")

class MockMonitor:
    def start_monitoring(self):
        print("MONITOR: Monitoring started")

def configure_logging(log_level):
    print(f"LOGGING: Configured with level {log_level}")

def get_logger():
    return MockLogger()

def get_monitor():
    return MockMonitor()

class EventType:
    AGENT_START = "AGENT_START"
    TASK_START = "TASK_START"
    TASK_COMPLETE = "TASK_COMPLETE"
    TASK_ERROR = "TASK_ERROR"

class LogLevel:
    INFO = "INFO"
    ERROR = "ERROR"


class SimpleRAGSystem:
    def __init__(self):
        # Configure BMasterAI
        configure_logging(log_level=LogLevel.INFO)
        self.logger = get_logger()
        self.monitor = get_monitor()
        self.monitor.start_monitoring()
        
        self.agent_id = "simple-rag"
        
        # Initialize components
        self._init_qdrant()
        self._init_openai()
        self._init_embeddings()
        
        self.logger.log_event(
            self.agent_id,
            EventType.AGENT_START,
            "Simple RAG system initialized"
        )
    
    def _init_qdrant(self):
        """Initialize Qdrant client"""
        self.qdrant_client = QdrantClient(
            url=os.getenv("QDRANT_URL"),
            api_key=os.getenv("QDRANT_API_KEY")
        )
        self.collection_name = "simple_rag_demo"
    
    def _init_openai(self):
        """Initialize OpenAI client"""
        openai.api_key = os.getenv("OPENAI_API_KEY")
    
    def _init_embeddings(self):
        """Initialize embedding model"""
        self.embedding_model = SentenceTransformer('all-MiniLM-L6-v2')

Deploy BMasterAI to Kubernetes

Source: https://github.com/travis-burmaster/bmasterai/blob/main/README.md

Clones the BMasterAI repository and executes setup scripts for Kubernetes deployment. This process involves creating a cluster and deploying the BMasterAI application.

git clone https://github.com/travis-burmaster/bmasterai.git
cd bmasterai
./eks/setup-scripts/01-create-cluster.sh
./eks/setup-scripts/02-deploy-bmasterai.sh

Install Development Dependencies

Source: https://github.com/travis-burmaster/bmasterai/blob/main/examples/rag-qdrant/requirements-dev.txt

Installs all development dependencies for the project using pip. This command reads the requirements-dev.txt file, which includes production dependencies.

pip install -r requirements-dev.txt

# Note: requirements-dev.txt itself pulls in the production dependencies
# via the line "-r requirements.txt" at the top of the file.

Initialize BMasterAI Project (bmasterai init)

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/cli/overview.md

Initializes a new BMasterAI project with a standard directory structure, including agent templates, configuration files, and log directories. It follows best practices for BMasterAI development.

# Usage
bmasterai init <project-name>

# Example
bmasterai init customer-support-ai

# Generated structure:
# my-project/
# ├── agents/my_agent.py      # Working agent template
# ├── config/config.yaml      # Configuration file
# └── logs/                   # Log directory

# Run the generated template agent
cd customer-support-ai
python agents/my_agent.py

Kubernetes Troubleshooting: Pod Status and Logs

Source: https://github.com/travis-burmaster/bmasterai/blob/main/docs/kubernetes-deployment.md

Provides essential kubectl commands to diagnose pod startup failures. It covers checking pod descriptions, viewing logs (including previous container logs), and inspecting cluster events.

Kubernetes Pod Troubleshooting:

  Check pod status and details:
    kubectl describe pod <pod-name> -n bmasterai
    - Provides detailed information about the pod's state, events, and container status.

  View pod logs:
    kubectl logs <pod-name> -n bmasterai
    - Retrieves logs from the current container in the pod.

    kubectl logs <pod-name> -n bmasterai --previous
    - Retrieves logs from a previous instance of a container that crashed or was restarted.

  Inspect cluster events:
    kubectl get events -n bmasterai --sort-by='.lastTimestamp'
    - Lists recent events in the namespace, sorted by time, useful for identifying cluster-level issues affecting pods.

Claude Code Plugins Repository

Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows through natural language commands. The tool provides an extensible plugin system that allows developers to add custom slash commands, specialized agents, hooks, and MCP servers to enhance functionality. This repository contains official Claude Code plugins that demonstrate the capabilities of the plugin system and provide production-ready extensions for common development workflows.

This repository serves as both a marketplace of bundled plugins and a reference implementation for plugin developers. The plugins cover a wide range of use cases including automated code review, feature development workflows, git automation, hook-based behavior control, and development tools for the Claude Agent SDK. Each plugin is designed to be self-contained, following a standard structure with configuration in .claude-plugin/plugin.json, and can include commands (slash commands), agents (specialized AI agents), skills (reusable agent capabilities), and hooks (event handlers that intercept tool usage).

Installation

Install Claude Code globally via NPM

npm install -g @anthropic-ai/claude-code

Install via Homebrew (macOS)

brew install --cask claude-code

Install via curl (macOS/Linux)

curl -fsSL https://claude.ai/install.sh | bash

Navigate to project and start Claude Code

cd your-project
claude

Slash Commands

/commit-push-pr - Automated Git Workflow

# Create branch, commit, push, and open PR in one command
/commit-push-pr

# Claude automatically:
# 1. Creates new branch if on main
# 2. Stages all changes
# 3. Creates commit with descriptive message
# 4. Pushes branch to origin
# 5. Opens pull request using gh CLI

# Example output:
# ✓ Created branch: feature/add-oauth-login
# ✓ Committed: "Add OAuth authentication with Google and GitHub providers"
# ✓ Pushed to origin
# ✓ PR created: https://github.com/owner/repo/pull/123

/dedupe - Find Duplicate GitHub Issues

# Find up to 3 duplicate issues for a given issue
/dedupe 456

# Claude uses 5 parallel agents with diverse search strategies:
# - Keyword-based searches
# - Semantic similarity searches
# - Tag and label matching
# - Description analysis
# - Historical issue patterns

# Automatically filters false positives and comments on issue:
# "Found 3 possible duplicate issues:
# 1. https://github.com/owner/repo/issues/123
# 2. https://github.com/owner/repo/issues/234
# 3. https://github.com/owner/repo/issues/345
#
# This issue will be automatically closed as a duplicate in 3 days."

/oncall-triage - GitHub Issue Triage for Oncall

# Identify critical bugs needing oncall attention
/oncall-triage

# Searches for:
# - Open bugs updated in last 3 days
# - Issues with 50+ engagements (comments + reactions)
# - Blocking issues preventing core functionality

# Evaluates each issue for:
# - User impact (crashes, hangs, unresponsiveness)
# - Workaround availability
# - Core functionality blockage

# Applies "oncall" label to critical issues
# Summary output:
# "Added oncall label to 3 issues:
# #789: Claude Code crashes on startup (M1 Macs)
# #790: Infinite loop when reading large files
# #791: Cannot authenticate with API key"

/code-review - Automated PR Code Review

# Run code review on current PR (output to terminal)
/code-review

# Post review as PR comment
/code-review --comment

# Launches 4 parallel specialized agents:
# Agent #1 & #2: CLAUDE.md compliance (2 agents for redundancy)
# Agent #3: Bug detection in changes
# Agent #4: Historical context analysis via git blame

# Each issue scored 0-100 for confidence
# Only reports issues with ≥80 confidence

# Example review output:
# ## Code review
#
# Found 2 issues:
#
# 1. Missing error handling for OAuth callback (CLAUDE.md compliance)
# https://github.com/owner/repo/blob/abc123.../src/auth.ts#L67-L72
#
# 2. Memory leak: OAuth state not cleaned up (bug detection)
# https://github.com/owner/repo/blob/abc123.../src/auth.ts#L88-L95

/feature-dev - Comprehensive Feature Development Workflow

# Launch guided 7-phase feature development
/feature-dev Add user authentication with OAuth

# Phase 1: Discovery - Clarify requirements
# "Let me understand what you need:
# - Which OAuth providers? (Google, GitHub, custom?)
# - Store tokens or just profile data?
# - Replace existing auth or add alongside?"

# Phase 2: Codebase Exploration - Launch 2-3 code-explorer agents
# "Found similar features:
# - User authentication (src/auth/): JWT tokens, middleware pattern
# - Session management (src/session/): Redis-backed
# Key files: src/auth/AuthService.ts:45, src/middleware/authMiddleware.ts:12"

# Phase 3: Clarifying Questions - Fill gaps
# "Before designing, I need to clarify:
# 1. OAuth provider: Which providers?
# 2. Token storage: Where to store OAuth tokens?
# 3. Error handling: How to handle OAuth failures?"

# Phase 4: Architecture Design - 2-3 approaches with trade-offs
# "Approach 1: Minimal Changes (fast, low risk)
# Approach 2: Clean Architecture (maintainable, more files)
# Approach 3: Pragmatic Balance (recommended)
# Which approach would you like?"

# Phase 5: Implementation - Build with chosen approach
# Phase 6: Quality Review - 3 code-reviewer agents
# Phase 7: Summary - Document what was accomplished

/hookify - Create Custom Behavior Prevention Rules

# Create rule from explicit instruction
/hookify Warn me when I use rm -rf commands

# Analyze conversation for unwanted behaviors
/hookify

# List all active rules
/hookify:list

# Interactive enable/disable interface
/hookify:configure

# Get help with hookify syntax
/hookify:help

# Example: Block dangerous rm commands
# Creates .claude/hookify.dangerous-rm.local.md:
# ---
# name: block-dangerous-rm
# enabled: true
# event: bash
# pattern: rm\s+-rf
# action: block
# ---
#
# ⚠️ **Dangerous rm command detected!**
# This command could delete important files.
# Operation blocked for safety.

# Rules activate immediately - no restart needed
# Supports event types: bash, file, stop, prompt, all
# Actions: warn (show message, allow) or block (prevent execution)

/new-sdk-app - Create Claude Agent SDK Application

# Interactive setup for new Agent SDK project
/new-sdk-app my-customer-agent

# Asks:
# 1. Language: TypeScript or Python?
# 2. Agent type: coding, business, custom?
# 3. Starting point: minimal, basic, or specific example?
# 4. Tooling: npm/yarn/pnpm or pip/poetry?

# Creates:
# - Project structure with latest SDK version
# - Configuration files (tsconfig.json, package.json, etc.)
# - Environment templates (.env.example)
# - Working example code
# - .gitignore with proper exclusions

# Automatically:
# - Runs type checking (TypeScript) or syntax validation (Python)
# - Verifies setup using agent-sdk-verifier

# Example TypeScript project structure:
# my-customer-agent/
# ├── src/
# │   └── index.ts          # Main agent code
# ├── .env.example          # Environment template
# ├── .gitignore
# ├── package.json
# ├── tsconfig.json
# └── README.md

Hook System

PreToolUse Hook Example - Bash Command Validator

#!/usr/bin/env python3
# File: examples/hooks/bash_command_validator_example.py

import json
import re
import sys

# Validation rules: (pattern, message)
_VALIDATION_RULES = [
    (
        r"^grep\b(?!.*\|)",
        "Use 'rg' (ripgrep) instead of 'grep' for better performance"
    ),
    (
        r"^find\s+\S+\s+-name\b",
        "Use 'rg --files | rg pattern' instead of 'find -name'"
    ),
]

def _validate_command(command: str) -> list[str]:
    issues = []
    for pattern, message in _VALIDATION_RULES:
        if re.search(pattern, command):
            issues.append(message)
    return issues

def main():
    input_data = json.load(sys.stdin)
    tool_name = input_data.get("tool_name", "")

    if tool_name != "Bash":
        sys.exit(0)

    command = input_data.get("tool_input", {}).get("command", "")
    issues = _validate_command(command)

    if issues:
        for message in issues:
            print(f"• {message}", file=sys.stderr)
        # Exit code 2 blocks tool call and shows stderr to Claude
        sys.exit(2)

if __name__ == "__main__":
    main()

# Configuration in .claude/settings.json:
# {
#   "hooks": {
#     "PreToolUse": [
#       {
#         "matcher": "Bash",
#         "hooks": [
#           {
#             "type": "command",
#             "command": "python3 /path/to/bash_command_validator_example.py"
#           }
#         ]
#       }
#     ]
#   }
# }

Hookify Rule Example - Security Pattern Detection

<!-- File: .claude/hookify.api-keys-in-typescript.local.md -->
---
name: api-key-in-typescript
enabled: true
event: file
conditions:
  - field: file_path
    operator: regex_match
    pattern: \.tsx?$
  - field: new_text
    operator: regex_match
    pattern: (API_KEY|SECRET|TOKEN)\s*=\s*["']
action: block
---

🔐 **Hardcoded credential in TypeScript!**

Never hardcode API keys, secrets, or tokens in code.

Use environment variables instead:
```typescript
const apiKey = process.env.API_KEY;
if (!apiKey) throw new Error('API_KEY not set');
```

Add to .env:

API_KEY=your_key_here

Hookify Rule Example - Require Tests Before Completion

```markdown
<!-- File: .claude/hookify.require-tests.local.md -->
---
name: require-tests-run
enabled: true
event: stop
action: block
conditions:
  - field: transcript
    operator: not_contains
    pattern: npm test|pytest|cargo test
---

⚠️ **Tests not detected in transcript!**

Before stopping, please run tests to verify your changes work correctly.

Run appropriate test command:
- JavaScript/TypeScript: `npm test`
- Python: `pytest`
- Rust: `cargo test`
```

Agent SDK Verifier

Verify Python Agent SDK Application

# Agent: agent-sdk-verifier-py
# Automatically runs after /new-sdk-app or invoke manually:

# "Verify my Python Agent SDK application"

# Verification checks:
# ✓ SDK installation and version (claude-agent-sdk>=1.0.0)
# ✓ Python environment (requirements.txt, pyproject.toml)
# ✓ Correct SDK imports and usage
# ✓ Agent initialization and configuration
# ✓ Environment variables (.env, API keys)
# ✓ Error handling patterns
# ✓ Documentation completeness

# Example output:
# PASS WITH WARNINGS
#
# Critical Issues: None
#
# Warnings:
# 1. Consider adding error handling for API failures
#    See: https://docs.claude.com/en/api/agent-sdk/error-handling
#
# Passed Checks:
# ✓ SDK version 1.2.0 installed
# ✓ requirements.txt present with correct dependencies
# ✓ Agent properly initialized with API key
# ✓ .env.example provided
# ✓ .gitignore excludes sensitive files

Verify TypeScript Agent SDK Application

// Agent: agent-sdk-verifier-ts
// Automatically runs after /new-sdk-app or invoke manually:

// "Verify my TypeScript Agent SDK application"

// Verification checks:
// ✓ SDK installation and version (@anthropic-ai/claude-agent-sdk)
// ✓ TypeScript configuration (tsconfig.json)
// ✓ Correct SDK imports and usage
// ✓ Type safety (proper TypeScript types)
// ✓ Agent initialization and configuration
// ✓ Environment variables (.env, API keys)
// ✓ Error handling patterns
// ✓ Documentation completeness

// Example output:
// PASS
//
// All checks passed:
// ✓ SDK version 1.2.0 installed
// ✓ tsconfig.json properly configured
// ✓ All imports type-safe
// ✓ Agent initialized with proper types
// ✓ .env.example provided
// ✓ Error boundaries implemented
// ✓ README.md complete

Code Explorer Agent

# Launch code-explorer agent to trace feature implementation
# Agent analyzes execution paths, data flow, and architecture

# Example usage in conversation:
"Launch code-explorer to trace how authentication works"

# Agent output:
# Entry Points:
# 1. src/routes/auth.ts:15 - POST /auth/login endpoint
# 2. src/middleware/authMiddleware.ts:8 - authenticateRequest()
#
# Execution Flow:
# Step 1: POST /auth/login receives credentials
#   → src/routes/auth.ts:15-28
# Step 2: AuthService.login() validates credentials
#   → src/auth/AuthService.ts:45-67
# Step 3: JWT token generated and signed
#   → src/auth/JwtService.ts:23-34
# Step 4: Token stored in session
#   → src/session/SessionManager.ts:89-102
#
# Key Components:
# - AuthService: Credential validation, user lookup
# - JwtService: Token generation, signing, verification
# - SessionManager: Session storage, retrieval, cleanup
#
# Architecture Pattern: Service layer pattern with middleware
# Data Flow: HTTP → Route → Service → Database
#
# Essential Files to Read:
# - src/auth/AuthService.ts (core logic)
# - src/auth/JwtService.ts (token handling)
# - src/middleware/authMiddleware.ts (request auth)

Code Architect Agent

# Launch code-architect agent to design feature implementation
# Agent analyzes patterns, designs architecture, provides blueprint

# Example usage:
"Launch code-architect to design the caching layer"

# Agent output:
# Patterns Found in Codebase:
# - Service layer pattern (AuthService, UserService, etc.)
# - Dependency injection via constructor
# - Configuration via environment variables
# - Repository pattern for data access
#
# Architecture Decision: Redis-backed caching service
#
# Rationale:
# - Matches existing service layer pattern
# - Redis already used for sessions (no new dependency)
# - Supports TTL expiration (matches requirement)
# - Easy integration with existing services
#
# Component Design:
#
# 1. CacheService (src/cache/CacheService.ts)
#    - get(key: string): Promise<T | null>
#    - set(key: string, value: T, ttl?: number): Promise<void>
#    - delete(key: string): Promise<void>
#    - clear(pattern: string): Promise<void>
#
# 2. CacheMiddleware (src/middleware/cacheMiddleware.ts)
#    - Intercepts GET requests
#    - Checks cache before handler
#    - Updates cache after handler
#
# 3. CacheConfig (src/config/cache.ts)
#    - Redis connection settings
#    - Default TTL values
#    - Cache key prefixes
#
# Implementation Map:
# - Create: src/cache/CacheService.ts
# - Create: src/cache/RedisAdapter.ts
# - Create: src/middleware/cacheMiddleware.ts
# - Create: src/config/cache.ts
# - Modify: src/services/ApiService.ts (integrate caching)
# - Modify: src/app.ts (register middleware)
#
# Build Sequence:
# Phase 1: RedisAdapter + CacheService
# Phase 2: CacheMiddleware
# Phase 3: Integration with ApiService
# Phase 4: Configuration and testing

Code Reviewer Agent

# Launch code-reviewer agent after writing code
# Agent checks CLAUDE.md compliance, bugs, code quality

# Example usage:
"Launch code-reviewer to check my recent changes"

# Agent output:
# Code Review Results
#
# Critical Issues (Confidence: 90):
# 1. Missing error handling in OAuth callback
#    File: src/auth/oauth.ts:67-72
#    Issue: OAuth provider failures not caught
#    Fix: Wrap callback in try-catch, handle errors gracefully
#    CLAUDE.md Reference: "Always handle OAuth errors" (CLAUDE.md:45)
#
# 2. Memory leak: OAuth state not cleaned up
#    File: src/auth/oauth.ts:88-95
#    Issue: State map grows unbounded, never cleared
#    Fix: Add cleanup in finally block or use TTL-based Map
#    Bug Type: Resource leak
#
# Important Issues (Confidence: 75):
# 1. Inconsistent naming pattern
#    File: src/utils.ts:23-28
#    Issue: Function uses snake_case instead of camelCase
#    CLAUDE.md Reference: "Use camelCase for functions" (conventions/CLAUDE.md:12)
#
# Low Priority (Confidence: 50):
# 1. Could simplify token refresh logic
#    File: src/auth/oauth.ts:120-135
#    Note: Consider extracting to separate function
#
# Passed Checks:
# ✓ All tests pass
# ✓ No TypeScript errors
# ✓ Follows project file structure
# ✓ No security vulnerabilities detected
#
# Recommendation: Fix critical issues before merging

Plugin Structure

Plugin Configuration File

{
  "$schema": "https://anthropic.com/claude-code/plugin.schema.json",
  "name": "my-plugin",
  "version": "1.0.0",
  "description": "My custom Claude Code plugin",
  "author": {
    "name": "Your Name",
    "email": "your.email@example.com"
  }
}

Standard Plugin Directory Structure

my-plugin/
├── .claude-plugin/
│   └── plugin.json          # Plugin metadata
├── commands/                # Slash commands (optional)
│   ├── my-command.md
│   └── another-command.md
├── agents/                  # Specialized agents (optional)
│   ├── my-agent/
│   │   └── agent.md
│   └── another-agent/
│       └── agent.md
├── skills/                  # Reusable agent capabilities (optional)
│   ├── my-skill/
│   │   └── skill.md
│   └── another-skill/
│       └── skill.md
├── hooks/                   # Event handlers (optional)
│   ├── pretooluse/
│   │   └── validator.py
│   └── posttooluse/
│       └── logger.sh
├── hooks-handlers/          # Hook scripts (optional)
│   ├── session-start.sh
│   └── stop.py
├── .mcp.json               # MCP server configuration (optional)
└── README.md               # Plugin documentation

Plugin Marketplace Configuration

{
  "$schema": "https://anthropic.com/claude-code/marketplace.schema.json",
  "name": "my-marketplace",
  "version": "1.0.0",
  "description": "Collection of custom plugins",
  "owner": {
    "name": "Organization Name",
    "email": "contact@example.com"
  },
  "plugins": [
    {
      "name": "plugin-one",
      "description": "First plugin in marketplace",
      "source": "./plugins/plugin-one",
      "category": "development"
    },
    {
      "name": "plugin-two",
      "description": "Second plugin in marketplace",
      "version": "1.0.0",
      "author": {
        "name": "Author Name",
        "email": "author@example.com"
      },
      "source": "./plugins/plugin-two",
      "category": "productivity"
    }
  ]
}

Summary

The Claude Code plugins repository provides a comprehensive suite of tools that extend Claude Code's capabilities for common development workflows. The plugins demonstrate best practices for building extensible, reusable components that integrate seamlessly with the Claude Code agent system. Key plugin categories include git automation (commit-commands), code quality and review (code-review, pr-review-toolkit), feature development workflows (feature-dev with specialized explorer, architect, and reviewer agents), behavior control (hookify for custom prevention rules), and development tooling (agent-sdk-dev for building Agent SDK applications).

The plugin system follows a standardized structure with JSON configuration, markdown-based command definitions, and support for multiple programming languages for hooks and scripts. Plugins can be distributed through marketplace JSON files and are automatically discovered when installed. The repository serves as both a collection of production-ready plugins and a reference implementation showing how to build commands, agents, skills, hooks, and MCP server integrations. Developers can use these plugins directly, customize them for their specific needs, or use them as templates for building entirely new plugins that extend Claude Code's functionality in domain-specific ways.

Go - Code Execution Example

Source: https://ai.google.dev/gemini-api/docs/code-execution

Go example demonstrating code execution setup with the Gemini API. Shows client initialization, configuration, and response handling in Go.

## Go Code Execution Implementation

### Description
Demonstrates enabling code execution using the Go Gemini API client library.

### Example Code
```go
package main

import (
    "context"
    "fmt"
    "os"
    "google.golang.org/genai"
)

func main() {
    ctx := context.Background()
    client, err := genai.NewClient(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }

    config := &genai.GenerateContentConfig{
        Tools: []*genai.Tool{
            {CodeExecution: &genai.ToolCodeExecution{}},
        },
    }

    result, _ := client.Models.GenerateContent(
        ctx,
        "gemini-2.5-flash",
        genai.Text("What is the sum of the first 50 prime numbers? Generate and run code for the calculation, and make sure you get all 50."),
        config,
    )

    fmt.Println(result.Text())
    fmt.Println(result.ExecutableCode())
    fmt.Println(result.CodeExecutionResult())
}
```

Key Points

  • Initialize client with context
  • Create GenerateContentConfig with Tools slice
  • Include ToolCodeExecution in the Tools array
  • Use PascalCase methods: ExecutableCode() and CodeExecutionResult()

--------------------------------

### Install Google GenAI SDK - Go

Source: https://ai.google.dev/gemini-api/docs/libraries

Install the official google.golang.org/genai Go library using go get command. This is the recommended library for Go development with Gemini API, replacing the legacy google.golang.org/generative-ai package.

```bash
go get google.golang.org/genai
```

Guide Generative AI for JSON Output based on Order (Partial Input)

Source: https://ai.google.dev/gemini-api/docs/prompting-strategies

Demonstrates how to guide a generative AI model to produce structured JSON output representing an order. The first example uses a direct instruction, while the second shows how to use an example and a response prefix to achieve a more concise output, omitting items not ordered.

Example 1 (direct instruction). Prompt followed by the model's response:

For the given order, return a JSON object that has the fields cheeseburger, hamburger, fries, or
drink, with the value being the quantity.

Order: A burger and a drink.

{
  "cheeseburger": 0,
  "hamburger": 1,
  "fries": 0,
  "drink": 1
}

Example 2 (in-prompt example plus "Output:" response prefix). Prompt followed by the model's more concise response:

Valid fields are cheeseburger, hamburger, fries, and drink.
Order: Give me a cheeseburger and fries
Output:
{
  "cheeseburger": 1,
  "fries": 1
}
Order: I want two burgers, a drink, and fries.
Output:

{
  "hamburger": 2,
  "drink": 1,
  "fries": 1
}
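
To send the example-plus-prefix prompt above through the API, a minimal sketch using the google-genai Python client shown in the quickstart sections later in this document:

```python
# Sketch: few-shot order prompt with an "Output:" response prefix.
from google import genai

client = genai.Client()

prompt = """Valid fields are cheeseburger, hamburger, fries, and drink.
Order: Give me a cheeseburger and fries
Output:
{ "cheeseburger": 1, "fries": 1 }
Order: I want two burgers, a drink, and fries.
Output:"""

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
)
print(response.text)  # expect concise JSON omitting items not ordered
```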

Guide JSON Order Completion with Examples and Output Prefixes (Gemini)

Source: https://ai.google.dev/gemini-api/docs/prompting-intro

Demonstrates how to refine JSON output from a Gemini model by providing an in-prompt example and an output prefix. This technique helps the model produce more concise JSON, omitting items not explicitly ordered.

Valid fields are cheeseburger, hamburger, fries, and drink.
Order: Give me a cheeseburger and fries
Output:

{ "cheeseburger": 1, "fries": 1 }

Order: I want two burgers, a drink, and fries.
Output:
  
{
  "hamburger": 2,
  "drink": 1,
  "fries": 1
}

Install MCP SDK

Source: https://ai.google.dev/gemini-api/docs/function-calling/tutorial

This snippet provides commands to install the Model Context Protocol (MCP) SDK, which is essential for interacting with MCP servers. The Python example uses 'pip', while the JavaScript example uses 'npm'.

pip install mcp
npm install @modelcontextprotocol/sdk
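
For orientation, a minimal sketch of talking to a stdio MCP server with the Python SDK installed above; the server script name is hypothetical, and the client API shown follows the mcp package's documented stdio client (verify against your SDK version):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Hypothetical MCP server launched as a subprocess over stdio.
    params = StdioServerParameters(command="python", args=["my_mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```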

Install PyAudio for audio streaming with pip

Source: https://ai.google.dev/gemini-api/docs/live

Install PyAudio library required for streaming audio from microphone. Additional system-level dependencies like portaudio may be required depending on your operating system.

pip install pyaudio

JavaScript - Code Execution Example

Source: https://ai.google.dev/gemini-api/docs/code-execution

JavaScript example showing how to enable code execution using the Gemini API. Demonstrates client configuration, request setup, and response part handling.

## JavaScript Code Execution Implementation

### Description
Demonstrates enabling code execution using the JavaScript Gemini API client library.

### Example Code
```javascript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({});

let response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: [
    "What is the sum of the first 50 prime numbers? Generate and run code for the calculation, and make sure you get all 50."
  ],
  config: {
    tools: [{ codeExecution: {} }]
  }
});

const parts = response?.candidates?.[0]?.content?.parts || [];
parts.forEach((part) => {
  if (part.text) {
    console.log(part.text);
  }
  if (part.executableCode && part.executableCode.code) {
    console.log(part.executableCode.code);
  }
  if (part.codeExecutionResult && part.codeExecutionResult.output) {
    console.log(part.codeExecutionResult.output);
  }
});
```

Key Points

  • Import GoogleGenAI from @google/genai package
  • Configure tools with codeExecution object
  • Use camelCase naming: executableCode and codeExecutionResult
  • Access response using optional chaining and nullish coalescing

--------------------------------

### Install Google GenAI SDK - C#/.NET

Source: https://ai.google.dev/gemini-api/docs/libraries

Install the official Google.GenAI C# library using dotnet command. This is the recommended library for .NET development with Gemini API.

```bash
dotnet add package Google.GenAI
```

Initialize and Start Live Audio Application - JavaScript

Source: https://ai.google.dev/gemini-api/docs/live

Entry point function that starts both the message processing loop and audio playback loop concurrently, then initiates the Gemini Live API connection. Handles errors at the top level and manages the complete lifecycle of the real-time audio streaming application.

// Start loops
messageLoop();
playbackLoop();

live().catch(console.error);

Control Response Format and Detail Level with System Instructions (Gemini)

Source: https://ai.google.dev/gemini-api/docs/prompting-intro

This example shows how to use a system instruction to guide the Gemini model's conversational style and detail level. The instruction ensures comprehensive answers unless the user specifically requests a concise response.

System instruction:

All questions should be answered comprehensively with details, unless the user requests a concise response specifically.

Prompt:

What is a smart way to make a business that sells DVD's in 2025?

Response:

Starting a business selling DVDs in 2025 presents significant challenges due to the overwhelming dominance of digital streaming and on-demand services; a "smart" approach would therefore require a highly specialized niche, a strong value proposition beyond just the disc, and likely a focus on community or collector markets rather than mainstream consumption.

Here's a breakdown of what a smart DVD business in 2025 would likely entail:

1. Extreme Niche Specialization: You cannot compete with streaming libraries. Your business must focus on what digital doesn't offer easily or at all. This could include:

    * Out-of-Print & Rare Titles: Sourcing films or TV shows that have never been released digitally or are currently unavailable on major streaming platforms.
    ....
  

Initialize Node.js Audio Queues and Speaker Setup

Source: https://ai.google.dev/gemini-api/docs/live

This snippet initializes arrays to act as queues for incoming API responses and outgoing audio data. It also defines helper functions, waitMessage to asynchronously retrieve messages from a queue, and createSpeaker to set up and manage the audio speaker with error handling. This is part of the main live function for Node.js.

const responseQueue = [];
const audioQueue = [];
let speaker;

async function waitMessage() {
  while (responseQueue.length === 0) {
    await new Promise((resolve) => setImmediate(resolve));
  }
  return responseQueue.shift();
}

function createSpeaker() {
  if (speaker) {
    process.stdin.unpipe(speaker);
    speaker.end();
  }
  speaker = new Speaker({
    channels: 1,
    bitDepth: 16,
    sampleRate: 24000,
  });
  speaker.on('error', (err) => console.error('Speaker error:', err));
  process.stdin.pipe(speaker);
}

Establish connection to Gemini Live API

Source: https://ai.google.dev/gemini-api/docs/live-guide

This example demonstrates how to create an asynchronous connection to the Gemini Live API using an API key. It initializes the client, specifies the model and response modalities (AUDIO), and opens a session for interaction.

import asyncio
from google import genai

client = genai.Client()

model = "gemini-2.5-flash-native-audio-preview-12-2025"
config = {"response_modalities": ["AUDIO"]}

async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        print("Session started")
        # Send content...

if __name__ == "__main__":
    asyncio.run(main())
import { GoogleGenAI, Modality } from '@google/genai';

const ai = new GoogleGenAI({});
const model = 'gemini-2.5-flash-native-audio-preview-12-2025';
const config = { responseModalities: [Modality.AUDIO] };

async function main() {

  const session = await ai.live.connect({
    model: model,
    callbacks: {
      onopen: function () {
        console.debug('Opened');
      },
      onmessage: function (message) {
        console.debug(message);
      },
      onerror: function (e) {
        console.debug('Error:', e.message);
      },
      onclose: function (e) {
        console.debug('Close:', e.reason);
      },
    },
    config: config,
  });

  console.debug("Session started");
  // Send content...

  session.close();
}

main();

Lyria RealTime Prompt Examples

Source: https://ai.google.dev/gemini-api/docs/music-generation

Comprehensive guide for prompting Lyria RealTime with supported instruments, music genres, and mood descriptions. Use descriptive language combining multiple prompt elements for optimal results.

## POST /session/generateMusic

### Description
Generate music using Lyria RealTime with descriptive prompts. Prompts can include instruments, music genres, and mood/description elements. Be descriptive and iterative for best results.

### Supported Prompt Elements

#### Instruments
303 Acid Bass, 808 Hip Hop Beat, Accordion, Alto Saxophone, Bagpipes, Balalaika Ensemble, Banjo, Bass Clarinet, Bongos, Boomy Bass, Bouzouki, Buchla Synths, Cello, Charango, Clavichord, Conga Drums, Didgeridoo, Dirty Synths, Djembe, Drumline, Dulcimer, Fiddle, Flamenco Guitar, Funk Drums, Glockenspiel, Guitar, Hang Drum, Harmonica, Harp, Harpsichord, Hurdy-gurdy, Kalimba, Koto, Lyre, Mandolin, Maracas, Marimba, Mbira, Mellotron, Metallic Twang, Moog Oscillations, Ocarina, Persian Tar, Pipa, Precision Bass, Ragtime Piano, Rhodes Piano, Shamisen, Shredding Guitar, Sitar, Slide Guitar, Smooth Pianos, Spacey Synths, Steel Drum, Synth Pads, Tabla, TR-909 Drum Machine, Trumpet, Tuba, Vibraphone, Viola Ensemble, Warm Acoustic Guitar, Woodwinds

#### Music Genres
Acid Jazz, Afrobeat, Alternative Country, Baroque, Bengal Baul, Bhangra, Bluegrass, Blues Rock, Bossa Nova, Breakbeat, Celtic Folk, Chillout, Chiptune, Classic Rock, Contemporary R&B, Cumbia, Deep House, Disco Funk, Drum & Bass, Dubstep, EDM, Electro Swing, Funk Metal, G-funk, Garage Rock, Glitch Hop, Grime, Hyperpop, Indian Classical, Indie Electronic, Indie Folk, Indie Pop, Irish Folk, Jam Band, Jamaican Dub, Jazz Fusion, Latin Jazz, Lo-Fi Hip Hop, Marching Band, Merengue, New Jack Swing, Minimal Techno, Moombahton, Neo-Soul, Orchestral Score, Piano Ballad, Polka, Post-Punk, 60s Psychedelic Rock, Psytrance, R&B, Reggae, Reggaeton, Renaissance Music, Salsa, Shoegaze, Ska, Surf Rock, Synthpop, Techno, Trance, Trap Beat, Trip Hop, Vaporwave, Witch house

#### Mood/Description
Acoustic Instruments, Ambient, Bright Tones, Chill, Crunchy Distortion, Danceable, Dreamy, Echo, Emotional, Ethereal Ambience, Experimental, Fat Beats, Funky, Glitchy Effects, Huge Drop, Live Performance, Lo-fi, Ominous Drone, Psychedelic, Rich Orchestration, Saturated Tones, Subdued Melody, Sustained Chords, Swirling Phasers, Tight Groove, Unsettling, Upbeat, Virtuoso, Weird Noises

### Prompting Best Practices
- **Be descriptive**: Use adjectives describing mood, genre, and instrumentation
- **Iterate gradually**: Rather than completely changing the prompt, add or modify elements to morph the music smoothly
- **Use WeightedPrompt**: Experiment with weight parameter to influence how strongly a new prompt affects the ongoing generation

### Example Prompts
- "Upbeat Jazz Fusion with bright tones and tight groove"
- "Lo-fi Hip Hop with chill ambient vibes and acoustic instruments"
- "EDM with huge drop and crunchy distortion"

Detailed Prompts for Generating Product Photography with Gemini API

Source: https://ai.google.dev/gemini-api/docs/image-generation

These prompt examples illustrate how to craft precise text descriptions to guide the Gemini API in generating high-quality, studio-lit product images. They cover elements like product details, background, lighting, camera angles, and specific features to highlight, ensuring ultra-realistic output.

A high-resolution, studio-lit product photograph of a [product description]
on a [background surface/description]. The lighting is a [lighting setup,
e.g., three-point softbox setup] to [lighting purpose]. The camera angle is
a [angle type] to showcase [specific feature]. Ultra-realistic, with sharp
focus on [key detail]. [Aspect ratio].

A high-resolution, studio-lit product photograph of a minimalist ceramic
coffee mug in matte black, presented on a polished concrete surface. The
lighting is a three-point softbox setup designed to create soft, diffused
highlights and eliminate harsh shadows. The camera angle is a slightly
elevated 45-degree shot to showcase its clean lines. Ultra-realistic, with
sharp focus on the steam rising from the coffee. Square image.
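
To render a prompt like this, you could pass it to the image-capable model used by the video example later in this document; the model name, IMAGE response modality, and parts[0].as_image() accessor are all borrowed from that example rather than from this section:

```python
# Sketch: send the studio product-photography prompt to an image-capable model.
from google import genai

client = genai.Client()

prompt = (
    "A high-resolution, studio-lit product photograph of a minimalist ceramic "
    "coffee mug in matte black, presented on a polished concrete surface. "
    "Ultra-realistic, with sharp focus on the steam rising from the coffee. "
    "Square image."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=prompt,
    config={"response_modalities": ["IMAGE"]},
)
image = response.parts[0].as_image()  # same accessor as the Veo example below
```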

Prompt Templates for Character Image Generation

Source: https://ai.google.dev/gemini-api/docs/image-generation

These examples provide text-based prompts used to guide the Gemini API in generating specific character views. The first is a generic template, while the second is a concrete example for generating a profile view of a man.

A studio portrait of [person] against [background], [looking forward/in profile looking right/etc.]

A studio portrait of this man against white, in profile looking right


Import Gemini API and Audio Modules in Node.js

Source: https://ai.google.dev/gemini-api/docs/live_example=mic-stream

Imports the required modules for interacting with the Google Gemini API and for handling audio input/output in Node.js. This sets up the environment for building real-time audio applications using the @google/genai, mic, and speaker libraries.

import { GoogleGenAI, Modality } from '@google/genai';
import mic from 'mic';
import Speaker from 'speaker';

Python - Code Execution Example

Source: https://ai.google.dev/gemini-api/docs/code-execution

Python example demonstrating how to enable and use code execution with the Gemini API client. Shows how to configure the code execution tool and process the response parts.

## Python Code Execution Implementation

### Description
Demonstrates enabling code execution using the Python Gemini API client library.

### Example Code
```python
from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is the sum of the first 50 prime numbers? Generate and run code for the calculation, and make sure you get all 50.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())]
    ),
)

for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    if part.executable_code is not None:
        print(part.executable_code.code)
    if part.code_execution_result is not None:
        print(part.code_execution_result.output)
```

Key Points

  • Import genai and types from the google.genai module
  • Configure GenerateContentConfig with tools parameter
  • Include ToolCodeExecution in the tools array
  • Access response parts: text, executable_code, and code_execution_result

--------------------------------

### Initialize Gemini API and Configure Live Session in Node.js

Source: https://ai.google.dev/gemini-api/docs/live_example=mic-stream

Initializes the GoogleGenAI client and defines the model and configuration for a real-time Gemini API session, including response modalities (audio) and system instructions. It also includes an important security warning about using API keys in client-side applications.

```javascript
const ai = new GoogleGenAI({});
// WARNING: Do not use API keys in client-side (browser based) applications
// Consider using Ephemeral Tokens instead
// More information at: https://ai.google.dev/gemini-api/docs/ephemeral-tokens

// --- Live API config ---
const model = 'gemini-2.5-flash-native-audio-preview-12-2025';
const config = {
  responseModalities: [Modality.AUDIO],
  systemInstruction: "You are a helpful and friendly AI assistant.",
};
```

Initialize Agent with Screenshot and User Prompt

Source: https://ai.google.dev/gemini-api/docs/computer-use_hl=ar

Captures an initial screenshot of the page and creates the initial content message containing the user prompt and screenshot image. This establishes the starting state for the agent loop with both text instructions and visual context. Returns a list with one Content object containing user role, text prompt, and PNG image data.

page.goto("https://ai.google.dev/gemini-api/docs")
initial_screenshot = page.screenshot(type="png")
USER_PROMPT = "Go to ai.google.dev/gemini-api/docs and search for pricing."
print(f"Goal: {USER_PROMPT}")

contents = [
    Content(role="user", parts=[
        Part(text=USER_PROMPT),
        Part.from_bytes(data=initial_screenshot, mime_type='image/png')
    ])
]

Configure Gemini API for Live Audio Streaming

Source: https://ai.google.dev/gemini-api/docs/live

This snippet defines the model and configuration for connecting to the Gemini API with live audio capabilities. It specifies the model version, response modalities (AUDIO), and a system instruction for the AI assistant. This configuration is used for both Python and Node.js implementations.

MODEL = "gemini-2.5-flash-native-audio-preview-12-2025"
CONFIG = {
    "response_modalities": ["AUDIO"],
    "system_instruction": "You are a helpful and friendly AI assistant.",
}
const ai = new GoogleGenAI({});
// WARNING: Do not use API keys in client-side (browser based) applications
// Consider using Ephemeral Tokens instead
// More information at: https://ai.google.dev/gemini-api/docs/ephemeral-tokens

// --- Live API config ---
const model = 'gemini-2.5-flash-native-audio-preview-12-2025';
const config = {
  responseModalities: [Modality.AUDIO],
  systemInstruction: "You are a helpful and friendly AI assistant.",
};

Install Google GenAI SDK Libraries

Source: https://ai.google.dev/gemini-api/docs/migrate

This snippet provides installation commands for the new Google GenAI SDK across Python, JavaScript, and Go. The Google GenAI SDK offers an improved developer experience and is recommended for all new and migrating projects.

pip install -U -q "google-genai"
npm install @google/genai
go get google.golang.org/genai

Go SDK - Generate Content with Multiple Images

Source: https://ai.google.dev/gemini-api/docs/image-understanding

Go implementation example using the Google GenAI SDK to generate content with multiple images. Demonstrates file uploads and inline image data.

## Go SDK Implementation

### Description
Generate content with multiple images using the Go Google GenAI SDK. Supports file uploads and inline image data.

### Method
`client.Models.GenerateContent()`

### Parameters
- **ctx** (context.Context) - Required - Context for the request
- **model** (string) - Required - Model identifier (e.g., "gemini-2.5-flash")
- **contents** ([]*genai.Content) - Required - Array of content objects
- **opts** - Optional - Additional options

### Code Example
```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/google/generative-ai-go/genai"
)

func main() {
	ctx := context.Background()
	client, _ := genai.NewClient(ctx, nil)

	// Upload the first image
	image1Path := "path/to/image1.jpg"
	uploadedFile, _ := client.Files.UploadFromPath(ctx, image1Path, nil)

	// Prepare the second image as inline data
	image2Path := "path/to/image2.jpeg"
	imgBytes, _ := os.ReadFile(image2Path)

	parts := []*genai.Part{
		genai.NewPartFromText("What is different between these two images?"),
		genai.NewPartFromBytes(imgBytes, "image/jpeg"),
		genai.NewPartFromURI(uploadedFile.URI, uploadedFile.MIMEType),
	}

	contents := []*genai.Content{
		genai.NewContentFromParts(parts, genai.RoleUser),
	}

	result, _ := client.Models.GenerateContent(
		ctx,
		"gemini-2.5-flash",
		contents,
		nil,
	)

	fmt.Println(result.Text())
}
```

Key Functions

  • client.Files.UploadFromPath() - Upload image file and get reference
  • genai.NewPartFromText() - Create text part
  • genai.NewPartFromBytes() - Create inline image part from bytes
  • genai.NewPartFromURI() - Create image part from file URI
  • genai.NewContentFromParts() - Create content from parts
  • client.Models.GenerateContent() - Generate content with combined inputs

Response

  • result.Text() - Generated text response

Notes

  • Parts can be added in any order
  • Use context for request cancellation and timeouts
  • Error handling should be implemented for production use

--------------------------------

### Enable Model Audio Input Transcription - Python and JavaScript

Source: https://ai.google.dev/gemini-api/docs/live-guide

Configure the Gemini API to receive transcriptions of the user's audio input by setting `input_audio_transcription` in the setup config. The example loads audio data from a PCM file, sends it to the model via a live session, and retrieves the transcription of the input audio.

```python
import asyncio
from pathlib import Path
from google import genai
from google.genai import types

client = genai.Client()
model = "gemini-2.5-flash-native-audio-preview-12-2025"

config = {
    "response_modalities": ["AUDIO"],
    "input_audio_transcription": {}
}

async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        audio_data = Path("16000.pcm").read_bytes()

        await session.send_realtime_input(
            audio=types.Blob(data=audio_data, mime_type='audio/pcm;rate=16000')
        )

        async for msg in session.receive():
            if msg.server_content.input_transcription:
                print('Transcript:', msg.server_content.input_transcription.text)

if __name__ == "__main__":
    asyncio.run(main())
```

```javascript
import { GoogleGenAI, Modality } from '@google/genai';
import * as fs from "node:fs";
import pkg from 'wavefile';
const { WaveFile } = pkg;

const ai = new GoogleGenAI({});
const model = 'gemini-2.5-flash-native-audio-preview-12-2025';

const config = {
  responseModalities: [Modality.AUDIO],
  inputAudioTranscription: {}
};

async function live() {
  const responseQueue = [];

  async function waitMessage() {
    let done = false;
    let message = undefined;
    while (!done) {
      message = responseQueue.shift();
      if (message) {
        done = true;
      } else {
        await new Promise((resolve) => setTimeout(resolve, 100));
      }
    }
    return message;
  }

  async function handleTurn() {
    const turns = [];
    let done = false;
    while (!done) {
      const message = await waitMessage();
      turns.push(message);
      if (message.serverContent && message.serverContent.turnComplete) {
        done = true;
      }
    }
    return turns;
  }
```

Initialize Gemini Client and Prepare Inpainting Prompt (Python)

Source: https://ai.google.dev/gemini-api/docs/image-generation

This Python snippet initializes the Google GenAI client and uses the Pillow library to load a base image. It defines the natural language instruction that will guide the Gemini model in performing semantic inpainting, specifically targeting the modification of a blue sofa. Dependencies include the google-generativeai and Pillow libraries, and an image file at the specified path is required as input.

from google import genai
from google.genai import types
from PIL import Image

client = genai.Client()

# Base image prompt: "A wide shot of a modern, well-lit living room with a prominent blue sofa in the center. A coffee table is in front of it and a large window is in the background."
living_room_image = Image.open('/path/to/your/living_room.png')
text_input = """Using the provided image of a living room, change only the blue sofa to be a vintage, brown leather chesterfield sofa. Keep the rest of the room, including the pillows on the sofa and the lighting, unchanged."""

Gemini API Client - Go

Source: https://ai.google.dev/gemini-api/docs/quickstart

Initialize the Gemini API client in Go and generate content from a text prompt. The client automatically retrieves the API key from the GEMINI_API_KEY environment variable.

## Go Client Example

### Description
Demonstrates how to use the Go Gemini API client to generate content.

### Usage
```go
package main

import (
    "context"
    "fmt"
    "log"
    "google.golang.org/genai"
)

func main() {
    ctx := context.Background()
    // The client gets the API key from the environment variable `GEMINI_API_KEY`.
    client, err := genai.NewClient(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }

    result, err := client.Models.GenerateContent(
        ctx,
        "gemini-2.5-flash",
        genai.Text("Explain how AI works in a few words"),
        nil,
    )
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(result.Text())
}
```

Method Signature

  • client.Models.GenerateContent(ctx, model, content, options) - Generates content using the specified model
    • ctx (context.Context) - Required - Context for the request
    • model (string) - Required - The model identifier (e.g., "gemini-2.5-flash")
    • content (genai.Part) - Required - The content to send (use genai.Text() for text)
    • options (interface{}) - Optional - Additional options

Returns

  • result.Text() (string) - The generated text response from the model

--------------------------------

### Python: Generate Video from Initial Image with Gemini API

Source: https://ai.google.dev/gemini-api/docs/video_hl=it

This Python example demonstrates a two-step process: first, generating an initial image using the `gemini-2.5-flash-image` model based on a prompt. Subsequently, this generated image is used as a starting frame to generate a video with the `veo-3.1-generate-preview` model. The script then polls for the video generation to complete.

```python
import time
from google import genai

client = genai.Client()

prompt = "Panning wide shot of a calico kitten sleeping in the sunshine"

# Step 1: Generate an image with Nano Banana.
image = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=prompt,
    config={"response_modalities":['IMAGE']}
)

# Step 2: Generate video with Veo 3.1 using the image.
operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",
    prompt=prompt,
    image=image.parts[0].as_image(),
)

# Poll the operation status until the video is ready.
while not operation.done:
    print("Waiting for video generation to complete...")
    time.sleep(10)
    operation = client.operations.get(operation)
```
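
The excerpt ends once polling completes, before the video is retrieved. A hedged continuation, with attribute and method names taken from the SDK's Veo examples (verify against your SDK version):

```python
# Sketch: download and save the result once operation.done is True.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("kitten_video.mp4")
```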


Close Reading Protocol User Prompt Example

Source: https://ai.google.dev/gemini-api/docs/learnlm

An example user prompt showing how a student initiates interaction with the close reading tutor, demonstrating the conversational entry point for the 4 A's protocol learning activity.

hey

Configure Code Execution Tool - Go

Source: https://ai.google.dev/gemini-api/docs/code-execution

Create a Gemini client and set up code execution by adding a ToolCodeExecution to the GenerateContentConfig. Call GenerateContent with the model name and text prompt, then access the result's text, executable code, and execution results.

package main

import (
    "context"
    "fmt"
    "os"
    "google.golang.org/genai"
)

func main() {

    ctx := context.Background()
    client, err := genai.NewClient(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }

    config := &genai.GenerateContentConfig{
        Tools: []*genai.Tool{
            {CodeExecution: &genai.ToolCodeExecution{}},
        },
    }

    result, _ := client.Models.GenerateContent(
        ctx,
        "gemini-2.5-flash",
        genai.Text("What is the sum of the first 50 prime numbers? " +
                  "Generate and run code for the calculation, and make sure you get all 50."),
        config,
    )

    fmt.Println(result.Text())
    fmt.Println(result.ExecutableCode())
    fmt.Println(result.CodeExecutionResult())
}

Install Google GenAI SDK - JavaScript/TypeScript

Source: https://ai.google.dev/gemini-api/docs/libraries

Install the official @google/genai JavaScript/TypeScript library using npm package manager. This is the recommended library for Node.js and browser-based JavaScript development with Gemini API.

npm install @google/genai

Configure Gemini Live API client with PyAudio for microphone streaming

Source: https://ai.google.dev/gemini-api/docs/live

Initialize Gemini client and configure PyAudio settings for real-time microphone audio streaming. Sets audio format to 16-bit PCM, 16kHz mono for input and 24kHz for received audio. This server-side example streams audio from the microphone and plays returned audio responses.

import asyncio
from google import genai
import pyaudio

client = genai.Client()

# --- pyaudio config ---
FORMAT = pyaudio.paInt16
CHANNELS = 1
SEND_SAMPLE_RATE = 16000
RECEIVE_SAMPLE_RATE = 24000
CHUNK_SIZE = 1024

pya = pyaudio.PyAudio()
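
As a sketch of how these constants are used (the stream setup itself is not part of the excerpt), a PyAudio input stream opened with the settings above:

```python
# Sketch: open a microphone input stream with the constants defined above.
stream = pya.open(
    format=FORMAT,
    channels=CHANNELS,
    rate=SEND_SAMPLE_RATE,
    input=True,
    frames_per_buffer=CHUNK_SIZE,
)

# One chunk of 16-bit PCM mono audio, ready to send to the Live API.
data = stream.read(CHUNK_SIZE)
```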

Upload Audio File - Go Implementation

Source: https://ai.google.dev/gemini-api/docs/audio

Upload an audio file and generate content using the Gemini API with the Go SDK. This example shows context-based initialization and error handling.

## Go: Upload and Process Audio with Gemini

### Code Example
```go
package main

import (
  "context"
  "fmt"
  "os"
  "google.golang.org/genai"
)

func main() {
  ctx := context.Background()
  client, err := genai.NewClient(ctx, nil)
  if err != nil {
      log.Fatal(err)
  }

  // Upload audio file from path
  localAudioPath := "/path/to/sample.mp3"
  uploadedFile, _ := client.Files.UploadFromPath(
      ctx,
      localAudioPath,
      nil,
  )

  // Create parts: text prompt and file reference
  parts := []*genai.Part{
      genai.NewPartFromText("Describe this audio clip"),
      genai.NewPartFromURI(uploadedFile.URI, uploadedFile.MIMEType),
  }
  
  // Create content from parts
  contents := []*genai.Content{
      genai.NewContentFromParts(parts, genai.RoleUser),
  }

  // Generate content
  result, _ := client.Models.GenerateContent(
      ctx,
      "gemini-2.5-flash",
      contents,
      nil,
  )

  fmt.Println(result.Text())
}

Steps

  1. Create background context
  2. Initialize Gemini client with context
  3. Upload file using UploadFromPath()
  4. Create parts array with text and file URI
  5. Create content from parts with user role
  6. Call GenerateContent() with context, model, and contents
  7. Output result using Text()

Requirements

  • Google GenAI Go SDK
  • Valid Gemini API key in environment

--------------------------------

### Gemini API Client - C#

Source: https://ai.google.dev/gemini-api/docs/quickstart

Initialize the Gemini API client in C# and asynchronously generate content from a text prompt. The client automatically retrieves the API key from the GEMINI_API_KEY environment variable.

## C# Client Example

### Description
Demonstrates how to use the C# Gemini API client to generate content asynchronously.

### Usage
```csharp
using System;
using System.Threading.Tasks;
using Google.GenAI;
using Google.GenAI.Types;

public class GenerateContentSimpleText {
  public static async Task Main() {
    // The client gets the API key from the environment variable `GEMINI_API_KEY`.
    var client = new Client();
    var response = await client.Models.GenerateContentAsync(
      model: "gemini-2.5-flash",
      contents: "Explain how AI works in a few words"
    );
    Console.WriteLine(response.Candidates[0].Content.Parts[0].Text);
  }
}
```

Method Signature

  • client.Models.GenerateContentAsync(model, contents) - Asynchronously generates content using the specified model
    • model (string) - Required - The model identifier (e.g., "gemini-2.5-flash")
    • contents (string) - Required - The text prompt to send to the model

Returns

  • response.Candidates[0].Content.Parts[0].Text (string) - The generated text response from the model

--------------------------------

### Enable Model Audio Output Transcription - Python and JavaScript

Source: https://ai.google.dev/gemini-api/docs/live-guide

Configure the Gemini API to receive transcriptions of the model's audio responses. Requires setting `output_audio_transcription` in the setup config, with transcription language automatically inferred from the model's response. The example establishes a live session, sends a text message, and receives both model output and its transcription.

```python
import asyncio
from google import genai
from google.genai import types

client = genai.Client()
model = "gemini-2.5-flash-native-audio-preview-12-2025"

config = {
    "response_modalities": ["AUDIO"],
    "output_audio_transcription": {}
}

async def main():
    async with client.aio.live.connect(model=model, config=config) as session:
        message = "Hello? Gemini are you there?"

        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": message}]}, turn_complete=True
        )

        async for response in session.receive():
            if response.server_content.model_turn:
                print("Model turn:", response.server_content.model_turn)
            if response.server_content.output_transcription:
                print("Transcript:", response.server_content.output_transcription.text)

if __name__ == "__main__":
    asyncio.run(main())
```

```javascript
import { GoogleGenAI, Modality } from '@google/genai';

const ai = new GoogleGenAI({});
const model = 'gemini-2.5-flash-native-audio-preview-12-2025';

const config = {
  responseModalities: [Modality.AUDIO],
  outputAudioTranscription: {}
};

async function live() {
  const responseQueue = [];

  async function waitMessage() {
    let done = false;
    let message = undefined;
    while (!done) {
      message = responseQueue.shift();
      if (message) {
        done = true;
      } else {
        await new Promise((resolve) => setTimeout(resolve, 100));
      }
    }
    return message;
  }

  async function handleTurn() {
    const turns = [];
    let done = false;
    while (!done) {
      const message = await waitMessage();
      turns.push(message);
      if (message.serverContent && message.serverContent.turnComplete) {
        done = true;
      }
    }
    return turns;
  }

  const session = await ai.live.connect({
    model: model,
    callbacks: {
      onopen: function () {
        console.debug('Opened');
      },
      onmessage: function (message) {
        responseQueue.push(message);
      },
      onerror: function (e) {
        console.debug('Error:', e.message);
      },
      onclose: function (e) {
        console.debug('Close:', e.reason);
      }
    },
    config: config
  });

  const inputTurns = 'Hello how are you?';
  session.sendClientContent({ turns: inputTurns });

  const turns = await handleTurn();

  for (const turn of turns) {
    if (turn.serverContent && turn.serverContent.outputTranscription) {
      console.debug('Received output transcription: %s\n', turn.serverContent.outputTranscription.text);
    }
  }

  session.close();
}

async function main() {
  await live().catch((e) => console.error('got error', e));
}

main();
```

Install Google Generative AI Python client library

Source: https://ai.google.dev/gemini-api/docs/oauth

This pip command installs the google-generativeai Python client library. This library is essential for interacting with the Google Generative Language API from a Python application.

pip install google-generativeai

Install Google GenAI SDK - Python

Source: https://ai.google.dev/gemini-api/docs/libraries

Install the official google-genai Python library using pip package manager. This is the recommended library for Python development with Gemini API, replacing the legacy google-generativeai package.

pip install google-genai

Gemini API Client - Python

Source: https://ai.google.dev/gemini-api/docs/quickstart

Initialize the Gemini API client in Python and generate content from a text prompt. The client automatically retrieves the API key from the GEMINI_API_KEY environment variable.

## Python Client Example

### Description
Demonstrates how to use the Python Gemini API client to generate content.

### Usage
```python
from google import genai

# The client gets the API key from the environment variable `GEMINI_API_KEY`.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Explain how AI works in a few words"
)
print(response.text)
```

Method Signature

  • client.models.generate_content(model, contents) - Generates content using the specified model
    • model (string) - Required - The model identifier (e.g., "gemini-2.5-flash")
    • contents (string) - Required - The text prompt to send to the model

Returns

  • response.text (string) - The generated text response from the model

--------------------------------

### Prompt Example: Generate Finished Car from Sketch

Source: https://ai.google.dev/gemini-api/docs/image-generation_hl=de

This concrete prompt example demonstrates how to apply the 'sketch to photo' template to generate a polished image of a futuristic car from a pencil sketch. It specifies retaining the car's sleek lines while adding metallic blue paint and neon rim lighting, aiming for a showroom-ready concept car appearance.

```text
"Turn this rough pencil sketch of a futuristic car into a polished photo of the finished concept car in a showroom. Keep the sleek lines and low profile from the sketch but add metallic blue paint and neon rim lighting."

Start Deep Research Agent and Poll Results (JavaScript, cURL)

Source: https://ai.google.dev/gemini-api/docs/interactions_hl=bn&ua=chat

These snippets illustrate how to initiate a 'deep-research' agent and then poll its status for completion. The JavaScript example uses the @google/genai library, while the cURL example uses direct HTTP requests. Both methods demonstrate starting an asynchronous task and periodically checking its progress until a final state is reached, retrieving the final output upon completion.

```javascript
import { GoogleGenAI } from '@google/genai';

const client = new GoogleGenAI({});

// 1. Start the Deep Research Agent
const initialInteraction = await client.interactions.create({
    input: 'Research the history of the Google TPUs with a focus on 2025 and 2026.',
    agent: 'deep-research-pro-preview-12-2025',
    background: true
});

console.log(`Research started. Interaction ID: ${initialInteraction.id}`);

// 2. Poll for results
while (true) {
    const interaction = await client.interactions.get(initialInteraction.id);
    console.log(`Status: ${interaction.status}`);

    if (interaction.status === 'completed') {
        console.log('\nFinal Report:\n', interaction.outputs[interaction.outputs.length - 1].text);
        break;
    } else if (['failed', 'cancelled'].includes(interaction.status)) {
        console.log(`Failed with status: ${interaction.status}`);
        break;
    }

    await new Promise(resolve => setTimeout(resolve, 10000));
}
```

```bash
# 1. Start the Deep Research Agent
curl -X POST "https://generativelanguage.googleapis.com/v1beta/interactions" \
-H "Content-Type: application/json" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-d '{
    "input": "Research the history of the Google TPUs with a focus on 2025 and 2026.",
    "agent": "deep-research-pro-preview-12-2025",
    "background": true
}'

# 2. Poll for results (Replace INTERACTION_ID with the ID from the previous interaction)
# curl -X GET "https://generativelanguage.googleapis.com/v1beta/interactions/INTERACTION_ID" \
# -H "x-goog-api-key: $GEMINI_API_KEY"

Image Refinement Prompt Examples

Source: https://ai.google.dev/gemini-api/docs/image-generation?hl=vi

Examples of text prompts to guide the Gemini model in refining sketches or rough drawings into polished, complete images. These prompts demonstrate how to specify style, details, and features.

## Image Refinement Prompt Examples

### Description
These examples illustrate how to craft effective text prompts for the `generateContent` endpoint to transform rough sketches or concepts into detailed and stylized images. You can specify the medium, subject, desired style, features to retain, and new details to add.

### Example Prompt 1: Generic Refinement
Turn this rough [medium] sketch of a [subject] into a [style description] photo. Keep the [specific features] from the sketch but add [new details/materials].

### Example Prompt 2: Detailed Car Concept
"Turn this rough pencil sketch of a futuristic car into a polished photo of the finished concept car in a showroom. Keep the sleek lines and low profile from the sketch but add metallic blue paint and neon rim lighting."

Configure Gemini API for Code Execution in Go

Source: https://ai.google.dev/gemini-api/docs/code-execution?hl=bn

This Go example illustrates how to set up the Gemini API client to activate the code execution feature. It queries the gemini-2.5-flash model with a request that requires computation, attaching a ToolCodeExecution configuration, and prints the generated text, executable code, and code execution results from the model's response.

```go
package main

import (
    "context"
    "fmt"
    "log"
    "google.golang.org/genai"
)

func main() {

    ctx := context.Background()
    client, err := genai.NewClient(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }

    config := &genai.GenerateContentConfig{
        Tools: []*genai.Tool{
            {CodeExecution: &genai.ToolCodeExecution{}},
        },
    }

    result, _ := client.Models.GenerateContent(
        ctx,
        "gemini-2.5-flash",
        genai.Text("What is the sum of the first 50 prime numbers? " +
                  "Generate and run code for the calculation, and make sure you get all 50."),
        config,
    )

    fmt.Println(result.Text())
    fmt.Println(result.ExecutableCode())
    fmt.Println(result.CodeExecutionResult())
}
```

--------------------------------
### Hono Getting Started Guides
Source: https://hono.dev/
These guides provide instructions on how to set up and start using Hono on different platforms and environments. They cover basic setup as well as platform-specific configurations for Cloudflare Workers, Deno, Bun, and Fastly Compute.
```markdown
docs_getting-started_basic.md
docs_getting-started_cloudflare-workers.md
docs_getting-started_cloudflare-pages.md
docs_getting-started_deno.md
docs_getting-started_bun.md
docs_getting-started_fastly-compute.md
```
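As a taste of what those guides cover, here is a minimal Hono app; `new Hono()`, `app.get`, and `c.text` are Hono's documented basics, while the route and greeting are illustrative:
```typescript
import { Hono } from 'hono';

// Minimal Hono app: a single GET route returning plain text
const app = new Hono();

app.get('/', (c) => c.text('Hello Hono!'));

// On Cloudflare Workers or Bun the app is exported as the default fetch handler
export default app;
```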
--------------------------------
### Mastra Installation Guide
Source: https://mastra.ai/docs/observability/logging
A guide on installing Mastra and setting up prerequisites for various LLM providers. It covers the initial setup required to start using the Mastra framework.
--------------------------------
### Mastra Getting Started - Installation Page
Source: https://mastra.ai/examples/scorers/toxicity
Provides front matter details for the Mastra installation guide, including title, description, and file path.
```json
{
  "name": "installation",
  "route": "/en/docs/getting-started/installation",
  "frontMatter": {
    "title": "Installing Mastra | Getting Started | Mastra Docs",
    "description": "Guide on installing Mastra and setting up the necessary prerequisites for running it with various LLM providers.",
    "filePath": "src/content/en/docs/getting-started/installation.mdx"
  },
  "title": "Installation"
}
```
--------------------------------
### Install Mastra AI LLMs
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
Guide on installing Mastra and setting up necessary prerequisites for running it with various LLM providers. This includes instructions for different environments and configurations.
```bash
npm install mastra
```
--------------------------------
### Mastra AI LLM Interactive Installation
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
This snippet describes how to add the MCP Docs Server during the interactive installation of Mastra AI for new projects. It guides users through the setup prompts.
```English
For new projects, the MCP Docs Server can be added during installation either through the interactive setup prompts, or by specifying the -m flag using the
```
--------------------------------
### Getting Started with Keywords AI
Source: https://docs.keywordsai.co/get-started/overview
This snippet provides guidance on getting started with Keywords AI, including a link to a quickstart guide for choosing the right path for AI products and a general description of the platform's benefits.
```markdown
## Getting started with Keywords AI

Keywords AI helps AI teams build reliable products faster through advanced observability, prompt engineering, and evaluation tools.

**Quickstart** (/get-started/llm-inference): Choose the right path for your AI product.
```
--------------------------------
### Mastra AI Getting Started - Installation
Source: https://mastra.ai/docs/deployment/cloud-providers/digital-ocean
Guide on installing Mastra AI and setting up prerequisites for various LLM providers. Includes details on the installation process.
```json
{
  "name": "installation",
  "route": "/en/docs/getting-started/installation",
  "frontMatter": {
    "title": "Installing Mastra | Getting Started | Mastra Docs",
    "description": "Guide on installing Mastra and setting up the necessary prerequisites for running it with various LLM providers.",
    "filePath": "src/content/en/docs/getting-started/installation.mdx"
  },
  "title": "Installation"
}
```
--------------------------------
### Install Mastra AI LLMs
Source: https://mastra.ai/reference/cli/build
Guide on installing Mastra and setting up the necessary prerequisites for running it with various LLM providers. This section covers the initial setup and dependencies required to get started with the Mastra framework.
```bash
npm install mastra
```
--------------------------------
### CLI Operations
Source: https://docs.trychroma.com/docs/overview/getting-started
Guides for performing various operations with the Chroma CLI, including browsing collections, copying collections, database management, and installing sample applications.
```Documentation
Browse Collections
docs/cli/browse
Copy Collections
docs/cli/copy
DB Management
docs/cli/db
Install Sample Apps
docs/cli/sample-apps
Login
docs/cli/login
```
--------------------------------
### Get started with DeepSeek R1
Source: https://ai-sdk.dev/providers/openai-compatible-providers
This guide covers how to get started with DeepSeek R1, focusing on its reasoning capabilities. It likely includes setup instructions and basic usage examples.
--------------------------------
### Mastra Client SDK Setup and Usage
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
Learn how to set up and use the Mastra Client SDK.
```markdown
## MastraClient
Learn how to set up and use the Mastra Client SDK
```
--------------------------------
### CLI Installation
Source: https://docs.trychroma.com/docs/overview/getting-started
Instructions for installing the Chroma CLI tool.
```Documentation
Installing the CLI
docs/cli/install
```
--------------------------------
### React Getting Started Guide
Source: https://clerk.com/docs
Instructions for getting started with Clerk in React applications. This section covers the initial setup process for the Clerk SDK within a React project.
```react
{
  "title": "Getting started",
  "sdk": ["react"],
  "items": [
    { "title": "Set up your Clerk a" }
  ]
}
```
--------------------------------
### Install Chroma
Source: https://docs.trychroma.com/docs/overview/getting-started
This section details how to install the Chroma client. It's a prerequisite for interacting with ChromaDB.
```shell
pip install chromadb
```
--------------------------------
### Install Chroma DB
Source: https://docs.trychroma.com/docs/overview/getting-started
Instructions for installing the Chroma DB client library using various package managers across different environments.
```terminal
pip install chromadb
```
```terminal
poetry add chromadb
```
```terminal
uv pip install chromadb
```
```terminal
npm install chromadb @chroma-core/default-embed
```
```terminal
pnpm add chromadb @chroma-core/default-embed
```
```terminal
yarn add chromadb @chroma-core/default-embed
```
```terminal
bun add chromadb @chroma-core/default-embed
```
--------------------------------
### Next.js Quickstart Guides
Source: https://clerk.com/docs
Guides for getting started with Clerk in Next.js applications, covering both App Router and Pages Router quickstarts. Includes setup instructions and integration steps.
```nextjs
{
  "title": "Quickstart (App Router)",
  "href": "/docs/quickstarts/nextjs",
  "sdk": ["nextjs"],
  "sections": "$1a:props:children:props:manifest:0:0:items:0:0:items:0:0:items:0:0:sections"
}
```
```nextjs
{
  "title": "Quickstart (Pages Router)",
  "href": "/docs/quickstarts/nextjs-pages-router",
  "sdk": ["nextjs"],
  "sections": "$1a:props:children:props:manifest:0:0:items:0:0:items:0:0:items:0:0:sections"
}
```
--------------------------------
### Get started with OpenAI o3-mini
Source: https://ai-sdk.dev/providers/openai-compatible-providers
This guide provides instructions for getting started with OpenAI o3-mini, emphasizing its reasoning capabilities. It likely covers initial setup and basic usage.
--------------------------------
### Create Chroma Client
Source: https://docs.trychroma.com/docs/overview/getting-started
Demonstrates how to initialize a Chroma client to connect to the database. This is the first step after installation to start using Chroma.
```python
import chromadb
client = chromadb.Client()
```
--------------------------------
### Full Example: Querying with Florida (Python)
Source: https://docs.trychroma.com/docs/overview/getting-started
This comprehensive Python example demonstrates setting up a Chroma client, creating or getting a collection, upserting documents, and then querying with a new text. It includes necessary imports and client initialization.
```python
import chromadb

chroma_client = chromadb.Client()

# switch `create_collection` to `get_or_create_collection` to avoid creating a new collection every time
collection = chroma_client.get_or_create_collection(name="my_collection")

# switch `add` to `upsert` to avoid adding the same documents every time
collection.upsert(
    documents=[
        "This is a document about pineapple",
        "This is a document about oranges"
    ],
    ids=["id1", "id2"]
)

results = collection.query(
    query_texts=["This is a query document about florida"],  # Chroma will embed this for you
    n_results=2  # how many results to return
)
print(results)
```
--------------------------------
### Install ChromaDB with uv
Source: https://docs.trychroma.com/docs/overview/getting-started
Install ChromaDB using the uv package installer, a fast alternative to pip.
```bash
uv pip install chromadb
```
--------------------------------
### Android Getting Started and Quickstart
Source: https://clerk.com/docs
Guides for setting up a Clerk account and completing the Android quickstart. Includes references to Clerk SDK and Android specific sections.
```text
Setup your Clerk account: /docs/quickstarts/setup-clerk
Quickstart: /docs/quickstarts/android
```
--------------------------------
### Mastra Server and DB Configuration
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
Guide on configuring the server and database components for Mastra. This includes setup instructions for backend services and data persistence layers.
```markdown
## Server & DB
This section provides guidance on configuring the server infrastructure and database connections for Mastra applications. Ensure proper setup for robust data handling and application performance.
```
--------------------------------
### Full Example: Querying with Florida (TypeScript)
Source: https://docs.trychroma.com/docs/overview/getting-started
This complete TypeScript example shows how to initialize a Chroma client, manage collections (get or create), upsert records, and perform a query. It covers the essential steps for interacting with Chroma.
```typescript
import { ChromaClient } from "chromadb";

const client = new ChromaClient();

// switch `createCollection` to `getOrCreateCollection` to avoid creating a new collection every time
const collection = await client.getOrCreateCollection({
  name: "my_collection",
});

// switch `addRecords` to `upsertRecords` to avoid adding the same documents every time
await collection.upsert({
  documents: [
    "This is a document about pineapple",
    "This is a document about oranges",
  ],
  ids: ["id1", "id2"],
});

const results = await collection.query({
  queryTexts: "This is a query document about florida", // Chroma will embed this for you
  nResults: 2, // how many results to return
});
console.log(results);
```
--------------------------------
### Chroma CLI: Install Sample Apps
Source: https://docs.trychroma.com/docs/cli/db
This section details how to install sample applications using the Chroma CLI. It's a straightforward process to get started with example projects.
```bash
chroma sample-apps install
```
--------------------------------
### iOS Getting Started and Quickstart
Source: https://clerk.com/docs
Guides for setting up a Clerk account and completing the iOS quickstart. Includes references to Clerk SDK and iOS specific sections.
```text
Setup your Clerk account: /docs/quickstarts/setup-clerk
Quickstart: /docs/quickstarts/ios
```
--------------------------------
### Mastra Observability Setup
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
Guide on setting up observability for Mastra applications. This includes configuring logging, monitoring, and tracing to gain insights into application behavior and performance.
```markdown
## Observability
Implement robust observability for your Mastra applications. This section details how to set up logging, metrics, and tracing to monitor and troubleshoot your AI systems effectively.
```
--------------------------------
### Vue.js Quickstart with Clerk
Source: https://clerk.com/docs
Get started installing and initializing Clerk in a new Vue + Vite app. This guide covers the initial setup and basic integration steps.
```javascript
console.log('Vue.js Quickstart with Clerk');
// Further code examples would go here.
```
--------------------------------
### Getting Started with Mastra and SvelteKit
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
A step-by-step guide to integrating Mastra with SvelteKit. This covers web framework integration.
```javascript
import { Mastra } from '@mastra/core';

const mastra = new Mastra();

// Example SvelteKit page or server load function
export async function load() {
  const data = await mastra.getData();
  return { mastraData: data };
}
// ... (rest of SvelteKit application)
```
--------------------------------
### CoAgent Quickstart
Source: https://docs.copilotkit.ai/
This section likely details how to get started with CoAgents, which are AI agents that can interact with CopilotKit. It may involve setup or basic usage examples.
```json
{ "name": "CoAgent Quickstart", "url": "../coagents/quickstart", "external": false }
```
--------------------------------
### Upstash Search Python SDK - Getting Started
Source: https://upstash.com/vector
Guide to getting started with the Upstash Search Python SDK, covering installation and basic setup for search functionalities.
```Python
from upstash_vector import Index

index = Index(url="YOUR_UPSTASH_URL", token="YOUR_UPSTASH_TOKEN")

def main():
    # Example: Upserting data
    index.upsert(vectors=[{"id": "1", "vector": [0.1, 0.2], "metadata": {"name": "example"}}])
    print("Data upserted")

if __name__ == "__main__":
    main()
```
--------------------------------
### Mastra Templates Guide
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
Information on using pre-built project templates in Mastra. These templates demonstrate common use cases and patterns, helping developers get started quickly.
```markdown
## Templates
Utilize Mastra's pre-built project templates to quickly bootstrap common AI application patterns and use cases. These templates serve as excellent starting points for your projects.
```
--------------------------------
### Guides for Building AI Applications with Mastra
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
General guides on building applications with Mastra. This section covers various AI agent and workflow examples.
```mdx
## Guides
Guides on building with Mastra
```
--------------------------------
### Upstash Search TypeScript SDK - Getting Started
Source: https://upstash.com/vector
Guide to getting started with the Upstash Search TypeScript SDK, covering installation and basic setup for search functionalities.
```TypeScript
import { Client } from '@upstash/vector';

const client = new Client({
  url: 'YOUR_UPSTASH_URL',
  token: 'YOUR_UPSTASH_TOKEN',
});

async function main() {
  // Example: Upserting data
  await client.upsert({ id: '1', vector: [0.1, 0.2], metadata: { name: 'example' } });
  console.log('Data upserted');
}
```
--------------------------------
### Mastra AI LLM Automatic Installation
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
This section details the automatic installation process for Mastra AI, specifically for new projects. It covers both interactive setup prompts and non-interactive methods using command-line flags.
```English
Automatic installation
```
--------------------------------
### Quickstart for AG2 Agents
Source: https://docs.copilotkit.ai/
This guide provides a quickstart for turning AG2 Agents into an agent-native application in just 5 minutes. It focuses on rapid deployment and initial setup.
```markdown
Turn your AG2 Agents into an agent-native application in 5 minutes.
```
--------------------------------
### Getting started with Mastra and Express
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
A step-by-step guide to integrating Mastra with an Express backend.
```markdown
## With Express
A step-by-step guide to integrating Mastra with an Express backend.
```
--------------------------------
### Supervisor Agent Example
Source: https://mastra.ai/docs/frameworks/next-js
This example showcases the creation of a supervisor agent using Mastra. In this setup, agents communicate with each other through tool functions.
```mdx
import { Page } from "@/components/Page";

export const meta = {
  title: "Example: Supervisor agent | Agents | Mastra",
  description: "Example of creating a supervisor agent using Mastra, where agents interact through tool functions.",
  filePath: "src/content/en/examples/agents/supervisor-agent.mdx",
};

export default () => <Page content={meta} />;
```
--------------------------------
### Getting Started with Mastra and Vite/React
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
A step-by-step guide to integrating Mastra with Vite and React.
```markdown
## With Vite/React
A step-by-step guide to integrating Mastra with Vite and React.
```
--------------------------------
### Get started with Llama 3.1
Source: https://ai-sdk.dev/providers/openai-compatible-providers
This guide provides instructions for getting started with Llama 3.1. It likely covers the setup and initial usage of this AI model.
--------------------------------
### Install Mastra with bun
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
Installs the necessary Mastra packages using bun. This command is part of the initial setup for using Mastra AI LLMs in a Vite/React project.
```bash
bun add @mastra/core @mastra/ui
```
--------------------------------
### Getting Started with Mastra and Astro
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
A step-by-step guide to integrating Mastra with Astro.
```markdown
## With Astro
A step-by-step guide to integrating Mastra with Astro.
```
--------------------------------
### Deploying an MCPServer
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
Example of setting up, building, and deploying a Mastra MCPServer using the stdio transport and publishing it to NPM.
```mdx
Example of setting up, building, and deploying a Mastra MCPServer using the stdio transport and publishing it to NPM.
```
--------------------------------
### Install Mastra with pnpm
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
Installs the necessary Mastra packages using pnpm. This command is part of the initial setup for using Mastra AI LLMs in a Vite/React project.
```bash
pnpm add @mastra/core @mastra/ui
```
--------------------------------
### Install Mastra using create mastra CLI
Source: https://mastra.ai/docs/getting-started/installation
This command initiates the setup process for Mastra AI using the command-line interface. It's the recommended and fastest way to get started.
```bash
npm create mastra@latest
```
--------------------------------
### Mastra Authentication Setup
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
Guide on setting up authentication for Mastra applications. This covers different authentication methods and best practices for securing your AI applications.
```markdown
## Auth
Secure your Mastra applications by configuring authentication. This section details the available authentication methods and provides guidance on implementing secure access controls.
```
--------------------------------
### Run ChromaDB backend and create a client in TypeScript
Source: https://docs.trychroma.com/docs/overview/getting-started
Shows how to start the ChromaDB backend server and then connect to it using a TypeScript client.
```terminal
chroma run --path ./getting-started
```
```typescript
import { ChromaClient } from "chromadb";
const client = new ChromaClient();
```
```typescript
const { ChromaClient } = require("chromadb");
const client = new ChromaClient();
```
--------------------------------
### Install Mastra with yarn
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
Installs the necessary Mastra packages using yarn. This command is part of the initial setup for using Mastra AI LLMs in a Vite/React project.
```bash
yarn add @mastra/core @mastra/ui
```
--------------------------------
### Install Mastra with npm
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
Installs the necessary Mastra packages using npm. This command is part of the initial setup for using Mastra AI LLMs in a Vite/React project.
```bash
npm install @mastra/core @mastra/ui
```
--------------------------------
### Get started with OpenAI o1
Source: https://ai-sdk.dev/providers/openai-compatible-providers
This guide focuses on getting started with OpenAI o1, including its reasoning capabilities. It likely covers the initial setup and basic usage patterns for this model.
--------------------------------
### Mastra Cloud Setup & Deploy
Source: https://mastra.ai/docs/frameworks/next-js
Configuration steps for Mastra Cloud projects. This guide details how to set up and deploy your projects using Mastra Cloud.
```markdown
Configuration steps for Mastra Cloud projects
```
--------------------------------
### Vercel Web Analytics: Getting Started
Source: https://vercel.com/docs/observability/otel-overview/quickstart
Guides on getting started with Vercel Web Analytics for various frameworks like Next.js, SvelteKit, Remix, Create React App, Nuxt, Vue, Astro, and HTML. This section covers initial setup and integration.
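A minimal sketch of the React-flavored setup those guides describe, assuming the `@vercel/analytics` package; rendering `Analytics` once near the root enables page-view tracking (shown here in a hypothetical Next.js App Router layout):
```tsx
// app/layout.tsx - illustrative sketch, not the guides' exact code
import { Analytics } from '@vercel/analytics/react';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        {children}
        {/* Reports page views once the app is deployed on Vercel */}
        <Analytics />
      </body>
    </html>
  );
}
```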
--------------------------------
### Nuxt Getting Started and Quickstart
Source: https://clerk.com/docs
Guides for setting up a Clerk account and completing the Nuxt quickstart. Includes references to Clerk SDK and Nuxt specific sections.
```text
Setup your Clerk account: /docs/quickstarts/setup-clerk
Quickstart: /docs/quickstarts/nuxt
```
--------------------------------
### Angular Rspack Getting Started Guide
Source: https://nx.dev/concepts/decisions/dependency-management
A guide on how to get started with Angular projects using Rspack as the build tool, covering initial setup and configuration.
--------------------------------
### Install Mastra AI
Source: https://mastra.ai/reference/scorers/keyword-coverage
Guide on installing Mastra and setting up the necessary prerequisites for running it with various LLM providers. This section covers the initial setup and dependencies required to get started with Mastra.
```bash
npm install mastra
```
--------------------------------
### Getting Started with Mastra and Astro
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
A step-by-step guide to integrating Mastra with Astro. This covers web framework integration.
```javascript
import { Mastra } from '@mastra/core';

const mastra = new Mastra();

// Example Astro component or page script
const mastraData = await mastra.getData();
console.log(mastraData);
// ... (rest of Astro site)
```
--------------------------------
### Deploying MCPServer Example
Source: https://mastra.ai/docs/frameworks/next-js
This example provides instructions on deploying an MCPServer within the Mastra framework. It covers the necessary steps for setting up and running the server.
```mdx
import { Page } from "@/components/Page";

export const meta = {
  title: "Example: Deploying an MCPServer | Agents | Mastra Docs",
  description: "E
```
--------------------------------
### Install Mastra AI LLMs
Source: https://mastra.ai/reference/scorers/completeness
Guide on installing Mastra and setting up prerequisites for running with various LLM providers. This section covers the initial setup and configuration needed to get started with the Mastra framework.
```bash
npm install @mastra/core
# or
yarn add @mastra/core
```
--------------------------------
### AWS CLI Get Started Guide
Source: https://aws.amazon.com/cli/
Provides a link to the official AWS CLI getting started guide, which helps users begin using the command-line interface for AWS services.
```URL
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
```
--------------------------------
### Install ChromaDB using various package managers
Source: https://docs.trychroma.com/docs/overview/getting-started
Instructions for installing the ChromaDB client library using different package managers like pip, poetry, uv for Python, and npm, pnpm, yarn, bun for TypeScript.
```terminal
pip install chromadb
```
```terminal
poetry add chromadb
```
```terminal
uv pip install chromadb
```
```terminal
npm install chromadb @chroma-core/default-embed
```
```terminal
pnpm add chromadb @chroma-core/default-embed
```
```terminal
yarn add chromadb @chroma-core/default-embed
```
```terminal
bun add chromadb @chroma-core/default-embed
```
--------------------------------
### Install Chroma using Pip
Source: https://docs.trychroma.com/docs/overview/getting-started
Installs the Chroma Python package using pip. This is the most common method for Python users to get started with Chroma.
```bash
pip install chromadb
```
--------------------------------
### Getting started with Mastra and Express
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
A step-by-step guide to integrating Mastra with an Express backend. This covers server-side integration.
```javascript
const express = require('express');
const { Mastra } = require('@mastra/core');

const app = express();
const mastra = new Mastra();

// Example Express route
app.get('/mastra-data', async (req, res) => {
  const data = await mastra.getData();
  res.json(data);
});
// ... (rest of Express server setup)
```
--------------------------------
### Getting Started with Mastra and Next.js
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
A step-by-step guide to integrating Mastra with Next.js.
```markdown
## With Next.js
A step-by-step guide to integrating Mastra with Next.js.
```
--------------------------------
### Install Mastra AI Docs Server
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
Installs the Mastra AI documentation server using npx. This command fetches and runs the package, ensuring you have the latest version for serving documentation.
```bash
npx @mastra/mcp-docs-server
```
--------------------------------
### React Quickstart: Install and Initialize Clerk
Source: https://clerk.com/docs
This snippet guides you through installing and initializing Clerk in a new React + Vite application. It covers the essential steps to get started with Clerk's authentication features.
--------------------------------
### Edge Config Getting Started
Source: https://vercel.com/docs/cli
A guide to getting started with Edge Config, outlining the initial setup and basic configurations.
```English
Getting Started
```
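As a sketch of what getting started involves, here is a read of a single value using the documented `get` helper from `@vercel/edge-config`; the `greeting` key and the surrounding function are illustrative, and the client reads its connection string from the `EDGE_CONFIG` environment variable:
```typescript
import { get } from '@vercel/edge-config';

// Illustrative helper: fetch one key from the store, with a fallback
export async function readGreeting(): Promise<string> {
  const greeting = await get<string>('greeting');
  return greeting ?? 'no value set';
}
```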
--------------------------------
### Running Workflows
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
Example for how to run workflows.
```mdx
Example for how to run workflows.
```
--------------------------------
### Getting Started with Mastra and Next.js
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
A step-by-step guide to integrating Mastra with Next.js. This covers web framework integration.
```javascript
import { Mastra } from '@mastra/core';

const mastra = new Mastra();

// Example Next.js API route or component
export async function getServerSideProps(context) {
  const data = await mastra.getData();
  return { props: { data } };
}
// ... (rest of Next.js application)
```
--------------------------------
### Getting Started with Mastra and Vite/React
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
A step-by-step guide to integrating Mastra with Vite and React. This covers web framework integration.
```javascript
import React from 'react';
import { Mastra } from '@mastra/core';

const mastra = new Mastra();

function App() {
  // Example React component using Mastra
  return (
    <div>
      <h1>Mastra Integration</h1>
      {/* ... */}
    </div>
  );
}

export default App;
```
--------------------------------
### Initialize Mastra CLI
Source: https://mastra.ai/docs/frameworks/next-js
Run the Mastra CLI to customize the setup. It prompts for installation location, defaulting to 'src'. For root-level Pages Router, use '.' as the location.
```bash
npx mastra@latest init
```
--------------------------------
### Create a ChromaDB client in Python
Source: https://docs.trychroma.com/docs/overview/getting-started
Demonstrates how to initialize a ChromaDB client instance in Python.
```python
import chromadb
chroma_client = chromadb.Client()
```
--------------------------------
### Mastra Installation Guide
Source: https://mastra.ai/reference/evals/prompt-alignment
This guide covers the installation of Mastra and the setup of prerequisites for various LLM providers. It ensures a smooth start for users.
--------------------------------
### React Authentication with Clerk
Source: https://clerk.com/docs
Get started with Clerk authentication in a React + Vite application. This setup guide helps you install and initialize Clerk for user management in your React projects.
```javascript
// src/main.jsx
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';
import { ClerkProvider } from '@clerk/clerk-react';

// Ensure that your publishable key is set
if (!process.env.CLERK_PUBLISHABLE_KEY) {
  throw new Error('CLERK_PUBLISHABLE_KEY is not set');
}

ReactDOM.createRoot(document.getElementById('root')).render(
  <React.StrictMode>
    <ClerkProvider publishableKey={process.env.CLERK_PUBLISHABLE_KEY}>
      <App />
    </ClerkProvider>
  </React.StrictMode>
);
```
--------------------------------
### Set up Environment with .env
Source: https://github.com/mastra-ai/mastra/tree/main/examples/memory-per-resource-example
This snippet shows how to set up the project environment by copying a sample .env file and adding the OpenAI API key. This is a crucial first step before running the application.
```shell
cp .env.example .env
# Add your OpenAI API key to .env
```
--------------------------------
### Create Chroma Client (Python)
Source: https://docs.trychroma.com/docs/overview/getting-started
Example of how to create a Chroma client instance in Python. This code assumes the Chroma backend is running and connects to the default host and port.
```python
import chromadb

chroma_client = chromadb.Client()
```
--------------------------------
### AI SDK v5 Integration Example
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
This example provides guidance on integrating with Mastra's AI SDK version 5. It covers the necessary steps and configurations for utilizing the latest SDK features.
```python
from mastra.sdk.v5 import MastraClient

# Initialize the Mastra client with your API key
client = MastraClient(api_key="YOUR_MASTRA_API_KEY")

# Example of using a model
response = client.completions.create(
    model="mastra-model-xyz",
    prompt="Translate 'hello' to French."
)
print(response.choices[0].text)
```
--------------------------------
### Workflow: Getting Started and Quickstarts
Source: https://upstash.com/vector
This section covers the initial steps for using Workflow, including a general getting started guide and quickstarts for various platforms like Vercel/Next.js, Cloudflare Workers, Nuxt, SolidJS, Svelte, Hono, Express, Astro, FastAPI, and Flask.
```Workflow
workflow/getstarted
workflow/quickstarts/platforms
workflow/quickstarts/vercel-nextjs
workflow/quickstarts/cloudflare-workers
workflow/quickstarts/nuxt
workflow/quickstarts/solidjs
workflow/quickstarts/svelte
workflow/quickstarts/hono
workflow/quickstarts/express
workflow/quickstarts/astro
workflow/quickstarts/fastapi
workflow/quickstarts/nextjs-fastapi
workflow/quickstarts/flask
workflow/quickstarts/nextjs-flask
```
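As an illustration of the pattern these quickstarts share, here is a minimal Upstash Workflow endpoint sketch for the Next.js case, assuming the `@upstash/workflow` package; the route path, step names, and payloads are invented for the example:
```typescript
// app/api/workflow/route.ts - illustrative sketch
import { serve } from '@upstash/workflow/nextjs';

export const { POST } = serve(async (context) => {
  // Each context.run() call is a durable, retriable step
  const data = await context.run('fetch-data', async () => {
    return { message: 'hello from step one' };
  });

  await context.run('use-data', async () => {
    console.log(data.message);
  });
});
```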
--------------------------------
### Get started with Gemini 2.5
Source: https://ai-sdk.dev/providers/openai-compatible-providers
A getting started guide for using Gemini 2.5, a powerful AI model. This documentation likely covers initial setup, basic usage, and potential applications of the Gemini 2.5 model.
--------------------------------
### Mastra Installation Guide
Source: https://mastra.ai/reference/evals/toxicity
This guide covers the installation of Mastra and the setup of prerequisites for various LLM providers. It ensures a smooth start for developing AI applications with Mastra.
--------------------------------
### Inngest Mastra Workflow Example
Source: https://mastra.ai/docs/frameworks/next-js
Example of integrating Mastra workflows with Inngest for event-driven execution.
```typescript
import { Workflow } from "@mastra/workflow";
import { serve } from "inngest/next";

const workflow = new Workflow({
  // Workflow definition
});

export const { handler } = serve({
  functions: [workflow],
  // Inngest configuration
});
```
--------------------------------
### Install Mastra with One-Liner
Source: https://mastra.ai/docs/frameworks/web-frameworks/astro
This snippet shows how to install Mastra using a single command line instruction. It's a quick way to get started with Mastra in your project.
```bash
# Example one-liner installation command (actual command not provided in text)
# npm install mastra-cli or similar
```
--------------------------------
### Scaffold Mastra with Next.js (Interactive CLI)
Source: https://mastra.ai/docs/frameworks/next-js
Initiate Mastra integration into your Next.js project using the interactive command-line interface. This allows for customized setup by prompting the user for installation location and other configurations.
```bash
npx mastra@latest init
```
--------------------------------
### Mastra Installation Guide
Source: https://mastra.ai/examples/memory/memory-with-pg
A guide on installing Mastra and setting up the necessary prerequisites for running it with various LLM providers. This section covers the initial setup process for new users.
--------------------------------
### Calling Agents Example
Source: https://mastra.ai/docs/frameworks/next-js
This example demonstrates how to call agents within the Mastra framework. It serves as a foundational example for agent interaction.
```mdx
import { Page } from "@/components/Page";

export const meta = {
  title: "Calling Agents | Agents | Mastra Docs",
  description: "Example for how to call agents.",
  filePath: "src/content/en/examples/agents/calling-agents.mdx",
};

export default () => <Page content={meta} />;
```
--------------------------------
### LlamaIndex Quickstart
Source: https://docs.copilotkit.ai/reference/hooks/useCoAgent
A quickstart guide to turn your LlamaIndex Agents into an agent-native application in 10 minutes. This will help you get up and running quickly.
```markdown
Turn your LlamaIndex Agents into an agent-native application in 10 minutes.
```
--------------------------------
### Mastra Installation Guide
Source: https://mastra.ai/reference/templates
This guide covers the installation of Mastra and the setup of prerequisites for running it with various LLM providers. It ensures a smooth start for developing AI applications.
--------------------------------
### Docker Chroma Deployment
Source: https://docs.trychroma.com/docs/overview/getting-started
Commands to pull the Chroma Docker image and run it, exposing the default port.
```terminal
docker pull chromadb/chroma
docker run -p 8000:8000 chromadb/chroma
```
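Once the container is up, a client can connect from TypeScript; with no options the client targets the default `http://localhost:8000` endpoint, which matches the port exposed above (the collection name is illustrative):
```typescript
import { ChromaClient } from "chromadb";

// Connects to the Dockerized server on localhost:8000 by default
const client = new ChromaClient();
const collection = await client.getOrCreateCollection({ name: "my_collection" });
console.log(await collection.count());
```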
--------------------------------
### JS Backend SDK Getting Started and Quickstart
Source: https://clerk.com/docs
Guides for setting up a Clerk account and completing the JS Backend SDK quickstart. References Clerk SDK and JS Backend SDK sections.
```text
Setup your Clerk account: /docs/quickstarts/setup-clerk
Quickstart: /docs/references/backend/overview
```
--------------------------------
### Mastra Installation Guide
Source: https://mastra.ai/reference/client-js/workflows
This guide covers the installation of Mastra and the setup of prerequisites for running it with various Large Language Model (LLM) providers. It ensures a smooth start for users.
--------------------------------
### Mastra Project Structure Guide
Source: https://mastra.ai/docs/getting-started/mcp-docs-server
Guide on organizing folders and files within a Mastra project. It outlines best practices and recommended structures for efficient development and maintainability.
```markdown
## Mastra Project Structure
- `src/`
- `content/`
- `en/`
- `docs/`
- `index.mdx`
- `getting-started/`
- `installation.mdx`
- `project-structure.mdx`
- `mcp-docs-server.mdx`
- `model-providers.mdx`
- `model-capability.mdx`
- `templates.mdx`
- `agents/`
- `overview.mdx`
- `agent-memory.mdx`
- `using-tools-and-mcp.mdx`
- `input-processors.mdx`
- `adding-voice.mdx`
- `runtime-context.mdx`
- `dynamic-agents.mdx`
- `lib/`
- `components/`
- `public/`
- `package.json`
```
--------------------------------
### Create Chroma Client (Python)
Source: https://docs.trychroma.com/docs/overview/getting-started
Demonstrates how to initialize a Chroma client in Python to interact with the Chroma database.
```python
import chromadb
chroma_client = chromadb.Client()
```
--------------------------------
### Deployment Examples for Mastra Server
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
Provides an overview of deployment examples for a Mastra server. This includes setting up middleware for authentication, CORS, and logging, as well as creating custom API routes.
```mdx
## Deployment examples
Overview of deployment examples for a Mastra server.
```
```mdx
## Auth Middleware
Guide on setting up authentication middleware for a Mastra server.
```
```mdx
## CORS Middleware
Guide on configuring CORS middleware for a Mastra server.
```
```mdx
## Logging Middleware
Guide on implementing logging middleware for a Mastra server.
```
```mdx
## Custom API Route
Guide on creating custom API routes within a Mastra server.
```
```mdx
## Deploying a Mastra Server
Instructions on how to deploy a Mastra server.
```
--------------------------------
### Basic RAG Example
Source: https://mastra.ai/docs/frameworks/web-frameworks/vite-react
Example of implementing a basic RAG system in Mastra using OpenAI embeddings.
```mdx
Example: Using the Vector Query Tool | RAG | Mastra Docs
```

MastraAuthSupabase Configuration and Usage

Source: https://mastra.ai/docs/v1/auth/supabase

This section covers the installation and basic usage of the MastraAuthSupabase class, including environment variable setup and initialization within the Mastra core.

## MastraAuthSupabase Setup and Configuration

### Description
Provides instructions on installing the necessary package, configuring Supabase credentials via environment variables, and initializing the `MastraAuthSupabase` provider within the Mastra core.

### Installation
```bash
npm install @mastra/auth-supabase@beta
```

### Prerequisites

Ensure your `.env` file contains your Supabase URL and anon key:

```bash
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
```

Review your Supabase Row Level Security (RLS) settings for proper data access controls.

### Usage Example

```typescript
import { Mastra } from "@mastra/core";
import { MastraAuthSupabase } from "@mastra/auth-supabase";

export const mastra = new Mastra({
  // ..
  server: {
    auth: new MastraAuthSupabase({
      url: process.env.SUPABASE_URL,
      anonKey: process.env.SUPABASE_ANON_KEY,
    }),
  },
});
```

Note: The default authorizeUser method checks the isAdmin column in the users table. You can provide a custom authorizeUser function for specific authorization logic.
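
For instance, a custom check could replace the default `isAdmin` lookup; this is a sketch assuming `authorizeUser` accepts an async predicate over the authenticated user:

```typescript
import { MastraAuthSupabase } from "@mastra/auth-supabase";

// Illustrative sketch: authorize any user with a confirmed email instead of
// the default isAdmin column check
const auth = new MastraAuthSupabase({
  url: process.env.SUPABASE_URL,
  anonKey: process.env.SUPABASE_ANON_KEY,
  authorizeUser: async (user) => Boolean(user?.email_confirmed_at),
});
```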


--------------------------------

### Install Mastra Client SDK with npm, pnpm, yarn, or bun

Source: https://mastra.ai/docs/v1/server-db/mastra-client

Installs the Mastra Client SDK beta version using different package managers. Ensure Node.js v22.13.0 or later is installed.

```bash
npm install @mastra/client-js@beta
pnpm add @mastra/client-js@beta
yarn add @mastra/client-js@beta
bun add @mastra/client-js@beta
```
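
After installation, a client instance is typically created with the server's base URL; a minimal sketch (the default Mastra dev server port 4111 is an assumption for local development):

```typescript
import { MastraClient } from "@mastra/client-js";

// Point the client at a running Mastra server
export const mastraClient = new MastraClient({
  baseUrl: "http://localhost:4111",
});
```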

Initialize TypeScript Project and Install Dependencies (bun)

Source: https://mastra.ai/docs/v1/getting-started/installation

Initializes a new Node.js project with bun and installs core Mastra dependencies, TypeScript, and related types.

bun init -y
bun add -d typescript @types/node mastra@beta
bun add @mastra/core@beta zod@^4

Install Cedar-OS CLI

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/cedar-os

Run this command to install the Cedar-OS CLI, which is the first step in setting up your project.

npx cedar-os-cli plant-seed  

Initialize TypeScript Project and Install Dependencies (npm)

Source: https://mastra.ai/docs/v1/getting-started/installation

Initializes a new Node.js project with npm and installs core Mastra dependencies, TypeScript, and related types.

npm init -y
npm install -D typescript @types/node mastra@beta
npm install @mastra/core@beta zod@^4

Install Mastra Core Package

Source: https://mastra.ai/docs/v1/agents/overview

Installs the Mastra core package with the beta tag. This is the initial step for setting up agents in your project.

npm install @mastra/core@beta  
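
With the core package installed, a minimal agent might look like the following sketch; the name, instructions, and the AI SDK OpenAI provider (`npm install @ai-sdk/openai`) are illustrative choices, not prescribed by this step:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Minimal agent sketch: a name, instructions, and a model
export const assistant = new Agent({
  name: "assistant",
  instructions: "You are a helpful assistant.",
  model: openai("gpt-4o-mini"),
});
```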

Initialize TypeScript Project and Install Dependencies (pnpm)

Source: https://mastra.ai/docs/v1/getting-started/installation

Initializes a new Node.js project with pnpm and installs core Mastra dependencies, TypeScript, and related types.

pnpm init
pnpm add -D typescript @types/node mastra@beta
pnpm add @mastra/core@beta zod@^4

Initialize TypeScript Project and Install Dependencies (yarn)

Source: https://mastra.ai/docs/v1/getting-started/installation

Initializes a new Node.js project with yarn and installs core Mastra dependencies, TypeScript, and related types.

yarn init -y
yarn add -D typescript @types/node mastra@beta
yarn add @mastra/core@beta zod@^4

Install Arize Exporter (npm)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/arize

Installs the Arize Exporter package using npm. This is the initial step before configuration and usage.

npm install @mastra/arize@beta  

Define Tool using Vercel AI SDK format for Mastra

Source: https://mastra.ai/docs/v1/tools-mcp/advanced-usage

Illustrates how to define a tool compatible with the Vercel AI SDK (ai package) and use it within Mastra agents. This example shows a weather tool with a city parameter, demonstrating the tool function, description, parameters schema, and execute function. Ensure the ai package is installed (npm install ai).

import { tool } from "ai";
import { z } from "zod";

export const vercelWeatherTool = tool({
  description: "Fetches current weather using Vercel AI SDK format",
  parameters: z.object({
    city: z.string().describe("The city to get weather for"),
  }),
  execute: async ({ city }) => {
    console.log(`Fetching weather for ${city} (Vercel format tool)`);
    // Replace with actual API call
    const data = await fetch(`https://api.example.com/weather?city=${city}`);
    return data.json();
  },
});
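
As a follow-up sketch, such a tool can be passed to an agent through its tools map; the agent fields and model provider below are illustrative assumptions:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { vercelWeatherTool } from "./tools"; // the tool defined above

// The AI SDK-format tool is supplied under the key the model will call it by
export const weatherAgent = new Agent({
  name: "weather-agent",
  instructions: "Answer weather questions using the weather tool.",
  model: openai("gpt-4o-mini"),
  tools: { vercelWeatherTool },
});
```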

Install Mastra Packages for npm, yarn, pnpm, and bun

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/sveltekit

Installs the necessary Mastra packages across different package managers. Ensure you have the correct package manager installed for your environment.

npm install mastra@beta @mastra/core@beta @mastra/libsql@beta  
yarn add mastra@beta @mastra/core@beta @mastra/libsql@beta  
pnpm add mastra@beta @mastra/core@beta @mastra/libsql@beta  
bun add mastra@beta @mastra/core@beta @mastra/libsql@beta  

Run Development Server (bun)

Source: https://mastra.ai/docs/v1/getting-started/installation

Starts the Mastra development server using bun.

bun run dev

Install Mastra Voice Provider Package

Source: https://mastra.ai/docs/v1/voice/speech-to-text

Shows the command to add a specific Mastra voice provider package to your project using pnpm. This example installs the OpenAI provider.

pnpm add @mastra/voice-openai@beta  # Example for OpenAI

Install Mastra Packages with bun

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/astro

Installs the necessary Mastra packages (mastra, @mastra/core, @mastra/libsql) using bun. This is a fast, all-in-one JavaScript runtime.

bun add mastra@beta @mastra/core@beta @mastra/libsql@beta  

Install Mastra LibSQL Package

Source: https://mastra.ai/docs/v1/memory/storage/memory-with-libsql

Installs the beta version of the Mastra LibSQL package for integrating LibSQL storage with Mastra's memory system.

npm install @mastra/libsql@beta  

Install MastraAuthWorkos Package

Source: https://mastra.ai/docs/v1/auth/workos

Install the MastraAuthWorkos package using npm. This command installs the beta version of the package, which may contain experimental features or breaking changes.

npm install @mastra/auth-workos@beta  

Run Development Server (yarn)

Source: https://mastra.ai/docs/v1/getting-started/installation

Starts the Mastra development server using yarn.

yarn run dev

Install MastraAuthAuth0 Package

Source: https://mastra.ai/docs/v1/auth/auth0

Installs the MastraAuthAuth0 package using npm. This is a prerequisite for using the MastraAuthAuth0 class in your project.

npm install @mastra/auth-auth0@beta

Resource Metadata Example (JSON)

Source: https://mastra.ai/docs/v1/server-db/storage

Shows an example of resource metadata stored as JSONB, including user preferences like language and timezone, and tags.

{
  "preferences": {
    "language": "en",
    "timezone": "UTC"
  },
  "tags": [
    "premium",
    "beta-user"
  ]
}

Complete Example with Ratio Sampling and Exporters

Source: https://mastra.ai/docs/v1/observability/tracing/overview

A complete example demonstrating the initialization of Mastra with observability, including service name, ratio sampling, and default exporters.

import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      "10_percent": {
        serviceName: "my-service",
        // Sample 10% of traces
        sampling: {
          type: "ratio",
          probability: 0.1,
        },
        exporters: [new DefaultExporter()],
      },
    },
  }),
});

Run Development Server (pnpm)

Source: https://mastra.ai/docs/v1/getting-started/installation

Starts the Mastra development server using pnpm.

pnpm run dev

Install Auth0 React SDK

Source: https://mastra.ai/docs/v1/auth/auth0

Installs the Auth0 React SDK using npm. This is required for client-side authentication with Auth0.

npm install @auth0/auth0-react
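
Client-side setup then wraps the application in the SDK's `Auth0Provider`; a minimal sketch with placeholder credentials:

```tsx
import React from 'react';
import ReactDOM from 'react-dom/client';
import { Auth0Provider } from '@auth0/auth0-react';
import App from './App';

// domain and clientId come from your Auth0 application settings
ReactDOM.createRoot(document.getElementById('root')!).render(
  <Auth0Provider
    domain="YOUR_AUTH0_DOMAIN"
    clientId="YOUR_AUTH0_CLIENT_ID"
    authorizationParams={{ redirect_uri: window.location.origin }}
  >
    <App />
  </Auth0Provider>
);
```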

Install NetlifyDeployer with npm

Source: https://mastra.ai/docs/v1/deployment/cloud-providers/netlify-deployer

Installs the NetlifyDeployer package from npm, which is required to use Netlify-specific deployment functionalities for Mastra applications.

npm install @mastra/deployer-netlify@beta

Install Mastra Memory Dependencies

Source: https://mastra.ai/docs/v1/memory/overview

Installs the necessary Mastra packages for core functionality, memory management, and LibSQL storage. This is the first step to enabling memory features.

npm install @mastra/core@beta @mastra/memory@beta @mastra/libsql@beta  
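
A sketch of how the three packages fit together, mirroring the LibSQL storage example later in this document; the agent fields and the AI SDK model provider are illustrative:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { openai } from "@ai-sdk/openai";

// Memory persists conversation state to a local LibSQL database file
const memory = new Memory({
  storage: new LibSQLStore({ id: "mastra-storage", url: "file:./mastra.db" }),
});

export const agent = new Agent({
  name: "memory-agent",
  instructions: "You remember earlier messages in the conversation.",
  model: openai("gpt-4o-mini"),
  memory,
});
```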

Start Mastra Dev Server with CLI

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/astro

Starts the Mastra development server directly using the 'mastra dev:mastra' command. This is an alternative way to expose your agents locally.

mastra dev:mastra  

Install Mastra CopilotKit Runtime (yarn)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Installs the CopilotKit runtime and Mastra integration package using yarn. This command is the yarn equivalent for the aforementioned npm installation.

yarn add @copilotkit/runtime @ag-ui/mastra@beta

Install @mastra/auth Package

Source: https://mastra.ai/docs/v1/auth/jwt

Installs the Mastra authentication package, specifically the beta version, using npm. This is a prerequisite for using the MastraJwtAuth class.

npm install @mastra/auth@beta
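
Wiring it into the server then follows the same pattern as the Supabase provider shown earlier; a sketch in which the `secret` option name is an assumption:

```typescript
import { Mastra } from "@mastra/core";
import { MastraJwtAuth } from "@mastra/auth";

export const mastra = new Mastra({
  server: {
    // Verifies incoming JWTs against a shared secret (option name assumed)
    auth: new MastraJwtAuth({
      secret: process.env.MASTRA_JWT_SECRET,
    }),
  },
});
```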

Install Mastra Supabase Auth Package

Source: https://mastra.ai/docs/v1/auth/supabase

Installs the beta version of the @mastra/auth-supabase package using npm. This package is necessary before using the MastraAuthSupabase class.

npm install @mastra/auth-supabase@beta

Install Mastra Fastembed Package

Source: https://mastra.ai/docs/v1/memory/storage/memory-with-libsql

Installs the beta version of the Mastra Fastembed package, enabling local embedding generation for semantic memory recall.

npm install @mastra/fastembed@beta  

Start Mastra Dev Server

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/sveltekit

Starts the Mastra development server, which exposes your agents as REST endpoints. This command is essential for local development and testing of your Mastra AI agents.

npm run dev:mastra
mastra dev:mastra

Use MastraClient in Server-Side Environments for Streaming Responses

Source: https://mastra.ai/docs/v1/server-db/mastra-client

Demonstrates using MastraClient in server-side environments like API routes or serverless functions. This example shows how to get an agent and stream its response, returning the stream's body as a Response object for clients. Assumes mastraClient is initialized.

export async function action() {  
  const agent = mastraClient.getAgent("testAgent");  
  
  const stream = await agent.stream({  
    messages: [{ role: "user", content: "Hello" }],  
  });  
  
  return new Response(stream.body);  
}  

Install CloudflareDeployer with npm

Source: https://mastra.ai/docs/v1/deployment/cloud-providers/cloudflare-deployer

Installs the CloudflareDeployer package from npm. This is the first step to using the deployer for Mastra applications.

npm install @mastra/deployer-cloudflare@beta  

Install CopilotKit & Mastra for Next.js (pnpm)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Installs all necessary packages for integrating CopilotKit and Mastra within a Next.js application using pnpm. This ensures efficient package installation.

pnpm add @copilotkit/react-core @copilotkit/react-ui @copilotkit/runtime @ag-ui/mastra@beta

Install Mastra Packages with npm

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/astro

Installs the necessary Mastra packages (mastra, @mastra/core, @mastra/libsql) using npm. This is the first step in integrating Mastra into your project.

npm install mastra@beta @mastra/core@beta @mastra/libsql@beta  

Start Mastra Dev Server with npm

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/astro

Starts the Mastra development server using the 'npm run dev:mastra' command. This exposes your Mastra agents as local REST endpoints.

npm run dev:mastra  

Install Node.js Dependencies (Bash)

Source: https://mastra.ai/docs/v1/deployment/cloud-providers/amazon-ec2

Installs the necessary Node.js dependencies for the Mastra application using npm. Requires Node.js and npm to be installed.

npm install

Install Mastra Upstash Package

Source: https://mastra.ai/docs/v1/memory/storage/memory-with-upstash

Installs the beta version of the Mastra Upstash package, which provides integrations for using Upstash services with Mastra's memory system.

npm install @mastra/upstash@beta  
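
A minimal sketch of wiring the store into Memory, assuming UpstashStore takes the Upstash REST URL and token:

import { Memory } from "@mastra/memory";
import { UpstashStore } from "@mastra/upstash";

const memory = new Memory({
  storage: new UpstashStore({
    url: process.env.UPSTASH_REDIS_REST_URL!,     // assumed env var name
    token: process.env.UPSTASH_REDIS_REST_TOKEN!, // assumed env var name
  }),
});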

Install Mastra CopilotKit Runtime (npm)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Installs the CopilotKit runtime and Mastra integration package using npm. This is required for the Mastra server-side integration.

npm install @copilotkit/runtime @ag-ui/mastra@beta

Install MCP Dependency

Source: https://mastra.ai/docs/v1/tools-mcp/mcp-overview

Installs the necessary beta version of the Mastra MCP package using npm. This is the first step to integrating MCP functionality into your project.

npm install @mastra/mcp@beta  
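
As a sketch of where this leads, an MCPClient can connect to an MCP server and expose its tools to an agent, mirroring the deployed-server example later in this document (the package name is a placeholder):

import { MCPClient } from "@mastra/mcp";

const mcp = new MCPClient({
  servers: {
    myServer: {
      command: "npx",
      args: ["-y", "@your-org/your-mcp-server@latest"], // placeholder package
    },
  },
});

const tools = await mcp.getTools(); // hand these to an agent's tools option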

Install Mastra Client SDK (npm, yarn, pnpm)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Instructions for installing the Mastra Client SDK using different package managers. This SDK is essential for interacting with Mastra services.

npm install @mastra/client-js@beta
yarn add @mastra/client-js@beta
pnpm add @mastra/client-js@beta

Install Vercel Deployer Package

Source: https://mastra.ai/docs/v1/deployment/cloud-providers/vercel-deployer

Installs the Vercel deployer package for Mastra. This is the initial step to integrate Vercel deployment capabilities into your Mastra project.

npm install @mastra/deployer-vercel@beta
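
A minimal sketch of the follow-up configuration, following the same deployer pattern as the Cloudflare and Netlify examples in this document:

import { Mastra } from "@mastra/core";
import { VercelDeployer } from "@mastra/deployer-vercel";

export const mastra = new Mastra({
  // ...
  deployer: new VercelDeployer(),
});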

Basic Mastra Setup with Default Exporter

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/default

Demonstrates the basic setup of Mastra with a LibSQL storage and the DefaultExporter explicitly configured for observability. This requires explicit instantiation of both the storage and observability modules.

import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: "file:./mastra.db", // Required for trace persistence
  }),
  observability: new Observability({
    configs: {
      local: {
        serviceName: "my-service",
        exporters: [new DefaultExporter()],
      },
    },
  }),
});

Install Mastra Packages with pnpm

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/astro

Installs the necessary Mastra packages (mastra, @mastra/core, @mastra/libsql) using pnpm. This is another package manager option.

pnpm add mastra@beta @mastra/core@beta @mastra/libsql@beta  

Install Mastra Packages with yarn

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/astro

Installs the necessary Mastra packages (mastra, @mastra/core, @mastra/libsql) using yarn. This is an alternative to npm for package management.

yarn add mastra@beta @mastra/core@beta @mastra/libsql@beta  

Scaffold Mastra with Interactive CLI

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/astro

Initializes Mastra in your project using the interactive 'init' command, allowing customization of the setup by prompting the user for choices.

npx mastra@beta init  

Start Mastra Dev Server (CLI)

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/vite-react

Starts the Mastra development server, which exposes your agents as REST endpoints. This command is used to run the Mastra backend during development.

npm run dev:mastra  
mastra dev

Install Mastra Inngest Packages

Source: https://mastra.ai/docs/v1/workflows/inngest-workflow

Installs the necessary Mastra and Inngest packages via npm. These packages are required to integrate Mastra workflows with the Inngest platform.

npm install @mastra/inngest@beta @mastra/core@beta @mastra/deployer@beta  

Install MastraAuthClerk Package

Source: https://mastra.ai/docs/v1/auth/clerk

Installs the beta version of the @mastra/auth-clerk package using npm. This package is required to use the MastraAuthClerk class for authentication.

npm install @mastra/auth-clerk@beta
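
A minimal sketch of the follow-up step, mirroring the server.auth pattern used by the other auth providers in this document; it assumes MastraAuthClerk picks up its Clerk keys from environment variables:

import { Mastra } from "@mastra/core";
import { MastraAuthClerk } from "@mastra/auth-clerk";

export const mastra = new Mastra({
  // ..
  server: {
    auth: new MastraAuthClerk(), // assumes Clerk keys provided via env vars
  },
});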

Run Phoenix Instance with Docker

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/arize

Starts a local Phoenix instance using Docker for testing. This command sets up an in-memory SQLite database for the Phoenix backend, making it easy to test the Arize exporter locally.

docker run --pull=always -d --name arize-phoenix -p 6006:6006 \  
  -e PHOENIX_SQL_DATABASE_URL="sqlite:///:memory:" \  
  arizephoenix/phoenix:latest  

Install Mastra Memory and Storage Provider

Source: https://mastra.ai/docs/v1/agents/agent-memory

Installs the necessary Mastra packages for memory functionality and a storage provider like LibSQL. This is the initial step to enable memory in your Mastra agents.

npm install @mastra/memory@beta @mastra/libsql@beta  

Install Mastra Core and Vercel AI SDK

Source: https://mastra.ai/docs/v1/agents/overview

Installs the Mastra core package along with a specific Vercel AI SDK provider (e.g., OpenAI). This is required when integrating Mastra agents with Vercel's AI SDK for LLM interactions.

npm install @mastra/core@beta @ai-sdk/openai  
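
A minimal sketch of an agent built on these packages (the model choice is illustrative):

import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

export const assistant = new Agent({
  name: "assistant",
  instructions: "You are a concise, helpful assistant.",
  model: openai("gpt-4o-mini"), // any @ai-sdk provider/model works here
});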

Install Langfuse Exporter (npm)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/langfuse

Installs the beta version of the Langfuse Exporter package using npm. This is the first step to integrate Langfuse observability into your project.

npm install @mastra/langfuse@beta  

Install OTEL Exporter for HTTP/JSON Providers (npm)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/otel

Installs the base OpenTelemetry exporter and the protocol package for HTTP/JSON providers like Traceloop. Requires Node.js and npm.

npm install @mastra/otel-exporter@beta @opentelemetry/exporter-trace-otlp-http

Install Mastra CopilotKit Runtime (pnpm)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Installs the CopilotKit runtime and Mastra integration package using pnpm. This ensures all necessary dependencies are available for Mastra integration.

pnpm add @copilotkit/runtime @ag-ui/mastra@beta

Install OTEL Exporter for HTTP/Protobuf Providers (npm)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/otel

Installs the base OpenTelemetry exporter and the specific protocol package for HTTP/Protobuf providers such as SigNoz, New Relic, and Laminar. Requires Node.js and npm.

npm install @mastra/otel-exporter@beta @opentelemetry/exporter-trace-otlp-proto

Install CopilotKit Packages (npm)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Installs the core and UI packages for CopilotKit using npm. These are essential for integrating CopilotKit components into a React frontend.

npm install @copilotkit/react-core @copilotkit/react-ui

Install @mastra/auth-firebase Package

Source: https://mastra.ai/docs/v1/auth/firebase

Installs the necessary package for Mastra Firebase authentication using npm. This is a prerequisite for using the MastraAuthFirebase class.

npm install @mastra/auth-firebase@beta  
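
A minimal sketch of the follow-up step, mirroring the server.auth pattern used by the other auth providers in this document; it assumes MastraAuthFirebase reads its Firebase credentials from environment variables:

import { Mastra } from "@mastra/core";
import { MastraAuthFirebase } from "@mastra/auth-firebase";

export const mastra = new Mastra({
  // ..
  server: {
    auth: new MastraAuthFirebase(), // assumes Firebase service account via env vars
  },
});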

Retrieve Trace IDs from Workflow Execution and Streaming (JavaScript)

Source: https://mastra.ai/docs/v1/observability/tracing/overview

Shows how to get the traceId from workflow executions using createRun, start, and stream methods. The traceId is available in the result of start and can be obtained from the final state after streaming.

// Create a workflow run
const run = await mastra.getWorkflow("myWorkflow").createRun();

// Start the workflow
const result = await run.start({
  inputData: { data: "process this" },
});

console.log("Trace ID:", result.traceId);

// Or stream the workflow
const { stream, getWorkflowState } = run.stream({
  inputData: { data: "process this" },
});

// Get the final state which includes the trace ID
const finalState = await getWorkflowState();
console.log("Trace ID:", finalState.traceId);

Mastra Entry Point (src/mastra/index.ts)

Source: https://mastra.ai/docs/v1/getting-started/installation

Initializes the Mastra core with the defined agents, setting up the main application entry point.

import { Mastra } from "@mastra/core";
import { weatherAgent } from "./agents/weather-agent";

export const mastra = new Mastra({
  agents: { weatherAgent },
});

Install OTEL Exporter for gRPC Providers (npm)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/otel

Installs the base OpenTelemetry exporter and the necessary packages for gRPC providers like Dash0. This includes the gRPC JavaScript library. Requires Node.js and npm.

npm install @mastra/otel-exporter@beta @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js

Scaffold Mastra Weather Agent with One-Liner or Interactive CLI

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/sveltekit

Initializes the Mastra project by scaffolding the default Weather agent. Use the '--default' flag for quick setup or run 'init' interactively for customization.

npx mastra@beta init --default  
npx mastra@beta init  

Eval Result Example (JSON)

Source: https://mastra.ai/docs/v1/server-db/storage

An example of the JSON structure for evaluation results, containing a score and detailed information about the evaluation, including reasons and citations.

{
  "score": 0.95,
  "details": {
    "reason": "Response accurately reflects source material",
    "citations": [
      "page 1",
      "page 3"
    ]
  }
}

Install CopilotKit Packages (pnpm)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Installs the core and UI packages for CopilotKit using pnpm. pnpm is a performant package manager for Node.js.

pnpm add @copilotkit/react-core @copilotkit/react-ui

Access Suspend Payload in Mastra Workflow

Source: https://mastra.ai/docs/v1/workflows/human-in-the-loop

This JavaScript snippet shows how to retrieve the suspend payload from a suspended workflow step in Mastra. It demonstrates getting a workflow, starting a run, and then accessing the suspended payload if the result status is 'suspended'.

const workflow = mastra.getWorkflow("testWorkflow");
const run = await workflow.createRunAsync();

const result = await run.start({
  inputData: {
    userEmail: "alex@example.com"
  }
});

if (result.status === "suspended") {
  const suspendStep = result.suspended[0];
  const suspendedPayload = result.steps[suspendStep[0]].suspendPayload;

  console.log(suspendedPayload);
}

Run Workflow (Start Mode)

Source: https://mastra.ai/docs/v1/workflows/overview

Explains how to execute a workflow in 'start' mode, which waits for all steps to complete before returning the final result. It uses createRun() to instantiate a workflow run and .start() to initiate execution with provided input data that matches the workflow's input schema.

const run = await testWorkflow.createRun();

const result = await run.start({
  inputData: {
    message: "Hello world"
  }
});

console.log(result);

Example Network Stream Output in Mastra AI

Source: https://mastra.ai/docs/v1/streaming/events

Demonstrates the structure of events emitted by a Mastra AI network stream, including routing decisions and agent/workflow execution. These events track the orchestration flow, starting with routing and followed by primitive execution. The output includes types like 'routing-agent-start', 'routing-agent-end', 'agent-execution-start', and 'agent-execution-event-text-delta'.

// Routing agent decides what to do  
{
  type: 'routing-agent-start',
  from: 'NETWORK',
  runId: '7a3b9c2d-1e4f-5a6b-8c9d-0e1f2a3b4c5d',
  payload: {
    agentId: 'routing-agent',
    // ...
  }
}
// Routing agent makes a selection  
{
  type: 'routing-agent-end',
  from: 'NETWORK',
  runId: '7a3b9c2d-1e4f-5a6b-8c9d-0e1f2a3b4c5d',
  payload: {
    // ...
  }
}
// Delegated agent begins execution  
{
  type: 'agent-execution-start',
  from: 'NETWORK',
  runId: '8b4c0d3e-2f5a-6b7c-9d0e-1f2a3b4c5d6e',
  payload: {
    // ...
  }
}
// Events from the delegated agent's execution  
{
  type: 'agent-execution-event-text-delta',
  from: 'NETWORK',
  runId: '8b4c0d3e-2f5a-6b7c-9d0e-1f2a3b4c5d6e',
  payload: {
    type: 'text-delta',
    payload: {
      // ...
    }
  }
}
// ...more events

Basic Mastra AI Setup with Braintrust Exporter (TypeScript)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/braintrust

Demonstrates the basic setup of Mastra AI with the Braintrust exporter integrated into the observability configuration. It initializes Mastra with a service name and configures the Braintrust exporter using environment variables.

import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { BraintrustExporter } from "@mastra/braintrust";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      braintrust: {
        serviceName: "my-service",
        exporters: [
          new BraintrustExporter({
            apiKey: process.env.BRAINTRUST_API_KEY,
            projectName: process.env.BRAINTRUST_PROJECT_NAME,
          }),
        ],
      },
    },
  }),
});

Install CopilotKit Packages (yarn)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Installs the core and UI packages for CopilotKit using yarn. This command is an alternative to npm for managing project dependencies.

yarn add @copilotkit/react-core @copilotkit/react-ui

Install Mastra AI SDK Package (npm, pnpm, yarn, bun)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/ai-sdk

Installs the @mastra/ai-sdk package, which provides custom API routes and utilities for streaming Mastra agents in AI SDK-compatible formats. This package is essential for integrating Mastra's streaming capabilities with the AI SDK.

npm install @mastra/ai-sdk@beta
pnpm add @mastra/ai-sdk@beta
yarn add @mastra/ai-sdk@beta
bun add @mastra/ai-sdk@beta

Initialize Mastra AI Project

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/sveltekit

Initializes a new Mastra AI project with default settings or interactive prompts. This command sets up the basic project structure and configuration files required for Mastra AI.

npx mastra@beta init --default
npx mastra@beta init

Add Mastra Scripts to package.json

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/sveltekit

Adds 'dev:mastra' and 'build:mastra' scripts to the 'package.json' file. These scripts are used to start the Mastra development server and build Mastra applications, respectively.

{
  "scripts": {
    ... ,
    "dev:mastra": "mastra dev",
    "build:mastra": "mastra build"
  }
}

Create Mastra Entry Point File

Source: https://mastra.ai/docs/v1/getting-started/installation

Creates the main index.ts file for the Mastra application, where the Mastra instance is created and agents are registered.

touch src/mastra/index.ts

Install WorkOS Node SDK

Source: https://mastra.ai/docs/v1/auth/workos

Install the WorkOS SDK for Node.js applications. This is required for server-side authentication flows, specifically for exchanging authorization codes.

npm install @workos-inc/node  
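
A minimal sketch of the server-side code exchange this enables, assuming the WorkOS User Management API; the client ID env var and the code value are placeholders from your OAuth callback:

import { WorkOS } from "@workos-inc/node";

const workos = new WorkOS(process.env.WORKOS_API_KEY!);

// Exchange the authorization code from the redirect for an authenticated user.
const { user } = await workos.userManagement.authenticateWithCode({
  clientId: process.env.WORKOS_CLIENT_ID!,    // placeholder
  code: "AUTHORIZATION_CODE_FROM_CALLBACK",   // placeholder
});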

Langfuse Exporter Complete Configuration Options (JavaScript)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/langfuse

Provides a comprehensive configuration example for the Langfuse exporter, including optional settings like base URL, realtime mode, log level, and Langfuse-specific options for environment, version, and release tracking.

new LangfuseExporter({  
  // Required credentials  
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,  
  secretKey: process.env.LANGFUSE_SECRET_KEY!,  
  
  // Optional settings  
  baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com  
  realtime: process.env.NODE_ENV === "development", // Dynamic mode selection  
  logLevel: "info", // Diagnostic logging: debug | info | warn | error  
  
  // Langfuse-specific options  
  options: {  
    environment: process.env.NODE_ENV, // Shows in UI for filtering  
    version: process.env.APP_VERSION, // Track different versions  
    release: process.env.GIT_COMMIT, // Git commit hash  
  },  
});  

Tool Execution Example

Source: https://mastra.ai/docs/v1/agents/networks

Shows an example where the routing agent directly calls a tool (weatherTool) to fulfill a user's request, bypassing other agents or workflows.

## API: Direct Tool Execution

### Description
Demonstrates how the routing agent can directly invoke a specific tool to satisfy a user's request when it determines that is the most efficient method.

### Method
`network()`

### Endpoint
Not applicable (Method call on an agent object)

### Parameters
#### Input
- **userMessage** (string) - Required - The message prompting direct tool execution (e.g., "What's the weather in London?").

### Request Example
```javascript
const result = await routingAgent.network("What's the weather in London?");

for await (const chunk of result) {
  console.log(chunk.type);
  if (chunk.type === "network-execution-event-step-finish") {
    console.log(chunk.payload.result);
  }
}
```

Response

Success Response (Stream of Events)

  • chunk.type (string) - The type of event indicating tool execution progress (e.g., tool-execution-start, tool-execution-end).
  • chunk.payload.result (any) - The output of the tool execution, typically available after network-execution-event-step-finish.

Response Example (Event Types)

routing-agent-start
routing-agent-end
tool-execution-start
tool-execution-end
network-execution-event-step-finish

Basic Mastra AI Setup with Langfuse Exporter (TypeScript)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/langfuse

Initializes Mastra AI with the Langfuse exporter, configuring it to send traces to Langfuse. Requires environment variables for credentials and specifies a service name.

import { Mastra } from "@mastra/core";  
import { Observability } from "@mastra/observability";  
import { LangfuseExporter } from "@mastra/langfuse";  
  
export const mastra = new Mastra({  
  observability: new Observability({  
    configs: {  
      langfuse: {  
        serviceName: "my-service",  
        exporters: [  
          new LangfuseExporter({  
            publicKey: process.env.LANGFUSE_PUBLIC_KEY!,  
            secretKey: process.env.LANGFUSE_SECRET_KEY!,  
            baseUrl: process.env.LANGFUSE_BASE_URL,  
            options: {  
              environment: process.env.NODE_ENV,  
            },  
          }),  
        ],  
      },  
    },  
  }),  
});  

Install LangSmith Exporter using npm

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/langsmith

Installs the beta version of the Mastra LangSmith exporter package. This is the first step to integrate LangSmith's tracing capabilities into your Mastra AI project.

npm install @mastra/langsmith@beta  

Add Dev and Build Scripts to package.json

Source: https://mastra.ai/docs/v1/getting-started/installation

Configures 'dev' and 'build' scripts in the package.json file for running Mastra development server and building the project.

{
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "dev": "mastra dev",
    "build": "mastra build"
  }
}

Initialize Mastra (Interactive CLI)

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/vite-react

Initializes Mastra in your project using the interactive CLI, allowing for custom setup options. This is an alternative to the one-liner command.

npx mastra@latest init  

Monorepo Directory Structure Example

Source: https://mastra.ai/docs/v1/deployment/monorepo

Illustrates a typical monorepo layout with Mastra application code located in apps/api and shared packages in packages/.

apps/  
├── api/  
│   ├── src/  
│   │   └── mastra/  
│   │       ├── agents/  
│   │       ├── tools/  
│   │       ├── workflows/  
│   │       └── index.ts  
│   ├── package.json  
│   └── tsconfig.json  
└── web/  
packages/  
├── ui/  
└── utils/  
package.json  


Setup Mastra with Cloud Exporter (TypeScript)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/cloud

Configures Mastra with observability enabled, including the CloudExporter. This setup requires a Mastra Cloud account and an access token, typically provided via environment variables.

import { Mastra } from "@mastra/core";
import { Observability, CloudExporter } from "@mastra/observability";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      production: {
        serviceName: "my-service",
        exporters: [
          new CloudExporter(), // Uses MASTRA_CLOUD_ACCESS_TOKEN env var
        ],
      },
    },
  }),
});

Basic Mastra Initialization with Auth0

Source: https://mastra.ai/docs/v1/auth/auth0

Initializes the Mastra server with the MastraAuthAuth0 authentication provider. This example assumes default Auth0 configuration is sufficient.

import { Mastra } from "@mastra/core";
import { MastraAuthAuth0 } from "@mastra/auth-auth0";

export const mastra = new Mastra({
  // ..
  server: {
    auth: new MastraAuthAuth0(),
  },
});

Create Project Directory

Source: https://mastra.ai/docs/v1/getting-started/installation

Creates a new directory for the Mastra project and changes the current directory into it.

mkdir my-first-agent && cd my-first-agent

Install Mastra Packages (npm, yarn, pnpm, bun)

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/vite-react

Installs the necessary Mastra packages for your Vite/React project using different package managers. Ensure you have the correct package manager for your project.

npm install mastra@beta @mastra/core@beta @mastra/libsql@beta @mastra/client-js@beta  
yarn add mastra@beta @mastra/core@beta @mastra/libsql@beta @mastra/client-js@beta  
pnpm add mastra@beta @mastra/core@beta @mastra/libsql@beta @mastra/client-js@beta  
bun add mastra@beta @mastra/core@beta @mastra/libsql@beta @mastra/client-js@beta  

Install Mastra Evals Package

Source: https://mastra.ai/docs/v1/evals/overview

Installs the Mastra Evals package, which is required to use Mastra's scorers feature. This command is used in a Node.js environment.

npm install @mastra/evals@beta  

Install CopilotKit & Mastra for Next.js (npm)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Installs all necessary packages for integrating CopilotKit and Mastra within a Next.js application using npm. This includes core, UI, runtime, and Mastra packages.

npm install @copilotkit/react-core @copilotkit/react-ui @copilotkit/runtime @ag-ui/mastra@beta

Environment Variables (.env)

Source: https://mastra.ai/docs/v1/getting-started/installation

Example of an environment variable for storing the Google Generative AI API key. Other providers like OpenAI and Anthropic are also supported.

GOOGLE_GENERATIVE_AI_API_KEY=<your-api-key>

Bootstrap Mastra Server Project

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/assistant-ui

Command to create a new Mastra project using an interactive wizard. This command helps scaffold the server project by prompting for details and setting up basic configurations.

npx create-mastra@beta

Initialize Mastra Core Configuration

Source: https://mastra.ai/docs/v1/frameworks/servers/express

Creates a Mastra configuration file and initializes the Mastra core instance. This setup is essential for any Mastra application, serving as the entry point for defining and managing agents.

import { Mastra } from "@mastra/core";  
  
export const mastra = new Mastra({});  

Run Mastra Server

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/assistant-ui

Command to start the Mastra server in development mode. The server will typically run on http://localhost:4111, making agent endpoints accessible.

npm run dev

Install Mastra Braintrust Exporter (npm)

Source: https://mastra.ai/docs/v1/observability/tracing/exporters/braintrust

Installs the beta version of the Mastra Braintrust exporter package using npm. This is the first step to integrating Braintrust for LLM application quality monitoring.

npm install @mastra/braintrust@beta

Install Mastra MongoDB Package

Source: https://mastra.ai/docs/v1/memory/storage/memory-with-mongodb

To utilize Mastra's MongoDB storage capabilities, you need to install the official @mastra/mongodb package. This package provides the necessary classes and functions to integrate MongoDB with Mastra's memory system.

npm install @mastra/mongodb@beta

Create Weather Tool File

Source: https://mastra.ai/docs/v1/getting-started/installation

Creates the directory structure and an empty file for the weather tool.

mkdir -p src/mastra/tools && touch src/mastra/tools/weather-tool.ts
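
A minimal sketch of what src/mastra/tools/weather-tool.ts might contain, assuming the createTool helper and returning placeholder data instead of calling a real weather API:

import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "get-weather",
  description: "Returns the current weather for a city",
  inputSchema: z.object({ city: z.string() }),
  execute: async ({ context }) => {
    // Placeholder: swap in a real weather API call here.
    return { city: context.city, temperatureC: 20, conditions: "clear" };
  },
});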

Workflow Example

Source: https://mastra.ai/docs/v1/agents/networks

Illustrates how a routing agent can execute a defined workflow based on user input. It shows the process of iterating through workflow execution events.

## API: Execute Workflow

### Description
Executes a predefined workflow based on the user's message. The routing agent interprets the message and routes it to the appropriate workflow, which may involve multiple agent steps.

### Method
`network()`

### Endpoint
Not applicable (Method call on an agent object)

### Parameters
#### Input
- **userMessage** (string) - Required - The message that triggers the workflow execution (e.g., "Tell me some historical facts about London").

### Request Example
```javascript
const result = await routingAgent.network(
  "Tell me some historical facts about London",
);

for await (const chunk of result) {
  console.log(chunk.type);
  if (chunk.type === "network-execution-event-step-finish") {
    console.log(chunk.payload.result);
  }
}
```

Response

Success Response (Stream of Events)

  • chunk.type (string) - The type of event emitted during workflow execution (e.g., workflow-execution-start, workflow-execution-event-workflow-step-result).
  • chunk.payload.result (any) - The result of a completed workflow step.

Response Example (Event Types)

routing-agent-end
workflow-execution-start
workflow-execution-event-workflow-start
workflow-execution-event-workflow-step-start
workflow-execution-event-workflow-step-result
workflow-execution-event-workflow-finish
workflow-execution-end
routing-agent-start
network-execution-event-step-finish

Usage Example: Interacting with Upstash Agent (TypeScript)

Source: https://mastra.ai/docs/v1/memory/storage/memory-with-upstash

Provides a practical example of how to use the configured Upstash agent to send messages and retrieve responses, demonstrating memory persistence and scoped recall options.

import "dotenv/config";  
  
import { mastra } from "./mastra";  
  
const threadId = "123";  
const resourceId = "user-456";  
  
const agent = mastra.getAgent("upstashAgent");  
  
const message = await agent.stream("My name is Mastra", {  
  memory: {  
    thread: threadId,  
    resource: resourceId,  
  },  
});  
  
await message.textStream.pipeTo(new WritableStream());  
  
const stream = await agent.stream("What's my name?", {  
  memory: {  
    thread: threadId,  
    resource: resourceId,  
  },  
  memoryOptions: {  
    lastMessages: 5,  
    semanticRecall: {  
      topK: 3,  
      messageRange: 2,  
    },  
  },  
});  
  
for await (const chunk of stream.textStream) {  
  process.stdout.write(chunk);  
}  



Install @ai-sdk/react Package (npm, pnpm, yarn, bun)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/ai-sdk

Installs the AI SDK React package, which provides hooks for integrating frontend components with Mastra agents. This package is essential for using features like useChat and useCompletion.

npm install @ai-sdk/react
pnpm add @ai-sdk/react
yarn add @ai-sdk/react
bun add @ai-sdk/react

Create Assistant UI Project

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/assistant-ui

Command to generate a new Assistant UI project. This command initializes a frontend project using the Assistant UI library.

npx assistant-ui@latest create

Initialize Mastra with CloudflareDeployer

Source: https://mastra.ai/docs/v1/deployment/cloud-providers/cloudflare-deployer

Initializes the Mastra application with the CloudflareDeployer. This example shows how to configure the deployer with a project name and environment variables.

import { Mastra } from "@mastra/core";  
import { CloudflareDeployer } from "@mastra/deployer-cloudflare";  
  
export const mastra = new Mastra({  
  // ...  
  deployer: new CloudflareDeployer({  
    projectName: "hello-mastra",  
    env: {  
      NODE_ENV: "production",  
    },  
  }),  
});  

Basic Mastra Observability Configuration (TypeScript)

Source: https://mastra.ai/docs/v1/observability/tracing/overview

Configures Mastra with basic observability settings, enabling default and cloud exporters, and setting up local storage for trace persistence. This configuration is a minimal setup for enabling tracing.

import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  // ... other config
  observability: new Observability({
    default: { enabled: true }, // Enables DefaultExporter and CloudExporter
  }),
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: "file:./mastra.db", // Storage is required for tracing
  }),
});

Initialize Mastra with NetlifyDeployer

Source: https://mastra.ai/docs/v1/deployment/cloud-providers/netlify-deployer

Demonstrates how to initialize the Mastra application with the NetlifyDeployer instance, enabling Netlify-specific deployment configurations within the Mastra core.

import { Mastra } from "@mastra/core";
import { NetlifyDeployer } from "@mastra/deployer-netlify";

export const mastra = new Mastra({
  // ...
  deployer: new NetlifyDeployer(),
});

Basic Mastra Initialization with WorkOS Auth

Source: https://mastra.ai/docs/v1/auth/workos

Initialize the Mastra server with the MastraAuthWorkos authentication module. This example assumes WorkOS authentication is configured via environment variables.

import { Mastra } from "@mastra/core";  
import { MastraAuthWorkos } from "@mastra/auth-workos";  
  
export const mastra = new Mastra({
  // ..  
  server: {
    auth: new MastraAuthWorkos(),  
  },
});  

Install CopilotKit & Mastra for Next.js (yarn)

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/copilotkit

Installs all necessary packages for integrating CopilotKit and Mastra within a Next.js application using yarn. This command provides the yarn alternative for dependency management.

yarn add @copilotkit/react-core @copilotkit/react-ui @copilotkit/runtime @ag-ui/mastra@beta

Install OpenRouter Provider and Uninstall OpenAI SDK

Source: https://mastra.ai/docs/v1/frameworks/agentic-uis/openrouter

Installs the necessary OpenRouter AI SDK provider package and removes the default OpenAI SDK package. This is a crucial step for configuring Mastra to use OpenRouter.

npm uninstall @ai-sdk/openai
npm install @openrouter/ai-sdk-provider

Install Mastra Dependencies for Express.js

Source: https://mastra.ai/docs/v1/frameworks/servers/express

Installs the necessary Mastra packages and dependencies for an Express project, including core Mastra libraries, an OpenAI SDK, and Zod for schema validation. This command prepares your project to integrate Mastra agents.

npm install mastra@beta @mastra/core@beta @mastra/libsql@beta zod@^3.0.0 @ai-sdk/openai@^1.0.0  

Basic Express Server Setup (TypeScript)

Source: https://mastra.ai/docs/v1/frameworks/servers/express

Sets up a basic Express.js server with a root route. This serves as the foundation for adding more complex API endpoints. It requires the 'express' library.

import express, { Request, Response } from "express";  
  
const app = express();  
const port = 3456;  
  
app.get("/", (req: Request, res: Response) => {  
  res.send("Hello, world!");  
});  
  
app.listen(port, () => {  
  console.log(`Server is running at http://localhost:${port}`);  
});  
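
Building on this, a sketch of an endpoint that calls a Mastra agent; it assumes the weatherAgent from the getting-started examples is registered and that express.json() middleware parses the request body:

import express, { Request, Response } from "express";
import { mastra } from "./mastra";

const app = express();
app.use(express.json()); // parse JSON bodies

app.post("/chat", async (req: Request, res: Response) => {
  const agent = mastra.getAgent("weatherAgent"); // assumed agent name
  const result = await agent.generate(req.body.message);
  res.json({ reply: result.text });
});

app.listen(3456);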

Workflow Snapshot Example (JSON)

Source: https://mastra.ai/docs/v1/server-db/storage

Presents the JSON structure for a workflow snapshot, used to save and rehydrate workflow states. Includes current state, context, active paths, run ID, and timestamp.

{
  "value": {
    "currentState": "running"
  },
  "context": {
    "stepResults": {},
    "attempts": {},
    "triggerData": {}
  },
  "activePaths": [],
  "runId": "550e8400-e29b-41d4-a716-446655440000",
  "timestamp": 1648176000000
}

Create Test Route Directory and Action File (TypeScript)

Source: https://mastra.ai/docs/v1/frameworks/web-frameworks/sveltekit

Creates a new directory for a test route and an associated +page.server.ts file. This file defines a SvelteKit Action that interacts with a Mastra agent to get weather information.

mkdir src/routes/test  
touch src/routes/test/+page.server.ts

import type { Actions } from "./$types";
import { mastra } from "../../mastra";

export const actions = {
  default: async (event) => {
    const city = (await event.request.formData()).get("city")!.toString();
    const agent = mastra.getAgent("weatherAgent");

    const result = await agent.generate(`What's the weather like in ${city}?`);
    return { result: result.text };
  },
} satisfies Actions;

Initialize Inngest for Mastra (Development)

Source: https://mastra.ai/docs/v1/workflows/inngest-workflow

Initializes the Inngest client for development environments. This setup includes specifying the Inngest function ID, the local Inngest server base URL, enabling development mode, and applying the realtime middleware for enhanced observability.

import { Inngest } from "inngest";  
import { realtimeMiddleware } from "@inngest/realtime/middleware";  
  
export const inngest = new Inngest({  
  id: "mastra",  
  baseUrl: "http://localhost:8288",  
  isDev: true,  
  middleware: [realtimeMiddleware()],  
});  

Install Mastra with create mastra CLI

Source: https://mastra.ai/en/docs/getting-started/installation

This command initiates the Mastra project setup wizard, which guides you through creating a new project with example agents, workflows, and tools. It's the fastest way to get started. Requires Node.js 20+ and an LLM provider API key.

npm create mastra@latest -y
pnpm create mastra@latest -y
yarn create mastra@latest -y
bun create mastra@latest -y

Example: Install Text-to-SQL Mastra Template

Source: https://mastra.ai/en/docs/getting-started/templates

Demonstrates installing a specific Mastra template for a text-to-SQL application using npx. This is a practical example of the create-mastra command.

npx create-mastra@latest --template text-to-sql

Install Mastra Core with bun

Source: https://mastra.ai/en/examples/observability/basic-ai-tracing

Installs the Mastra core library using bun. Bun is a fast JavaScript runtime that can also manage packages.

bun add @mastra/core

Install Mastra Core with npm

Source: https://mastra.ai/en/examples/observability/basic-ai-tracing

Installs the Mastra core library using npm. This is the first step in setting up Mastra for AI tracing and other functionalities.

npm install @mastra/core

Run Development Server

Source: https://mastra.ai/en/docs/getting-started/installation

Commands to start the development server using different package managers (npm, pnpm, yarn, bun). This allows for testing the implemented agent and tool setup.

npm run dev
pnpm run dev
yarn run dev
bun run dev

Install Mastra MCP Dependencies with pnpm

Source: https://mastra.ai/en/examples/agents/deploying-mcp-server

Installs the necessary Mastra MCP and core packages, along with the tsup build tool. This command is essential for setting up the project environment.

pnpm add @mastra/mcp @mastra/core tsup

Install Mastra Core with pnpm

Source: https://mastra.ai/en/examples/observability/basic-ai-tracing

Installs the Mastra core library using pnpm. This command is an alternative to npm for dependency management.

pnpm add @mastra/core

Install Mastra Template using create-mastra

Source: https://mastra.ai/en/reference/templates/overview

Installs a Mastra template using the npx command, creating a complete project structure with necessary code and configurations. This is the primary method for starting a new project with a Mastra template.

npx create-mastra@latest --template template-name

Install Mastra Core with yarn

Source: https://mastra.ai/en/examples/observability/basic-ai-tracing

Installs the Mastra core library using yarn. This command is another alternative for managing Node.js dependencies.

yarn add @mastra/core

Start Development Server (Shell)

Source: https://mastra.ai/en/examples/agents/ai-sdk-v5-integration

This shell command starts the Next.js development server using pnpm. After executing this command, the application will be accessible at http://localhost:3000.

pnpm dev

Install Dependencies and Run Inngest Dev Server

Source: https://mastra.ai/en/examples/workflows/inngest-workflow

Installs necessary Mastra and Inngest packages using npm and starts the Inngest development server for local testing. It configures the server to connect to a local application at http://host.docker.internal:3000/inngest/api.

npm install @mastra/inngest inngest @mastra/core @mastra/deployer @hono/node-server @ai-sdk/openai

docker run --rm -p 8288:8288 \
  inngest/inngest \
  inngest dev -u http://host.docker.internal:3000/inngest/api

Set up Mastra MCPServer with TypeScript

Source: https://mastra.ai/en/examples/agents/deploying-mcp-server

Defines a basic Mastra MCPServer using the stdio transport in TypeScript. It includes importing the MCPServer class and a sample weather tool, then starts the server. Ensure your tools and server name are correctly configured.

#!/usr/bin/env node
import { MCPServer } from "@mastra/mcp";
import { weatherTool } from "./tools";
 
const server = new MCPServer({
  name: "my-mcp-server",
  version: "1.0.0",
  tools: { weatherTool },
});
 
server.startStdio().catch((error) => {
  console.error("Error running MCP server:", error);
  process.exit(1);
});

Install Cedar-OS CLI

Source: https://mastra.ai/en/docs/frameworks/agentic-uis/cedar-os

Command to install and run the Cedar-OS CLI for project setup. This is the initial step for integrating Cedar-OS.

npx cedar-os-cli plant-seed

Mastra Environment Variables Example (.env.example)

Source: https://mastra.ai/en/reference/templates/overview

An example file demonstrating required environment variables for Mastra templates. It includes placeholders for API keys for various LLM providers and other services, guiding users on necessary configurations.

# LLM provider API keys (choose one or more)
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key_here
 
# Other service API keys as needed
OTHER_SERVICE_API_KEY=your_api_key_here

Package Installation (npm)

Source: https://mastra.ai/en/docs/memory/storage/memory-with-libsql

Lists the npm packages required to use Mastra's LibSQL memory integration and fastembed for local embeddings. These commands ensure that the necessary libraries are installed in the project to run the provided code examples.

npm install @mastra/libsql
npm install @mastra/fastembed

Multi-Environment Setup with Vector Query Tools (JavaScript)

Source: https://mastra.ai/en/examples/rag/usage/database-specific-config

Demonstrates setting up vector query tools with different configurations for development, staging, and production environments in JavaScript. It covers creating environment-specific tools and dynamically switching environments at runtime using RuntimeContext.

import { openai } from "@ai-sdk/openai";
import { createVectorQueryTool } from "@mastra/rag";
import { RuntimeContext } from "@mastra/core/runtime-context";
 
// Base configuration
const createSearchTool = (environment) => {
  return createVectorQueryTool({
    vectorStoreName: "pinecone",
    indexName: "documents", 
    model: openai.embedding("text-embedding-3-small"),
    databaseConfig: {
      pinecone: {
        namespace: environment
      }
    }
  });
};
 
// Create environment-specific tools
const devSearchTool = createSearchTool('dev');
const prodSearchTool = createSearchTool('prod');
 
// Or use runtime override
const dynamicSearchTool = createVectorQueryTool({
  vectorStoreName: "pinecone",
  indexName: "documents",
  model: openai.embedding("text-embedding-3-small")
});
 
// Switch environment at runtime
const switchEnvironment = async (environment, query) => {
  const runtimeContext = new RuntimeContext();
  runtimeContext.set('databaseConfig', {
    pinecone: {
      namespace: environment
    }
  });
 
  return await dynamicSearchTool.execute({
    context: { queryText: query },
    mastra,
    runtimeContext
  });
};

Multi-Environment Setup with Vector Query Tools (TypeScript)

Source: https://mastra.ai/en/examples/rag/usage/database-specific-config

Demonstrates setting up vector query tools with different configurations for development, staging, and production environments. It shows how to create environment-specific tools and how to dynamically switch environments at runtime using RuntimeContext.

import { openai } from "@ai-sdk/openai";
import { createVectorQueryTool } from "@mastra/rag";
import { RuntimeContext } from "@mastra/core/runtime-context";
 
// Base configuration
const createSearchTool = (environment: 'dev' | 'staging' | 'prod') => {
  return createVectorQueryTool({
    vectorStoreName: "pinecone",
    indexName: "documents",
    model: openai.embedding("text-embedding-3-small"),
    databaseConfig: {
      pinecone: {
        namespace: environment
      }
    }
  });
};
 
// Create environment-specific tools
const devSearchTool = createSearchTool('dev');
const prodSearchTool = createSearchTool('prod');
 
// Or use runtime override
const dynamicSearchTool = createVectorQueryTool({
  vectorStoreName: "pinecone", 
  indexName: "documents",
  model: openai.embedding("text-embedding-3-small")
});
 
// Switch environment at runtime
const switchEnvironment = async (environment: string, query: string) => {
  const runtimeContext = new RuntimeContext();
  runtimeContext.set('databaseConfig', {
    pinecone: {
      namespace: environment
    }
  });
 
  return await dynamicSearchTool.execute({
    context: { queryText: query },
    mastra,
    runtimeContext
  });
};

Initialize Project and Install Dependencies (bun)

Source: https://mastra.ai/en/docs/getting-started/installation

Initializes a new Node.js project with bun and installs necessary development and core Mastra dependencies, including TypeScript. Requires Node.js 20+.

bun init -y
bun add -d typescript @types/node mastra@latest
bun add @mastra/core@latest zod@^4

Get Agent Instructions (JavaScript)

Source: https://mastra.ai/en/reference/agents/getInstructions

Retrieves the instructions for an agent. This is a basic usage example without any options.

await agent.getInstructions();

Copy Environment Example File

Source: https://mastra.ai/en/docs/getting-started/templates

Copies the example environment file (.env.example) to a new file named .env. This is a common step to set up project-specific configurations and secrets.

cp .env.example .env

Custom OpenTelemetry Exporter Setup (Langfuse Example)

Source: https://mastra.ai/en/docs/observability/nextjs-tracing

This option demonstrates setting up a custom OpenTelemetry exporter, using Langfuse as an example. It involves installing necessary dependencies and creating an instrumentation.ts file to initialize the NodeSDK with a specific exporter and resource attributes, including the service name.

npm install @opentelemetry/api langfuse-vercel

import {
  NodeSDK,
  ATTR_SERVICE_NAME,
  resourceFromAttributes,
} from "@mastra/core/telemetry/otel-vendor";
import { LangfuseExporter } from "langfuse-vercel";

export function register() {
  const exporter = new LangfuseExporter({
    // ... Langfuse config
  });

  const sdk = new NodeSDK({
    resource: resourceFromAttributes({
      [ATTR_SERVICE_NAME]: "ai",
    }),
    traceExporter: exporter,
  });

  sdk.start();
}

Start Mastra Development Server

Source: https://mastra.ai/en/docs/getting-started/studio

Instructions to start the local Mastra development server using various package managers. This command initiates the server, making the Playground UI and REST API accessible.

npm run dev
pnpm run dev
yarn run dev
bun run dev
mastra dev

Install AI SDK v5 and Mastra Dependencies (JSON)

Source: https://mastra.ai/en/examples/agents/ai-sdk-v5-integration

This package.json file lists the project's dependencies, including beta versions of @ai-sdk/openai, @ai-sdk/react, and Mastra libraries, along with Next.js, React, SWR, and Zod. It's crucial for ensuring compatibility with AI SDK v5.

{
  "dependencies": {
    "@ai-sdk/openai": "2.0.0-beta.1",
    "@ai-sdk/react": "2.0.0-beta.1",
    "@mastra/core": "0.0.0-ai-v5-20250625173645",
    "@mastra/libsql": "0.0.0-ai-v5-20250625173645",
    "@mastra/memory": "0.0.0-ai-v5-20250625173645",
    "next": "15.1.7",
    "react": "^19.0.0",
    "react-dom": "^19.0.0",
    "swr": "^2.3.3",
    "zod": "^3.25.67"
  }
}

Install Mastra AI Evals

Source: https://mastra.ai/en/examples/evals/content-similarity

Installs the Mastra AI Evals package, which provides tools for evaluating content similarity. This is the first step before using any of the provided metrics.

npm install @mastra/evals
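
A minimal sketch of using the package afterwards, assuming the ContentSimilarityMetric export and its measure(input, output) signature:

import { ContentSimilarityMetric } from "@mastra/evals/nlp";

const metric = new ContentSimilarityMetric();

const result = await metric.measure(
  "The cat sat on the mat.",        // reference text
  "A cat was sitting on the mat.",  // generated output
);

console.log(result.score); // similarity score between 0 and 1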

Initialize Project and Install Dependencies (npm)

Source: https://mastra.ai/en/docs/getting-started/installation

Initializes a new Node.js project with npm and installs necessary development and core Mastra dependencies, including TypeScript. Requires Node.js 20+.

npm init -y
npm install -D typescript @types/node mastra@latest
npm install @mastra/core@latest zod@^4

Install Mastra Client SDK with bun

Source: https://mastra.ai/en/docs/server-db/mastra-client

Installs the latest version of the Mastra Client SDK using bun. bun is a fast all-in-one JavaScript runtime.

bun add @mastra/client-js@latest

OtelExporter - Basic Usage Example

Source: https://mastra.ai/en/reference/observability/ai-tracing/exporters/otel

Demonstrates the basic setup of the OtelExporter, configuring it to send traces to a Signoz instance using an API key. It shows how to import the exporter and instantiate it with provider-specific settings.

import { OtelExporter } from '@mastra/otel-exporter';

const exporter = new OtelExporter({
  provider: {
    signoz: {
      apiKey: process.env.SIGNOZ_API_KEY,
      region: 'us',
    }
  },
});

Initialize Project and Install Dependencies (pnpm)

Source: https://mastra.ai/en/docs/getting-started/installation

Initializes a new Node.js project with pnpm and installs necessary development and core Mastra dependencies, including TypeScript. Requires Node.js 20+.

pnpm init -y
pnpm add -D typescript @types/node mastra@latest
pnpm add @mastra/core@latest zod@^4

Initialize Project and Install Dependencies (yarn)

Source: https://mastra.ai/en/docs/getting-started/installation

Initializes a new Node.js project with yarn and installs necessary development and core Mastra dependencies, including TypeScript. Requires Node.js 20+.

yarn init -y
yarn add -D typescript @types/node mastra@latest
yarn add @mastra/core@latest zod@^4

Usage Example

Source: https://mastra.ai/en/reference/deployer/deployer

An example demonstrating how to create a custom deployer by extending the abstract Deployer class and implementing the deploy method.

import { Deployer } from "@mastra/deployer";
 
// Create a custom deployer by extending the abstract Deployer class
class CustomDeployer extends Deployer {
  constructor() {
    super({ name: "custom-deployer" });
  }
 
  // Implement the abstract deploy method
  async deploy(outputDirectory: string): Promise<void> {
    // Prepare the output directory
    await this.prepare(outputDirectory);
 
    // Bundle the application
    await this._bundle("server.ts", "mastra.ts", outputDirectory);
 
    // Custom deployment logic
    // ...
  }
}

Initialize Mastra Project with CLI

Source: https://mastra.ai/en/docs/frameworks/agentic-uis/openrouter

Initializes a new Mastra project using the npx create-mastra command. This CLI tool guides users through project setup, including naming, component selection (Agents recommended), and initial provider choice (OpenAI recommended, to be manually changed later).

npx create-mastra@latest

Start Mastra Dev Server (CLI)

Source: https://mastra.ai/en/docs/frameworks/web-frameworks/sveltekit

Starts the Mastra Dev Server directly using the mastra CLI command. This is an alternative to using npm scripts for starting the development server.

mastra dev

Vercel's OpenTelemetry Setup

Source: https://mastra.ai/en/docs/observability/nextjs-tracing

This method utilizes Vercel's built-in OpenTelemetry integration. It requires installing the @vercel/otel package and creating an instrumentation.ts file to register the OpenTelemetry setup with your specified service name.

npm install @opentelemetry/api @vercel/otel

import { registerOTel } from "@vercel/otel";

export function register() {
  registerOTel({ serviceName: "your-project-name" });
}

Install Mastra Template using bun

Source: https://mastra.ai/en/docs/getting-started/templates

Installs a Mastra template using the bun runtime. This command uses bun create to run the latest create-mastra and allows specifying a template.

bun create mastra@latest --template template-name

Retrieve Trace IDs from Mastra Workflow Runs

Source: https://mastra.ai/en/docs/observability/ai-tracing

This example illustrates how to get the traceId from Mastra workflow executions, both for asynchronous runs started with createRunAsync and for streamed workflows. The trace ID is available in the result of start and in the final state of a streamed workflow.

// Create a workflow run
const run = await mastra.getWorkflow('myWorkflow').createRunAsync();
 
// Start the workflow
const result = await run.start({
  inputData: { data: 'process this' }
});
 
console.log('Trace ID:', result.traceId);
 
// Or stream the workflow
const { stream, getWorkflowState } = run.stream({
  inputData: { data: 'process this' }
});
 
// Get the final state which includes the trace ID
const finalState = await getWorkflowState();
console.log('Trace ID:', finalState.traceId);

Manually Create Mastra Project Directory

Source: https://mastra.ai/en/docs/getting-started/installation

Creates a new directory for your Mastra project and navigates into it. This is the first step in the manual installation process.

mkdir my-first-agent && cd my-first-agent

Install Mastra Client SDK with npm

Source: https://mastra.ai/en/docs/server-db/mastra-client

Installs the latest version of the Mastra Client SDK using npm. This is the first step to integrating Mastra's capabilities into your project.

npm install @mastra/client-js@latest

Navigate and Configure Mastra Project

Source: https://mastra.ai/en/reference/templates/overview

Provides commands for navigating into the newly created project directory, copying environment variable configuration, and installing project dependencies. These steps are crucial after template installation.

cd your-project-name
cp .env.example .env
npm install

Install Voice Provider Package

Source: https://mastra.ai/en/docs/voice/speech-to-text

Install specific voice provider packages using pnpm to extend Mastra's STT capabilities. This example shows how to add the OpenAI provider.

pnpm add @mastra/voice-openai  # Example for OpenAI

Install Mastra Client SDK with pnpm

Source: https://mastra.ai/en/docs/server-db/mastra-client

Installs the latest version of the Mastra Client SDK using pnpm. pnpm is a performant package manager that saves disk space and speeds up installations.

pnpm add @mastra/client-js@latest

Create a New Mastra Project (Bun)

Source: https://mastra.ai/en/reference/cli/create-mastra

Initializes a new Mastra project using Bun. This command scaffolds a complete Mastra setup in a dedicated directory. It runs interactively by default.

bun create mastra@latest

Register Mastra Agent (TypeScript)

Source: https://mastra.ai/en/examples/agents/runtime-context

Registers a new Mastra agent named 'supportAgent' with the Mastra core instance. This is the initial setup step required before using any agents. It imports the Mastra class and the specific agent implementation.

import { Mastra } from "@mastra/core/mastra";
import { supportAgent } from "./agents/support-agent";
 
export const mastra = new Mastra({
  agents: { supportAgent }
});

Mastra Client Initialization

Source: https://mastra.ai/en/reference/client-js/mastra-client

Example of how to initialize the Mastra Client with a base URL.

### Description
Initializes the Mastra Client with the necessary configuration, including the base URL for API requests.

### Method
`new MastraClient(options)`

### Parameters
#### Request Body
- **baseUrl** (string) - Required - The base URL for the Mastra API.
- **retries** (number) - Optional - The number of times a request will be retried on failure.
- **backoffMs** (number) - Optional - The initial delay in milliseconds before retrying a failed request.
- **maxBackoffMs** (number) - Optional - The maximum backoff time in milliseconds.
- **headers** (Record<string, string>) - Optional - Custom HTTP headers to include with every request.
- **credentials** ("omit" | "same-origin" | "include") - Optional - Credentials mode for requests.

### Request Example
```typescript
import { MastraClient } from "@mastra/client-js";

export const mastraClient = new MastraClient({
  baseUrl: "http://localhost:4111/",
  retries: 5,
  backoffMs: 500,
});
```

Response

Success Response (200)

  • MastraClient - An instance of the MastraClient.

Response Example

// No direct JSON response, returns a MastraClient instance.

Example Usage of RAG Agent for Question Answering

Source: https://mastra.ai/en/examples/rag/usage/basic-rag

Demonstrates how to use the configured RAG agent to generate a response based on a prompt and retrieved context. The prompt guides the agent to use only the provided context for its answer.

const prompt = `
[Insert query based on document here]
Please base your answer only on the context provided in the tool.
If the context doesn't contain enough information to fully answer the question, please state that explicitly.
`;

const completion = await agent.generate(prompt);
console.log(completion.text);

MongoDB Store Constructor Examples

Source: https://mastra.ai/en/reference/storage/mongodb

Demonstrates different ways to instantiate the MongoDBStore. Includes a basic connection and an example with advanced MongoDB client options for configuration.

import { MongoDBStore } from "@mastra/mongodb";
 
// Basic connection without custom options
const store1 = new MongoDBStore({
  url: "mongodb+srv://user:password@cluster.mongodb.net",
  dbName: "mastra_storage",
});
 
// Using connection string with options
const store2 = new MongoDBStore({
  url: "mongodb+srv://user:password@cluster.mongodb.net",
  dbName: "mastra_storage",
  options: {
    retryWrites: true,
    maxPoolSize: 10,
    serverSelectionTimeoutMS: 5000,
    socketTimeoutMS: 45000,
  },
});

Install Mastra Core Package

Source: https://mastra.ai/en/docs/agents/overview

Installs the Mastra core package required for agent functionality. This is a prerequisite for setting up and creating agents.

npm install @mastra/core
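
After installing, a minimal agent can be defined with the core `Agent` class. A short sketch (the agent name, instructions, and model id are illustrative; `openai` comes from the separate `@ai-sdk/openai` package):

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Minimal agent definition; register it with a Mastra instance to expose it.
export const weatherAgent = new Agent({
  name: "weather-agent",
  instructions: "You answer questions about the weather.",
  model: openai("gpt-4o"),
});
```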

Install Dependencies with pnpm

Source: https://mastra.ai/en/reference/observability/otel-tracing/providers/keywordsai

Installs the necessary project dependencies using the pnpm package manager. Ensure you have Node.js and pnpm installed globally.

pnpm install

Shell Curl Command to Start Mastra Workflow

Source: https://mastra.ai/en/examples/workflows_legacy/workflow-variables

This curl command demonstrates how to asynchronously start the 'user-registration' Mastra workflow. It sends a JSON payload containing user email, name, and age to the specified API endpoint.

curl --location 'http://localhost:4111/api/workflows/user-registration/start-async' \
     --header 'Content-Type: application/json' \
     --data '{ 
       "email": "user@example.com",
       "name": "John Doe",
       "age": 25
     }'

Install Mastra Supabase Auth Package

Source: https://mastra.ai/en/docs/auth/supabase

Install the @mastra/auth-supabase package using npm. This is a prerequisite for using the MastraAuthSupabase class.

npm install @mastra/auth-supabase@latest
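
Once installed, the class is wired into the Mastra server the same way as the other auth providers in this document. A hedged sketch (assuming the class reads Supabase credentials from environment variables by default, as the Firebase example further below does for its provider):

```typescript
import { Mastra } from "@mastra/core/mastra";
import { MastraAuthSupabase } from "@mastra/auth-supabase";

// Assumes Supabase credentials are provided via environment variables.
export const mastra = new Mastra({
  // ..
  server: {
    experimental_auth: new MastraAuthSupabase(),
  },
});
```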

Use Deployed Mastra MCPServer with MCPClient

Source: https://mastra.ai/en/examples/agents/deploying-mcp-server

Demonstrates how to instantiate an MCPClient and configure it to use a deployed Mastra MCPServer. It shows how to specify the command to run the server package, commonly using 'npx', and retrieves tools and toolsets from the server configuration.

import { MCPClient } from "@mastra/mcp";
 
const mcp = new MCPClient({
  servers: {
    // Give this MCP server instance a name
    yourServerName: {
      command: "npx",
      args: ["-y", "@your-org-name/your-package-name@latest"], // Replace with your package name
    },
  },
});
 
// You can then get tools or toolsets from this configuration to use in your agent
const tools = await mcp.getTools();
const toolsets = await mcp.getToolsets();

Usage Example

Source: https://mastra.ai/en/reference/observability/ai-tracing/exporters/langsmith

Demonstrates how to import and initialize the LangSmithExporter.

## Usage

### Description
Example of initializing the LangSmithExporter.

### Code Example
```javascript
import { LangSmithExporter } from '@mastra/langsmith';

const exporter = new LangSmithExporter({
  apiKey: process.env.LANGSMITH_API_KEY,
  apiUrl: 'https://api.smith.langchain.com',
  logLevel: 'info',
});
```

--------------------------------

### QdrantVector Constructor and Basic Usage (TypeScript)

Source: https://mastra.ai/en/reference/rag/qdrant

This snippet sketches how to initialize Mastra's QdrantVector store (from the @mastra/qdrant package) with connection details and an API key, and includes a basic query operation. It highlights the use of the indexName, queryVector, and topK parameters.

```typescript
import { QdrantVector } from "@mastra/qdrant";

// Initialize the QdrantVector store
const store = new QdrantVector({
  url: "https://xyz-example.eu-central.aws.cloud.qdrant.io:6333",
  apiKey: "YOUR_API_KEY",
  https: true,
});

// Example query
const queryVector = [0.1, 0.2, 0.3]; // Replace with your actual query vector
const results = await store.query({
  indexName: "my_index",
  queryVector,
  topK: 5,
});
console.log(results);
```

Example: Low Word Inclusion Score in JavaScript

Source: https://mastra.ai/en/examples/evals/custom-native-javascript-eval

Shows an example of using the WordInclusionMetric to get a low score. In this scenario, none of the query words are present in the response, resulting in a score of 0.

import { WordInclusionMetric } from "./mastra/evals/example-word-inclusion";
 
const metric = new WordInclusionMetric();
 
const query = "Colombia, Brazil, Panama";
const response = "Let's go to Mexico";
 
const result = await metric.measure(query, response);
 
console.log(result);

Workflow Execution with Query in TypeScript

Source: https://mastra.ai/en/examples/rag/usage/cot-workflow-rag

Executes a configured Mastra AI workflow with a given query. It defines the prompt, creates a run, and starts the workflow, logging the results.

const query = "What are the main adaptation strategies for farmers?";

console.log("\nQuery:", query);
const prompt = `
    Please answer the following question:
    ${query}
 
    Please base your answer only on the context provided in the tool. If the context doesn't contain enough information to fully answer the question, please state that explicitly.
    `;
 
const { runId, start } = await ragWorkflow.createRunAsync();
 
console.log("Run:", runId);
 
const workflowResult = await start({
  triggerData: {
    query: prompt,
  },
});
console.log("\nThought Process:");
console.log(workflowResult.results);

Install Arize Exporter with bun

Source: https://mastra.ai/en/docs/observability/ai-tracing/exporters/arize

Installs the Arize Exporter package using bun. Bun is a fast JavaScript runtime and package manager.

bun add @mastra/arize

Install Mastra Packages (npm, yarn, pnpm, bun)

Source: https://mastra.ai/en/docs/frameworks/web-frameworks/sveltekit

Installs the necessary Mastra packages for SvelteKit integration. This includes the core Mastra library and specific adapters like @mastra/libsql. Ensure you have Node.js and a package manager installed.

npm install mastra@latest @mastra/core@latest @mastra/libsql@latest
yarn add mastra@latest @mastra/core@latest @mastra/libsql@latest
pnpm add mastra@latest @mastra/core@latest @mastra/libsql@latest
bun add mastra@latest @mastra/core@latest @mastra/libsql@latest

Configure Simple Agent with Default Strategy

Source: https://mastra.ai/en/examples/processors/message-length-limiter

Shows how to instantiate a Mastra Agent using a simplified approach for MessageLengthLimiter, relying on the default 'block' strategy. This is useful for quick setup when the default behavior is desired.

import { Agent } from "@mastra/core/agent";
import { MessageLengthLimiter } from "../processors/message-length-limiter";

export const simpleAgent = new Agent({
  name: 'simple-agent',
  instructions: 'You are a helpful assistant',
  model: "openai/gpt-4o",
  inputProcessors: [
    new MessageLengthLimiter(500),
  ],
});

Initialize Mastra Application Orchestrator

Source: https://mastra.ai/en/reference/core/mastra-class

Example of initializing the Mastra class, registering workflows, agents, storage, and logger. It demonstrates setting up the core components of a Mastra application.

import { Mastra } from '@mastra/core/mastra';
import { PinoLogger } from '@mastra/loggers';
import { LibSQLStore } from '@mastra/libsql';
import { weatherWorkflow } from './workflows/weather-workflow';
import { weatherAgent } from './agents/weather-agent';

export const mastra = new Mastra({
  workflows: { weatherWorkflow },
  agents: { weatherAgent },
  storage: new LibSQLStore({
    url: ":memory:",
  }),
  logger: new PinoLogger({
    name: 'Mastra',
    level: 'info',
  }),
});

Install AIHubMix Provider with bun

Source: https://mastra.ai/en/models/providers/aihubmix

Installs the AIHubMix AI SDK provider package using the Bun runtime. This command facilitates the integration of AIHubMix models into Mastra applications.

bun add @aihubmix/ai-sdk-provider

Example Usage: Single and Multiple Embeddings with AI SDK

Source: https://mastra.ai/en/reference/rag/embeddings

This example demonstrates how to use both the embed and embedMany functions from the AI SDK. It shows the setup for generating a single embedding and multiple embeddings using the OpenAI embedding model.

import {
 embed,
 embedMany
} from "ai";
import {
 openai
} from "@ai-sdk/openai";

// Single embedding
const singleResult = await embed({
 model: openai.embedding("text-embedding-3-small"),
 value: "What is the meaning of life?",
});

// Multiple embeddings
const multipleResult = await embedMany({
 model: openai.embedding("text-embedding-3-small"),
 values: [
  "First question about life",
  "Second question about universe",
  "Third question about everything",
 ],
});

Connect to Smithery.ai MCP Registry via CLI (Windows) - TypeScript

Source: https://mastra.ai/en/docs/tools-mcp/mcp-overview

Configures an MCPClient to interact with Smithery.ai services using their CLI on Windows. This example shows the setup for a 'sequentialThinking' tool using npx and the necessary arguments for the Smithery CLI.

// Windows
import { MCPClient } from "@mastra/mcp";

const mcp = new MCPClient({
  servers: {
    sequentialThinking: {
      command: "npx",
      args: [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@smithery-ai/server-sequential-thinking",
        "--config",
        "{}",
      ],
    },
  },
});

Custom Deployer Implementation Example (TypeScript)

Source: https://mastra.ai/en/reference/deployer/deployer

Demonstrates how to create a custom deployer by extending the abstract Deployer class. It includes implementing the abstract deploy method, preparing the output directory, and bundling the application. This example highlights the extensibility of the Deployer for specific deployment needs.

import { Deployer } from "@mastra/deployer";
 
// Create a custom deployer by extending the abstract Deployer class
class CustomDeployer extends Deployer {
  constructor() {
    super({ name: "custom-deployer" });
  }
 
  // Implement the abstract deploy method
  async deploy(outputDirectory: string): Promise<void> {
    // Prepare the output directory
    await this.prepare(outputDirectory);
 
    // Bundle the application
    await this._bundle("server.ts", "mastra.ts", outputDirectory);
 
    // Custom deployment logic
    // ...
  }
}

Install CloudflareDeployer - npm

Source: https://mastra.ai/en/docs/deployment/serverless-platforms/cloudflare-deployer

Installs the CloudflareDeployer package from npm. This is the first step to using the deployer for Cloudflare Workers.

npm install @mastra/deployer-cloudflare@latest
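
The deployer is then passed to the Mastra instance. A rough sketch with the option names hedged (`projectName` is an assumption here; consult the CloudflareDeployer reference for the exact constructor options):

```typescript
import { Mastra } from "@mastra/core/mastra";
import { CloudflareDeployer } from "@mastra/deployer-cloudflare";

export const mastra = new Mastra({
  // ..
  deployer: new CloudflareDeployer({
    projectName: "my-mastra-app", // assumed option; see the deployer docs
  }),
});
```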

Create a New Mastra Project (pnpm)

Source: https://mastra.ai/en/reference/cli/create-mastra

Initializes a new Mastra project using pnpm. This command scaffolds a complete Mastra setup in a dedicated directory. It runs interactively by default.

pnpm create mastra@latest

Install Vercel Deployer Package

Source: https://mastra.ai/en/docs/deployment/serverless-platforms/vercel-deployer

Installs the latest version of the Vercel deployer package for Mastra applications using npm.

npm install @mastra/deployer-vercel@latest

Start Mastra Dev Server (npm)

Source: https://mastra.ai/en/docs/frameworks/web-frameworks/sveltekit

Starts the Mastra Dev Server using the npm run command. This command exposes agents as REST endpoints locally.

npm run dev:mastra
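
With the dev server running, each registered agent is reachable over HTTP. A hedged sketch of calling a local agent endpoint (the route shape and payload follow the Mastra dev server's REST conventions; the agent id `weatherAgent` is illustrative):

```typescript
// POST to the local Mastra dev server (default port 4111).
const res = await fetch("http://localhost:4111/api/agents/weatherAgent/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ messages: ["What is the weather in London?"] }),
});

console.log(await res.json());
```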

Install MastraAuthAuth0 Package

Source: https://mastra.ai/en/docs/auth/auth0

Installs the @mastra/auth-auth0 package using npm. This is a prerequisite for using the MastraAuthAuth0 class in your project.

npm install @mastra/auth-auth0@latest

Create Mastra Entry Point File

Source: https://mastra.ai/en/docs/getting-started/installation

This command creates the main TypeScript file for the Mastra application. This file will be used to initialize and export the Mastra instance.

touch src/mastra/index.ts
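
The file then initializes and exports the shared Mastra instance; a minimal starting point (agents, workflows, and storage are added as they are created):

```typescript
import { Mastra } from "@mastra/core/mastra";

// Central Mastra instance; register agents, workflows, and storage here.
export const mastra = new Mastra({});
```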

CloudExporter Usage Examples

Source: https://mastra.ai/en/reference/observability/ai-tracing/exporters/cloud-exporter

Demonstrates how to import and instantiate the CloudExporter. Examples show usage with environment variables for authentication and explicit configuration for various parameters like access token, batch size, and wait time.

import { CloudExporter } from '@mastra/core/ai-tracing';

// Uses environment variable for token
const exporter = new CloudExporter();

// Explicit configuration
const customExporter = new CloudExporter({
  accessToken: 'your-token',
  maxBatchSize: 500,
  maxBatchWaitMs: 2000
});

LibSQLVector Methods: Index Management

Source: https://mastra.ai/en/reference/vectors/libsql

Provides examples for managing vector indexes using the LibSQLVector store. This includes creating new indexes, describing existing ones to get statistics, deleting indexes, and listing all available indexes.

// Create an index
await store.createIndex({
  indexName: "myCollection",
  dimension: 1536,
});

// Describe an index
const indexStats = await store.describeIndex({
  indexName: "myCollection",
});
console.log(indexStats);

// Delete an index
await store.deleteIndex({
  indexName: "myCollection",
});

// List all indexes
const indexes = await store.listIndexes();
console.log(indexes);
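
The calls above assume an existing `store` instance. A minimal construction sketch (assuming LibSQLVector is exported from @mastra/libsql and accepts a connection URL; the exact option name is an assumption):

```typescript
import { LibSQLVector } from "@mastra/libsql";

// A local file-backed vector store; an in-memory URL also works for testing.
const store = new LibSQLVector({
  connectionUrl: "file:./vector-store.db",
});
```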

Install xAI Provider Package using bun

Source: https://mastra.ai/en/models/providers/xai

This command installs the xAI provider directly as a standalone package using bun. Bun is a fast JavaScript runtime and package manager, offering an alternative for installing the xAI provider.

bun add @ai-sdk/xai

ArizeExporter Usage: Arize AX Configuration

Source: https://mastra.ai/en/reference/observability/ai-tracing/exporters/arize

Example demonstrating the configuration of ArizeExporter for Arize AX. This setup requires a spaceId, an apiKey, and the projectName.

import { ArizeExporter } from '@mastra/arize';

const exporter = new ArizeExporter({
  spaceId: process.env.ARIZE_SPACE_ID!,
  apiKey: process.env.ARIZE_API_KEY!,
  projectName: 'my-ai-project',
});

Configure Mastra AI with WhatsApp Webhook

Source: https://mastra.ai/en/examples/agents/whatsapp-chat-bot

Sets up the Mastra AI instance with chat workflows, text message and chat agents, and integrates WhatsApp webhook endpoints for both verification (GET) and message handling (POST). It uses LibSQLStore for storage and PinoLogger for logging.

import { Mastra } from "@mastra/core/mastra";
import { registerApiRoute } from "@mastra/core/server";
import { PinoLogger } from "@mastra/loggers";
import { LibSQLStore } from "@mastra/libsql";

import { chatWorkflow } from "./workflows/chat-workflow";
import { textMessageAgent } from "./agents/text-message-agent";
import { chatAgent } from "./agents/chat-agent";

export const mastra = new Mastra({
  workflows: { chatWorkflow },
  agents: { textMessageAgent, chatAgent },
  storage: new LibSQLStore({
    url: ":memory:",
  }),
  logger: new PinoLogger({
    name: "Mastra",
    level: "info",
  }),
  server: {
    apiRoutes: [
      registerApiRoute("/whatsapp", {
        method: "GET",
        handler: async (c) => {
          const verifyToken = process.env.WHATSAPP_VERIFY_TOKEN;
          const {
            "hub.mode": mode,
            "hub.challenge": challenge,
            "hub.verify_token": token,
          } = c.req.query();

          if (mode === "subscribe" && token === verifyToken) {
            return c.text(challenge, 200);
          } else {
            return c.text("Forbidden", 403);
          }
        },
      }),
      registerApiRoute("/whatsapp", {
        method: "POST",
        handler: async (c) => {
          const mastra = c.get("mastra");
          const chatWorkflow = mastra.getWorkflow("chatWorkflow");

          const body = await c.req.json();

          const workflowRun = await chatWorkflow.createRunAsync();
          const runResult = await workflowRun.start({
            inputData: { userMessage: JSON.stringify(body) },
          });

          return c.json(runResult);
        },
      }),
    ],
  },
});

Mastra Workflow Step: Analyze Context

Source: https://mastra.ai/en/examples/rag/usage/cot-workflow-rag

Defines a Mastra workflow step 'analyzeContext' that retrieves the query from the context, gets the 'ragAgent', and generates an initial analysis of the query. The output is an object containing 'initialAnalysis'.

const analyzeContext = new Step({
  id: "analyzeContext",
  outputSchema: z.object({
    initialAnalysis: z.string(),
  }),
  execute: async ({ context, mastra }) => {
    console.log("---------------------------");
    const ragAgent = mastra?.getAgent("ragAgent");
    const query = context?.getStepResult<{ query: string }>("trigger")?.query;
 
    const analysisPrompt = `${query} 1. First, carefully analyze the retrieved context chunks and identify key information.`;
 
    const analysis = await ragAgent?.generate(analysisPrompt);
    console.log(analysis?.text);
    return {
      initialAnalysis: analysis?.text ?? "",
    };
  },
});

POST /run/start

Source: https://mastra.ai/en/reference/workflows/run-methods/start

Starts a workflow run with the provided input data and optional runtime context.

## POST /run/start

### Description
Starts a workflow run with input data, allowing you to execute the workflow from the beginning.

### Method
POST

### Endpoint
/run/start

### Parameters
#### Request Body
- **inputData** (object) - Required - Input data that matches the workflow's input schema.
- **runtimeContext** (object) - Optional - Runtime context data to use during workflow execution.
- **writableStream** (WritableStream) - Optional - Optional writable stream for streaming workflow output.
- **tracingContext** (TracingContext) - Optional - AI tracing context for creating child spans and adding metadata.
- **currentSpan** (AISpan) - Optional - Current AI span for creating child spans and adding metadata.
- **tracingOptions** (TracingOptions) - Optional - Options for AI tracing configuration.
- **metadata** (Record<string, any>) - Optional - Metadata to add to the root trace span.
- **outputOptions** (OutputOptions) - Optional - Options controlling the shape of the returned workflow result.
- **includeState** (boolean) - Optional - Whether to include the workflow run state in the result.

### Request Example
```json
{
  "inputData": {
    "value": "initial data"
  },
  "runtimeContext": {
    "variable": false
  }
}
```

### Response

#### Success Response (200)
- **result** (Promise) - A promise that resolves to the workflow execution result.
- **traceId** (string) - The trace ID associated with this execution when AI tracing is enabled.
--------------------------------

### Install LangSmith Exporter with bun

Source: https://mastra.ai/en/docs/observability/ai-tracing/exporters/langsmith

Installs the LangSmith Exporter package using bun. Bun is a fast JavaScript runtime and toolkit.

```bash
bun add @mastra/langsmith
```

Basic Mastra Firebase Authentication Setup (TypeScript)

Source: https://mastra.ai/en/reference/auth/firebase

This example demonstrates the basic setup for Mastra authentication using Firebase. It automatically utilizes environment variables for Firebase service account and Firestore database ID. The Mastra server is configured with the MastraAuthFirebase instance.

import {
  Mastra,
} from "@mastra/core/mastra";
import {
  MastraAuthFirebase,
} from "@mastra/auth-firebase";
 
// Automatically uses FIREBASE_SERVICE_ACCOUNT and FIRESTORE_DATABASE_ID env vars
export const mastra = new Mastra({
  // ..
  server: {
    experimental_auth: new MastraAuthFirebase(),
  },
});

Install @mastra/mcp using npm

Source: https://mastra.ai/en/guides/guide/notes-mcp-server

Installs the @mastra/mcp package into your project. This is a Node.js package manager command.

npm install @mastra/mcp

Start Workflow Run with Runtime Context (JavaScript)

Source: https://mastra.ai/en/reference/workflows/run-methods/start

Demonstrates starting a workflow run with both input data and a custom RuntimeContext. This allows for setting and accessing variables during workflow execution. It requires importing RuntimeContext from @mastra/core/runtime-context.

import { RuntimeContext } from "@mastra/core/runtime-context";
 
const run = await workflow.createRunAsync();
 
const runtimeContext = new RuntimeContext();
runtimeContext.set("variable", false);
 
const result = await run.start({
  inputData: {
    value: "initial data"
  },
  runtimeContext
});

Free Tier User Agent Usage Example (TypeScript)

Source: https://mastra.ai/en/examples/agents/runtime-context

Demonstrates how a free tier user interacts with the 'supportAgent'. It initializes the agent, sets up a runtime context with 'free' tier and 'en' language, and calls the agent to generate a response for a specific query. Requires 'dotenv/config'.

import "dotenv/config";
import { mastra } from "../mastra";
import { RuntimeContext } from "@mastra/core/runtime-context";
import type { SupportRuntimeContext } from "../mastra/agents/support-agent";
 
const agent = mastra.getAgent("supportAgent");
const runtimeContext = new RuntimeContext<SupportRuntimeContext>();
 
runtimeContext.set("user-tier", "free");
runtimeContext.set("language", "en");
 
const response = await agent.generate(
  "I'm having trouble with API rate limits. Can you help?",
  { runtimeContext }
);
 
console.log(response.text);

High Example: Mastra Agent Response Validation (TypeScript)

Source: https://mastra.ai/en/examples/processors/response-validator

Demonstrates a successful response validation scenario using a Mastra agent configured with ResponseValidator. The agent's prompt is designed to elicit keywords that the validator requires, resulting in a passed validation. This example includes the agent setup and the generation call.

import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { ResponseValidator } from "./mastra/processors/response-validator";

// Create agent that requires AI-related keywords
export const agent = new Agent({
  name: 'validated-agent',
  instructions: 'You are an AI expert. Always mention artificial intelligence and machine learning when discussing AI topics.',
  model: openai("gpt-4o"),
  outputProcessors: [
    new ResponseValidator(['artificial intelligence', 'machine learning']),
  ],
});

const result = await agent.generate("Explain how AI systems learn from data.");
console.log("✅ Response passed validation:");
console.log(result.text);

Pro Tier User Agent Usage Example (TypeScript)

Source: https://mastra.ai/en/examples/agents/runtime-context

Illustrates how a pro tier user uses the 'supportAgent'. It sets up a runtime context with 'pro' tier and 'es' language, then prompts the agent for detailed analytics and optimization recommendations. Requires 'dotenv/config'.

import "dotenv/config";
import { mastra } from "../mastra";
import { RuntimeContext } from "@mastra/core/runtime-context";
import type { SupportRuntimeContext } from "../mastra/agents/support-agent";
 
const agent = mastra.getAgent("supportAgent");
const runtimeContext = new RuntimeContext<SupportRuntimeContext>();
 
runtimeContext.set("user-tier", "pro");
runtimeContext.set("language", "es");
 
const response = await agent.generate(
  "I need detailed analytics on my API usage patterns and optimization recommendations.",
  { runtimeContext }
);
 
console.log(response.text);

Get SSE Transport Object

Source: https://mastra.ai/en/reference/tools/mcp-server

Retrieves the SSEServerTransport object if the server was started with startSSE(). This is mainly for internal checks or testing and returns an object or undefined.

getSseTransport(): SSEServerTransport | undefined
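
A short usage sketch based on the signature above (the `server` variable stands for an MCPServer instance that has already had startSSE() called on it):

```typescript
// Returns undefined unless the server was started via startSSE().
const transport = server.getSseTransport();

if (!transport) {
  console.warn("No SSE transport - was the server started with startSSE()?");
}
```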

Configure package.json for Mastra MCPServer Build

Source: https://mastra.ai/en/examples/agents/deploying-mcp-server

Configures the 'bin' entry in package.json to point to the compiled server file and defines a 'build:mcp' script using tsup. This script compiles the TypeScript server file, generates type definitions, and makes the output executable.

{
  "bin": "dist/stdio.js",
  "scripts": {
    "build:mcp": "tsup src/mastra/stdio.ts --format esm --no-splitting --dts && chmod +x dist/stdio.js"
  }
}

Run Mastra Legacy Workflow Example (TypeScript)

Source: https://mastra.ai/en/reference/legacyWorkflows/events

This TypeScript code demonstrates how to execute the previously defined Mastra AI 'document-request-workflow'. It shows how to retrieve the workflow, start a new run, and then resume the workflow with simulated 'approvalReceived' and 'documentUploaded' events.

import { requestWorkflow } from "./workflows";
import { mastra } from "./mastra";

async function runWorkflow() {
  // Get the workflow
  const workflow = mastra.legacy_getWorkflow("document-request-workflow");
  const run = workflow.createRun();

  // Start the workflow
  const initialResult = await run.start();
  console.log("Workflow started:", initialResult.results);

  // Simulate receiving approval
  const afterApprovalResult = await run.resumeWithEvent("approvalReceived", {
    approved: true,
    approverName: "Jane Smith",
  });
  console.log("After approval:", afterApprovalResult.results);

  // Simulate document upload
  const finalResult = await run.resumeWithEvent("documentUploaded", {
    documentId: "doc-456",
    documentType: "invoice",
  });
  console.log("Final result:", finalResult.results);
}

runWorkflow().catch(console.error);

Install OpenAI SDK using bun

Source: https://mastra.ai/en/models/providers/openai

This snippet shows how to install the OpenAI SDK as a standalone package using bun. This package can be used directly, bypassing the Mastra model router for specific use cases. No other dependencies are needed for this installation command.

bun add @ai-sdk/openai
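
For direct use outside the Mastra model router, the provider can be passed to the AI SDK's `generateText`. A brief sketch (the model id is illustrative):

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Call the model directly through the AI SDK, bypassing Mastra's router.
const { text } = await generateText({
  model: openai("gpt-4o"),
  prompt: "Summarize the benefits of provider packages in one sentence.",
});

console.log(text);
```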

Install Mastra Inngest Packages

Source: https://mastra.ai/en/docs/workflows/inngest-workflow

Installs the necessary Mastra packages for Inngest integration via npm.

npm install @mastra/inngest @mastra/core @mastra/deployer

Install Groq Provider using bun

Source: https://mastra.ai/en/models/providers/groq

This command installs the @ai-sdk/groq package using bun, allowing direct integration with Groq models as a standalone provider. This method is an alternative to using the Mastra model router string.

bun add @ai-sdk/groq

# Mastra
> Mastra is an open-source TypeScript agent framework designed to provide the essential primitives for building AI applications. It enables developers to create AI agents with memory and tool-calling capabilities, implement deterministic LLM workflows, and leverage RAG for knowledge integration. With features like model routing, workflow graphs, and automated evals, Mastra provides a complete toolkit for developing, testing, and deploying AI applications.
This documentation covers everything from getting started to advanced features, APIs, and best practices for working with Mastra's agent-based architecture.
The documentation is organized into key sections:
- **docs**: Core documentation covering concepts, features, and implementation details
- **examples**: Practical examples and use cases demonstrating Mastra's capabilities
- **showcase**: A showcase of applications built using Mastra
Each section contains detailed markdown files that provide comprehensive information about Mastra's features and how to use them effectively.
## EN - docs
- [Adding Voice to Agents | Agents](https://mastra.ai/docs/agents/adding-voice)
- [Agent Memory | Agents](https://mastra.ai/docs/agents/agent-memory): Learn how to add memory to agents to store conversation history and maintain context across interactions.
- [Guardrails | Agents](https://mastra.ai/docs/agents/guardrails): Learn how to implement guardrails using input and output processors to secure and control AI interactions.
- [Agent Networks | Agents](https://mastra.ai/docs/agents/networks): Learn how to coordinate multiple agents, workflows, and tools using agent networks for complex, non-deterministic task execution.
- [Using Agents | Agents](https://mastra.ai/docs/agents/overview): Overview of agents in Mastra, detailing their capabilities and how they interact with tools, workflows, and external systems.
- [Using Tools | Agents](https://mastra.ai/docs/agents/using-tools): Learn how to create tools and add them to agents to extend capabilities beyond text generation.
- [MastraAuthAuth0 Class | Auth](https://mastra.ai/docs/auth/auth0): Documentation for the MastraAuthAuth0 class, which authenticates Mastra applications using Auth0 authentication.
- [MastraAuthClerk Class | Auth](https://mastra.ai/docs/auth/clerk): Documentation for the MastraAuthClerk class, which authenticates Mastra applications using Clerk authentication.
- [MastraAuthFirebase Class | Auth](https://mastra.ai/docs/auth/firebase): Documentation for the MastraAuthFirebase class, which authenticates Mastra applications using Firebase Authentication.
- [Auth Overview | Auth](https://mastra.ai/docs/auth): Learn about different Auth options for your Mastra applications
- [MastraJwtAuth Class | Auth](https://mastra.ai/docs/auth/jwt): Documentation for the MastraJwtAuth class, which authenticates Mastra applications using JSON Web Tokens.
- [MastraAuthSupabase Class | Auth](https://mastra.ai/docs/auth/supabase): Documentation for the MastraAuthSupabase class, which authenticates Mastra applications using Supabase Auth.
- [MastraAuthWorkos Class | Auth](https://mastra.ai/docs/auth/workos): Documentation for the MastraAuthWorkos class, which authenticates Mastra applications using WorkOS authentication.
- [Contributing Templates | Community](https://mastra.ai/docs/community/contributing-templates): How to contribute your own templates to the Mastra ecosystem
- [Discord Community | Community](https://mastra.ai/docs/community/discord): Information about the Mastra Discord community and MCP bot.
- [License | Community](https://mastra.ai/docs/community/licensing): Mastra License
- [Building Mastra | Deployment](https://mastra.ai/docs/deployment/building-mastra): Learn how to build a Mastra server with build settings and deployment options.
- [Amazon EC2 | Deployment](https://mastra.ai/docs/deployment/cloud-providers/amazon-ec2): Deploy your Mastra applications to Amazon EC2.
- [AWS Lambda | Deployment](https://mastra.ai/docs/deployment/cloud-providers/aws-lambda): Deploy your Mastra applications to AWS Lambda using Docker containers and the AWS Lambda Web Adapter.
- [Azure App Services | Deployment](https://mastra.ai/docs/deployment/cloud-providers/azure-app-services): Deploy your Mastra applications to Azure App Services.
- [CloudflareDeployer | Deployment](https://mastra.ai/docs/deployment/cloud-providers/cloudflare-deployer): Learn how to deploy a Mastra application to Cloudflare using the Mastra CloudflareDeployer
- [Digital Ocean | Deployment](https://mastra.ai/docs/deployment/cloud-providers/digital-ocean): Deploy your Mastra applications to Digital Ocean.
- [Cloud Providers | Deployment](https://mastra.ai/docs/deployment/cloud-providers): Deploy your Mastra applications to popular cloud providers.
- [NetlifyDeployer | Deployment](https://mastra.ai/docs/deployment/cloud-providers/netlify-deployer): Learn how to deploy a Mastra application to Netlify using the Mastra NetlifyDeployer
- [VercelDeployer | Deployment](https://mastra.ai/docs/deployment/cloud-providers/vercel-deployer): Learn how to deploy a Mastra application to Vercel using the Mastra VercelDeployer
- [Navigating the Dashboard | Mastra Cloud](https://mastra.ai/docs/deployment/mastra-cloud/dashboard): Details of each feature available in Mastra Cloud
- [Understanding Tracing and Logs | Mastra Cloud](https://mastra.ai/docs/deployment/mastra-cloud/observability): Monitoring and debugging tools for Mastra Cloud deployments
- [Mastra Cloud | Mastra Cloud](https://mastra.ai/docs/deployment/mastra-cloud/overview): Deployment and monitoring service for Mastra applications
- [Setting Up and Deploying | Mastra Cloud](https://mastra.ai/docs/deployment/mastra-cloud/setting-up): Configuration steps for Mastra Cloud projects
- [Monorepo Deployment | Deployment](https://mastra.ai/docs/deployment/monorepo): Learn how to deploy Mastra applications that are part of a monorepo setup
- [Deployment Overview | Deployment](https://mastra.ai/docs/deployment/overview): Learn about different deployment options for your Mastra applications
- [Web Framework Integration | Deployment](https://mastra.ai/docs/deployment/web-framework): Learn how Mastra can be deployed when integrated with a Web Framework
- [Using Vercel AI SDK | Frameworks](https://mastra.ai/docs/frameworks/agentic-uis/ai-sdk): Learn how Mastra leverages the Vercel AI SDK library and how you can leverage it further with Mastra
- [Using with Assistant UI | Frameworks](https://mastra.ai/docs/frameworks/agentic-uis/assistant-ui): Learn how to integrate Assistant UI with Mastra
- [Integrate Cedar-OS with Mastra | Frameworks](https://mastra.ai/docs/frameworks/agentic-uis/cedar-os): Build AI-native frontends for your Mastra agents with Cedar-OS
- [Integrate CopilotKit with Mastra | Frameworks](https://mastra.ai/docs/frameworks/agentic-uis/copilotkit): Learn how Mastra leverages CopilotKit's AGUI library and how you can leverage it to build user experiences
- [Use OpenRouter with Mastra | Frameworks](https://mastra.ai/docs/frameworks/agentic-uis/openrouter): Learn how to integrate OpenRouter with Mastra
- [Integrate Mastra in your Express project | Frameworks](https://mastra.ai/docs/frameworks/servers/express): A step-by-step guide to integrating Mastra with an Express backend.
- [Integrate Mastra in your Astro project | Frameworks](https://mastra.ai/docs/frameworks/web-frameworks/astro): A step-by-step guide to integrating Mastra with Astro.
- [Integrate Mastra in your Next.js project | Frameworks](https://mastra.ai/docs/frameworks/web-frameworks/next-js): A step-by-step guide to integrating Mastra with Next.js.
- [Integrate Mastra in your SvelteKit project | Frameworks](https://mastra.ai/docs/frameworks/web-frameworks/sveltekit): A step-by-step guide to integrating Mastra with SvelteKit.
- [Integrate Mastra in your Vite/React project | Frameworks](https://mastra.ai/docs/frameworks/web-frameworks/vite-react): A step-by-step guide to integrating Mastra with Vite and React.
- [Install Mastra | Getting Started](https://mastra.ai/docs/getting-started/installation): Guide on installing Mastra and setting up the necessary prerequisites for running it with various LLM providers.
- [Mastra Docs Server | Getting Started](https://mastra.ai/docs/getting-started/mcp-docs-server): Learn how to use the Mastra MCP documentation server in your IDE to turn it into an agentic Mastra expert.
- [Project Structure | Getting Started](https://mastra.ai/docs/getting-started/project-structure): Guide on organizing folders and files in Mastra, including best practices and recommended structures.
- [Studio | Getting Started](https://mastra.ai/docs/getting-started/studio): Get started with Mastra Studio, a local UI and API to build, test, debug, and inspect agents and workflows in real time.
- [Templates | Getting Started](https://mastra.ai/docs/getting-started/templates): Pre-built project structures that demonstrate common Mastra use cases and patterns
- [About Mastra](https://mastra.ai/docs): Mastra is an all-in-one framework for building AI-powered applications and agents with a modern TypeScript stack.
- [MCP Overview | MCP](https://mastra.ai/docs/mcp/overview): Learn about the Model Context Protocol (MCP), how to use third-party tools via MCPClient, connect to registries, and share your own tools using MCPServer.
- [Publishing an MCP Server | MCP](https://mastra.ai/docs/mcp/publishing-mcp-server): Guide to setting up and building a Mastra MCP server using the stdio transport, and publishing it to NPM.
- [Conversation History | Memory](https://mastra.ai/docs/memory/conversation-history): Learn how to configure conversation history in Mastra to store recent messages from the current conversation.
- [Memory Processors | Memory](https://mastra.ai/docs/memory/memory-processors): Learn how to use memory processors in Mastra to filter, trim, and transform messages before they're sent to the language model to manage context window limits.
- [Memory overview | Memory](https://mastra.ai/docs/memory/overview): Learn how Mastra's memory system works with working memory, conversation history, and semantic recall.
- [Semantic Recall | Memory](https://mastra.ai/docs/memory/semantic-recall): Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
- [Memory with LibSQL | Memory](https://mastra.ai/docs/memory/storage/memory-with-libsql): Example for how to use Mastra's memory system with LibSQL storage and vector database backend.
- [Example: Memory with MongoDB | Memory](https://mastra.ai/docs/memory/storage/memory-with-mongodb): Example for how to use Mastra's memory system with MongoDB storage and vector capabilities.
- [Memory with Postgres | Memory](https://mastra.ai/docs/memory/storage/memory-with-pg): Example for how to use Mastra's memory system with PostgreSQL storage and vector capabilities.
- [Memory with Upstash | Memory](https://mastra.ai/docs/memory/storage/memory-with-upstash): Example for how to use Mastra's memory system with Upstash Redis storage and vector capabilities.
- [Memory threads and resources | Memory](https://mastra.ai/docs/memory/threads-and-resources): Use memory threads and resources in Mastra to control conversation scope, persistence, and recall behavior.
- [Working Memory | Memory](https://mastra.ai/docs/memory/working-memory): Learn how to configure working memory in Mastra to store persistent user data, preferences.
- [Arize Exporter | AI Tracing | Observability](https://mastra.ai/docs/observability/ai-tracing/exporters/arize): Send AI traces to Arize Phoenix or Arize AX using OpenTelemetry and OpenInference
- [Braintrust Exporter | AI Tracing | Observability](https://mastra.ai/docs/observability/ai-tracing/exporters/braintrust): Send AI traces to Braintrust for evaluation and monitoring
- [Cloud Exporter | AI Tracing | Observability](https://mastra.ai/docs/observability/ai-tracing/exporters/cloud): Send traces to Mastra Cloud for production monitoring
- [Default Exporter | AI Tracing | Observability](https://mastra.ai/docs/observability/ai-tracing/exporters/default): Store traces locally for development and debugging
- [Langfuse Exporter | AI Tracing | Observability](https://mastra.ai/docs/observability/ai-tracing/exporters/langfuse): Send AI traces to Langfuse for LLM observability and analytics
- [LangSmith Exporter | AI Tracing | Observability](https://mastra.ai/docs/observability/ai-tracing/exporters/langsmith): Send AI traces to LangSmith for LLM observability and evaluation
- [OpenTelemetry Exporter | AI Tracing | Observability](https://mastra.ai/docs/observability/ai-tracing/exporters/otel): Send AI traces to any OpenTelemetry-compatible observability platform
- [AI Tracing | Observability](https://mastra.ai/docs/observability/ai-tracing/overview): Set up AI tracing for Mastra applications
- [Sensitive Data Filter | Processors | Observability](https://mastra.ai/docs/observability/ai-tracing/processors/sensitive-data-filter): Protect sensitive information in your AI traces with automatic data redaction
- [Logging | Observability](https://mastra.ai/docs/observability/logging): Learn how to use logging in Mastra to monitor execution, capture application behavior, and improve the accuracy of AI applications.
- [Next.js Tracing | Observability](https://mastra.ai/docs/observability/nextjs-tracing): Set up OpenTelemetry tracing for Next.js applications
- [OTEL Tracing (Deprecated) | Observability](https://mastra.ai/docs/observability/otel-tracing): Set up OpenTelemetry tracing for Mastra applications
- [Observability Overview | Observability](https://mastra.ai/docs/observability/overview): Monitor and debug applications with Mastra's Observability features.
- [Chunking and Embedding Documents | RAG | Mastra Docs](https://mastra.ai/docs/rag/chunking-and-embedding): Guide on chunking and embedding documents in Mastra for efficient processing and retrieval.
- [RAG (Retrieval-Augmented Generation) in Mastra | RAG](https://mastra.ai/docs/rag/overview): Overview of Retrieval-Augmented Generation (RAG) in Mastra, detailing its capabilities for enhancing LLM outputs with relevant context.
- [Retrieval, Semantic Search, Reranking | RAG](https://mastra.ai/docs/rag/retrieval): Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking.
- [Storing Embeddings in A Vector Database | RAG](https://mastra.ai/docs/rag/vector-databases): Guide on vector storage options in Mastra, including embedded and dedicated vector databases for similarity search.
- [Built-in Scorers | Scorers](https://mastra.ai/docs/scorers/built-in-scorers): Overview of Mastra's ready-to-use scorers for evaluating AI outputs across quality, safety, and performance dimensions.
- [Custom Scorers | Scorers](https://mastra.ai/docs/scorers/custom-scorers)
- [Create a Custom Eval | Scorers](https://mastra.ai/docs/scorers/evals-legacy/custom-eval): Mastra allows you to create your own evals; here is how.
- [Evals Overview | Evals](https://mastra.ai/docs/scorers/evals-legacy/overview): Overview of evals in Mastra, detailing their capabilities for evaluating AI outputs and measuring performance.
- [Running Evals in CI | Scorers](https://mastra.ai/docs/scorers/evals-legacy/running-in-ci): Learn how to run Mastra evals in your CI/CD pipeline to monitor agent quality over time.
- [Textual Evals | Scorers](https://mastra.ai/docs/scorers/evals-legacy/textual-evals): Understand how Mastra uses LLM-as-judge methodology to evaluate text quality.
- [Scorers Overview | Scorers](https://mastra.ai/docs/scorers/overview): Overview of scorers in Mastra, detailing their capabilities for evaluating AI outputs and measuring performance.
- [Custom API Routes | Server & DB](https://mastra.ai/docs/server-db/custom-api-routes): Expose additional HTTP endpoints from your Mastra server.
- [Mastra Client SDK | Server & DB](https://mastra.ai/docs/server-db/mastra-client): Learn how to set up and use the Mastra Client SDK
- [Mastra Server | Server & DB](https://mastra.ai/docs/server-db/mastra-server): Learn how to configure and deploy a production-ready Mastra server with custom settings for APIs, CORS, and more
- [Middleware | Server & DB](https://mastra.ai/docs/server-db/middleware): Apply custom middleware functions to intercept requests.
- [Runtime Context | Server & DB](https://mastra.ai/docs/server-db/runtime-context): Learn how to use Mastra's RuntimeContext to provide dynamic, request-specific configuration to agents.
- [MastraStorage | Server & DB](https://mastra.ai/docs/server-db/storage): Overview of Mastra's storage system and data persistence capabilities.
- [Streaming Events | Streaming](https://mastra.ai/docs/streaming/events): Learn about the different types of streaming events in Mastra, including text deltas, tool calls, step events, and how to handle them in your applications.
- [Streaming Overview | Streaming](https://mastra.ai/docs/streaming/overview): Streaming in Mastra enables real-time, incremental responses from both agents and workflows, providing immediate feedback as AI-generated content is produced.
- [Tool streaming | Streaming](https://mastra.ai/docs/streaming/tool-streaming): Learn how to use tool streaming in Mastra, including handling tool calls, tool results, and tool execution events during streaming.
- [Workflow streaming | Streaming](https://mastra.ai/docs/streaming/workflow-streaming): Learn how to use workflow streaming in Mastra, including handling workflow execution events, step streaming, and workflow integration with agents and tools.
- [Voice in Mastra | Voice](https://mastra.ai/docs/voice/overview): Overview of voice capabilities in Mastra, including text-to-speech, speech-to-text, and real-time speech-to-speech interactions.
- [Speech-to-Speech Capabilities in Mastra | Voice](https://mastra.ai/docs/voice/speech-to-speech): Overview of speech-to-speech capabilities in Mastra, including real-time interactions and event-driven architecture.
- [Speech-to-Text (STT) | Voice](https://mastra.ai/docs/voice/speech-to-text): Overview of Speech-to-Text capabilities in Mastra, including configuration, usage, and integration with voice providers.
- [Text-to-Speech (TTS) | Voice](https://mastra.ai/docs/voice/text-to-speech): Overview of Text-to-Speech capabilities in Mastra, including configuration, usage, and integration with voice providers.
- [Agents and Tools | Workflows](https://mastra.ai/docs/workflows/agents-and-tools): Learn how to call agents and tools from workflow steps and choose between execute functions and step composition.
- [Control Flow | Workflows](https://mastra.ai/docs/workflows/control-flow): Control flow in Mastra workflows allows you to manage branching, merging, and conditions to construct workflows that meet your logic requirements.
- [Error Handling | Workflows](https://mastra.ai/docs/workflows/error-handling): Learn how to handle errors in Mastra workflows using step retries, conditional branching, and monitoring.
- [Human-in-the-loop (HITL) | Workflows](https://mastra.ai/docs/workflows/human-in-the-loop): Human-in-the-loop workflows in Mastra allow you to pause execution for manual approvals, reviews, or user input before continuing.
- [Inngest Workflow | Workflows](https://mastra.ai/docs/workflows/inngest-workflow): Inngest workflow allows you to run Mastra workflows with Inngest
- [Workflows overview | Workflows](https://mastra.ai/docs/workflows/overview): Workflows in Mastra help you orchestrate complex sequences of tasks with features like branching, parallel execution, resource suspension, and more.
- [Snapshots | Workflows](https://mastra.ai/docs/workflows/snapshots): Learn how to save and resume workflow execution state with snapshots in Mastra
- [Suspend & Resume | Workflows](https://mastra.ai/docs/workflows/suspend-and-resume): Suspend and resume in Mastra workflows allows you to pause execution while waiting for external input or resources.
- [Time Travel | Workflows](https://mastra.ai/docs/workflows/time-travel): Re-execute workflow steps from a specific point using time travel debugging in Mastra
- [Workflow state | Workflows](https://mastra.ai/docs/workflows/workflow-state): Share values across workflow steps using global state that persists through the entire workflow run.
- [Control Flow in Legacy Workflows: Branching, Merging, and Conditions | Workflows (Legacy)](https://mastra.ai/docs/workflows-legacy/control-flow): Control flow in Mastra legacy workflows allows you to manage branching, merging, and conditions to construct legacy workflows that meet your logic requirements.
- [Dynamic Workflows (Legacy) | Workflows (Legacy)](https://mastra.ai/docs/workflows-legacy/dynamic-workflows): Learn how to create dynamic workflows within legacy workflow steps, allowing for flexible workflow creation based on runtime conditions.
- [Error Handling in Workflows (Legacy) | Workflows (Legacy)](https://mastra.ai/docs/workflows-legacy/error-handling): Learn how to handle errors in Mastra legacy workflows using step retries, conditional branching, and monitoring.
- [Nested Workflows (Legacy) | Workflows (Legacy)](https://mastra.ai/docs/workflows-legacy/nested-workflows)
- [Handling Complex LLM Operations with Workflows (Legacy) | Workflows (Legacy)](https://mastra.ai/docs/workflows-legacy/overview): Workflows in Mastra help you orchestrate complex sequences of operations with features like branching, parallel execution, resource suspension, and more.
- [Workflow Runtime Variables (Legacy) | Workflows (Legacy)](https://mastra.ai/docs/workflows-legacy/runtime-variables): Learn how to use Mastra's dependency injection system to provide runtime configuration to workflows and steps.
- [Defining Steps in a Workflow (Legacy) | Workflows (Legacy)](https://mastra.ai/docs/workflows-legacy/steps): Steps in Mastra workflows provide a structured way to manage operations by defining inputs, outputs, and execution logic.
- [Suspend and Resume in Workflows (Legacy) | Workflows (Legacy)](https://mastra.ai/docs/workflows-legacy/suspend-and-resume): Suspend and resume in Mastra workflows (Legacy) allows you to pause execution while waiting for external input or resources.
- [Data Mapping with Workflow Variables | Workflows (Legacy)](https://mastra.ai/docs/workflows-legacy/variables): Learn how to use workflow variables to map data between steps and create dynamic data flows in your Mastra workflows.
## EN - examples
- [Example: AI SDK v5 Integration | Agents](https://mastra.ai/examples/agents/ai-sdk-v5-integration): Example of integrating Mastra agents with AI SDK v5 for streaming chat interfaces with memory and tool integration.
- [Example: Calling Agents | Agents](https://mastra.ai/examples/agents/calling-agents): Example for how to call agents.
- [Example: Image Analysis | Agents](https://mastra.ai/examples/agents/image-analysis): Example of using a Mastra AI Agent to analyze images from Unsplash to identify objects, determine species, and describe locations.
- [Example: Runtime Context | Agents](https://mastra.ai/examples/agents/runtime-context): Learn how to create and configure dynamic agents using runtime context to adapt behavior based on user subscription tiers.
- [Example: Supervisor Agent | Agents](https://mastra.ai/examples/agents/supervisor-agent): Example of creating a supervisor agent using Mastra, where agents interact through tool functions.
- [Example: Changing the System Prompt | Agents](https://mastra.ai/examples/agents/system-prompt): Example of creating an AI agent in Mastra with a system prompt to define its personality and capabilities.
- [Example: WhatsApp Chat Bot | Agents](https://mastra.ai/examples/agents/whatsapp-chat-bot): Example of creating a WhatsApp chat bot using Mastra agents and workflows to handle incoming messages and respond naturally via text messages.
- [Example: Answer Relevancy Evaluation | Evals](https://mastra.ai/examples/evals/answer-relevancy): Example of using the Answer Relevancy metric to evaluate response relevancy to queries.
- [Example: Bias Evaluation | Evals](https://mastra.ai/examples/evals/bias): Example of using the Bias metric to evaluate responses for various forms of bias.
- [Example: Completeness Evaluation | Evals](https://mastra.ai/examples/evals/completeness): Example of using the Completeness metric to evaluate how thoroughly responses cover input elements.
- [Example: Content Similarity Evaluation | Evals](https://mastra.ai/examples/evals/content-similarity): Example of using the Content Similarity metric to evaluate text similarity between content.
- [Example: Context Position Evaluation | Evals](https://mastra.ai/examples/evals/context-position): Example of using the Context Position metric to evaluate sequential ordering in responses.
- [Example: Context Precision Evaluation | Evals](https://mastra.ai/examples/evals/context-precision): Example of using the Context Precision metric to evaluate how precisely context information is used.
- [Example: Context Relevancy Evaluation | Evals](https://mastra.ai/examples/evals/context-relevancy): Example of using the Context Relevancy metric to evaluate how relevant context information is to a query.
- [Example: Contextual Recall Evaluation | Evals](https://mastra.ai/examples/evals/contextual-recall): Example of using the Contextual Recall metric to evaluate how well responses incorporate context information.
- [Example: LLM as a Judge Evaluation | Evals](https://mastra.ai/examples/evals/custom-llm-judge-eval): Example of creating a custom LLM-based evaluation metric.
- [Example: Custom Native JavaScript Evaluation | Evals](https://mastra.ai/examples/evals/custom-native-javascript-eval): Example of creating a custom native JavaScript evaluation metric.
- [Example: Faithfulness Evaluation | Evals](https://mastra.ai/examples/evals/faithfulness): Example of using the Faithfulness metric to evaluate how factually accurate responses are compared to context.
- [Example: Hallucination Evaluation | Evals](https://mastra.ai/examples/evals/hallucination): Example of using the Hallucination metric to evaluate factual contradictions in responses.
- [Example: Keyword Coverage Evaluation | Evals](https://mastra.ai/examples/evals/keyword-coverage): Example of using the Keyword Coverage metric to evaluate how well responses cover important keywords from input text.
- [Example: Prompt Alignment Evaluation | Evals](https://mastra.ai/examples/evals/prompt-alignment): Example of using the Prompt Alignment metric to evaluate instruction adherence in responses.
- [Example: Summarization Evaluation | Evals](https://mastra.ai/examples/evals/summarization): Example of using the Summarization metric to evaluate how well LLM-generated summaries capture content while maintaining factual accuracy.
- [Example: Textual Difference Evaluation | Evals](https://mastra.ai/examples/evals/textual-difference): Example of using the Textual Difference metric to evaluate similarity between text strings by analyzing sequence differences and changes.
- [Example: Tone Consistency Evaluation | Evals](https://mastra.ai/examples/evals/tone-consistency): Example of using the Tone Consistency metric to evaluate emotional tone patterns and sentiment consistency in text.
- [Example: Toxicity Evaluation | Evals](https://mastra.ai/examples/evals/toxicity): Example of using the Toxicity metric to evaluate responses for harmful content and toxic language.
- [Examples List: Workflows, Agents, RAG](https://mastra.ai/examples): Explore practical examples of AI development with Mastra, including text generation, RAG implementations, structured outputs, and multi-modal interactions. Learn how to build AI applications using OpenAI, Anthropic, and Google Gemini.
- [Example: Working Memory with Schema | Memory](https://mastra.ai/examples/memory/working-memory-schema): Example showing how to use Zod schema to structure and validate working memory data.
- [Example: Working Memory with Template | Memory](https://mastra.ai/examples/memory/working-memory-template): Example showing how to use Markdown template to structure working memory data.
- [Example: Basic AI Tracing Example | Observability](https://mastra.ai/examples/observability/basic-ai-tracing): Get started with AI tracing in your Mastra application
- [Example: Message Length Limiter | Processors](https://mastra.ai/examples/processors/message-length-limiter): Example of creating a custom input processor that limits message length before sending to the language model.
- [Example: Response Length Limiter | Processors](https://mastra.ai/examples/processors/response-length-limiter): Example of creating a custom output processor that limits AI response length during streaming to prevent excessively long outputs.
- [Example: Response Validator | Processors](https://mastra.ai/examples/processors/response-validator): Example of creating a custom output processor that validates AI responses contain required keywords before returning them to users.
- [Example: Adjust Chunk Delimiters | RAG](https://mastra.ai/examples/rag/chunking/adjust-chunk-delimiters): Adjust chunk delimiters in Mastra to better match your content structure.
- [Example: Adjust Chunk Size | RAG](https://mastra.ai/examples/rag/chunking/adjust-chunk-size): Adjust chunk size in Mastra to better match your content and memory requirements.
- [Example: Semantically Chunking HTML | RAG](https://mastra.ai/examples/rag/chunking/chunk-html): Chunk HTML content in Mastra to semantically chunk the document.
- [Example: Semantically Chunking JSON | RAG](https://mastra.ai/examples/rag/chunking/chunk-json): Chunk JSON data in Mastra to semantically chunk the document.
- [Example: Chunk Markdown | RAG](https://mastra.ai/examples/rag/chunking/chunk-markdown): Example of using Mastra to chunk markdown documents for search or retrieval purposes.
- [Example: Chunk Text | RAG](https://mastra.ai/examples/rag/chunking/chunk-text): Example of using Mastra to split large text documents into smaller chunks for processing.
- [Example: Embed Chunk Array | RAG](https://mastra.ai/examples/rag/embedding/embed-chunk-array): Example of using Mastra to generate embeddings for an array of text chunks for similarity search.
- [Example: Embed Text Chunk | RAG](https://mastra.ai/examples/rag/embedding/embed-text-chunk): Example of using Mastra to generate an embedding for a single text chunk for similarity search.
- [Example: Embed Text with Cohere | RAG](https://mastra.ai/examples/rag/embedding/embed-text-with-cohere): Example of using Mastra to generate embeddings using Cohere's embedding model.
- [Example: Metadata Extraction | RAG](https://mastra.ai/examples/rag/embedding/metadata-extraction): Example of extracting and utilizing metadata from documents in Mastra for enhanced document processing and retrieval.
- [Example: Hybrid Vector Search | RAG](https://mastra.ai/examples/rag/query/hybrid-vector-search): Example of using metadata filters with PGVector to enhance vector search results in Mastra.
- [Example: Retrieving Top-K Results | RAG](https://mastra.ai/examples/rag/query/retrieve-results): Example of using Mastra to query a vector database and retrieve semantically similar chunks.
- [Example: Re-ranking Results with Tools | RAG](https://mastra.ai/examples/rag/rerank/rerank-rag): Example of implementing a RAG system with re-ranking in Mastra using OpenAI embeddings and PGVector for vector storage.
- [Example: Re-ranking Results | RAG](https://mastra.ai/examples/rag/rerank/rerank): Example of implementing semantic re-ranking in Mastra using OpenAI embeddings and PGVector for vector storage.
- [Example: Reranking with Cohere | RAG](https://mastra.ai/examples/rag/rerank/reranking-with-cohere): Example of using Mastra to improve document retrieval relevance with Cohere's reranking service.
- [Example: Reranking with ZeroEntropy | RAG](https://mastra.ai/examples/rag/rerank/reranking-with-zeroentropy): Example of using Mastra to improve document retrieval relevance with ZeroEntropy's reranking service.
- [Example: Upsert Embeddings | RAG](https://mastra.ai/examples/rag/upsert/upsert-embeddings): Examples of using Mastra to store embeddings in various vector databases for similarity search.
- [Example: Using the Vector Query Tool | RAG](https://mastra.ai/examples/rag/usage/basic-rag): Example of implementing a basic RAG system in Mastra using OpenAI embeddings and PGVector for vector storage (a minimal chunk, embed, and query sketch follows this list).
- [Example: Optimizing Information Density | RAG](https://mastra.ai/examples/rag/usage/cleanup-rag): Example of implementing a RAG system in Mastra to optimize information density and deduplicate data using LLM-based processing.
- [Example: Chain of Thought Prompting | RAG](https://mastra.ai/examples/rag/usage/cot-rag): Example of implementing a RAG system in Mastra with chain-of-thought reasoning using OpenAI and PGVector.
- [Example: Structured Reasoning with Workflows | RAG](https://mastra.ai/examples/rag/usage/cot-workflow-rag): Example of implementing structured reasoning in a RAG system using Mastra's workflow capabilities.
- [Example: Database-Specific Configurations | RAG](https://mastra.ai/examples/rag/usage/database-specific-config): Learn how to use database-specific configurations to optimize vector search performance and leverage unique features of different vector stores.
- [Example: Agent-Driven Metadata Filtering | RAG](https://mastra.ai/examples/rag/usage/filter-rag): Example of using a Mastra agent in a RAG system to construct and apply metadata filters for document retrieval.
- [Example: Graph RAG | RAG](https://mastra.ai/examples/rag/usage/graph-rag): Example of implementing a Graph RAG system in Mastra using OpenAI embeddings and PGVector for vector storage.
- [Example: Call Analysis with Mastra | Voice](https://mastra.ai/examples/voice/speech-to-speech): Example of using Mastra to create a speech-to-speech application.
- [Example: Smart Voice Memo App | Voice](https://mastra.ai/examples/voice/speech-to-text): Example of using Mastra to create a speech-to-text application.
- [Example: Interactive Story Generator | Voice](https://mastra.ai/examples/voice/text-to-speech): Example of using Mastra to create a text-to-speech application.
- [Example: AI Debate with Turn Taking | Voice](https://mastra.ai/examples/voice/turn-taking): Example of using Mastra to create a multi-agent debate with turn-taking conversation flow.
- [Example: Inngest Workflow | Workflows](https://mastra.ai/examples/workflows/inngest-workflow): Example of building an Inngest workflow with Mastra.
- [Example: Branching Paths | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/branching-paths): Example of using Mastra to create legacy workflows with branching paths based on intermediate results.
- [Example: Calling an Agent From a Workflow (Legacy) | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/calling-agent): Example of using Mastra to call an AI agent from within a legacy workflow step.
- [Example: Workflow (Legacy) with Conditional Branching (experimental) | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/conditional-branching): Example of using Mastra to create conditional branches in legacy workflows using if/else statements.
- [Example: Creating a Simple Workflow (Legacy) | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/creating-a-workflow): Example of using Mastra to define and execute a simple workflow with a single step.
- [Example: Workflow (Legacy) with Cyclical dependencies | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/cyclical-dependencies): Example of using Mastra to create legacy workflows with cyclical dependencies and conditional loops.
- [Example: Human in the Loop Workflow (Legacy) | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/human-in-the-loop): Example of using Mastra to create legacy workflows with human intervention points.
- [Example: Parallel Execution with Steps | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/parallel-steps): Example of using Mastra to execute multiple independent tasks in parallel within a workflow.
- [Example: Workflow (Legacy) with Sequential Steps | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/sequential-steps): Example of using Mastra to chain legacy workflow steps in a specific sequence, passing data between them.
- [Example: Workflow (Legacy) with Suspend and Resume | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/suspend-and-resume): Example of using Mastra to suspend and resume legacy workflow steps during execution.
- [Example: Tool as a Workflow step (Legacy) | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/using-a-tool-as-a-step): Example of using Mastra to integrate a custom tool as a step in a legacy workflow.
- [Example: Data Mapping with Workflow Variables (Legacy) | Workflows (Legacy)](https://mastra.ai/examples/workflows_legacy/workflow-variables): Learn how to use workflow variables to map data between steps in Mastra workflows.
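The RAG examples above all share one loop: chunk a document, embed the chunks, store the vectors, and query them. Below is a minimal sketch of that loop under stated assumptions: it presumes the `@mastra/rag` and `@mastra/pg` packages and the AI SDK's `embed`/`embedMany`, and option names such as the `recursive` strategy, `indexName`, and the 1536-dimension index size follow the linked example pages and may differ across versions.

```ts
import { openai } from "@ai-sdk/openai";
import { PgVector } from "@mastra/pg";
import { MDocument } from "@mastra/rag";
import { embed, embedMany } from "ai";

// 1. Chunk the source document (see "Chunk Text" / "Adjust Chunk Size").
const doc = MDocument.fromText("Long source text to index...");
const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50 });

// 2. Embed every chunk in one call (see "Embed Chunk Array").
const { embeddings } = await embedMany({
  model: openai.embedding("text-embedding-3-small"),
  values: chunks.map((chunk) => chunk.text),
});

// 3. Store the vectors (see "Upsert Embeddings"); the index dimension must
// match the embedding model's output size.
const store = new PgVector({ connectionString: process.env.POSTGRES_CONNECTION_STRING! });
await store.createIndex({ indexName: "docs", dimension: 1536 });
await store.upsert({
  indexName: "docs",
  vectors: embeddings,
  metadata: chunks.map((chunk) => ({ text: chunk.text })),
});

// 4. Query by embedding the question (see "Retrieving Top-K Results").
const { embedding } = await embed({
  model: openai.embedding("text-embedding-3-small"),
  value: "What does the document say about indexing?",
});
const matches = await store.query({ indexName: "docs", queryVector: embedding, topK: 3 });
console.log(matches);
```

The reranking and Graph RAG examples layer on top of step 4, reordering or re-traversing the matches before they reach the model.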
## EN - guides
- [Guide: Building an AI Recruiter | Mastra Workflows | Guides](https://mastra.ai/guides/guide/ai-recruiter): Guide on building a recruiter workflow in Mastra to gather and process candidate information using LLMs.
- [Guide: Building an AI Chef Assistant | Mastra Agent Guides](https://mastra.ai/guides/guide/chef-michel): Guide on creating a Chef Assistant agent in Mastra to help users cook meals with available ingredients (a minimal agent sketch in this spirit follows this list).
- [Guide: Building a Notes MCP Server | Mastra Guide](https://mastra.ai/guides/guide/notes-mcp-server): A step-by-step guide to creating a fully-featured MCP (Model Context Protocol) server for managing notes using the Mastra framework.
- [Guide: Building a Research Paper Assistant with RAG | Mastra RAG Guides](https://mastra.ai/guides/guide/research-assistant): Guide on creating an AI research assistant that can analyze and answer questions about academic papers using RAG.
- [Guide: Building an AI Stock Agent | Mastra Agents | Guides](https://mastra.ai/guides/guide/stock-agent): Guide on creating a simple stock agent in Mastra to fetch the last day's closing stock price for a given symbol.
- [Guide: Building an Agent that can search the web | Mastra Guide](https://mastra.ai/guides/guide/web-search): A step-by-step guide to creating an agent that can search the web.
- [Overview](https://mastra.ai/guides): Guides on building with Mastra.
- [Migration: AgentNetwork to .network() | Migration Guide](https://mastra.ai/guides/migrations/agentnetwork): Learn how to migrate from AgentNetwork primitives to .network() in Mastra.
- [Migration: Upgrade to Latest 0.x | Migration Guide](https://mastra.ai/guides/migrations/upgrade-to-latest-0x): Learn how to upgrade through breaking changes in pre-v1 versions of Mastra to reach the latest 0.x release.
- [Migration: VNext to Standard APIs | Migration Guide](https://mastra.ai/guides/migrations/vnext-to-standard-apis): Learn how to migrate from VNext methods to the new standard agent APIs in Mastra.
- [Next.js Quickstart](https://mastra.ai/guides/quickstarts/nextjs): Get started with Mastra, Next.js, and AI SDK UI. Quickly.
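Every guide above grows from the same seed: a single `Agent` wired to a model. A minimal sketch in the spirit of the Chef Assistant guide, assuming `@mastra/core` and the AI SDK's OpenAI provider; the agent name, instructions, and prompt here are illustrative rather than taken from the guide.

```ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// A bare-bones cooking assistant; the guides extend this seed with tools,
// memory, and workflows.
const chefAgent = new Agent({
  name: "chef-agent",
  instructions:
    "You are a helpful chef. Suggest a meal the user can cook with the ingredients they list.",
  model: openai("gpt-4o-mini"),
});

const result = await chefAgent.generate("I have eggs, rice, and spinach. What can I make?");
console.log(result.text);
```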
## EN - models
- [Embedding Models](https://mastra.ai/models/embeddings): Use embedding models through Mastra's model router for semantic search and RAG.
- [Custom Gateways | Models | Mastra](https://mastra.ai/models/gateways/custom-gateways): Create custom model gateways for private or specialized LLM deployments
- [Gateways](https://mastra.ai/models/gateways): Access AI models through gateway providers with caching, rate limiting, and analytics.
- [Netlify | Models | Mastra](https://mastra.ai/models/gateways/netlify): Use AI models through Netlify.
- [OpenRouter | Models | Mastra](https://mastra.ai/models/gateways/openrouter): Use AI models through OpenRouter.
- [Vercel | Models | Mastra](https://mastra.ai/models/gateways/vercel): Use AI models through Vercel.
- [Models](https://mastra.ai/models): Access 53+ AI providers and 1113+ models through Mastra's model router (see the sketch after this list).
- [AIHubMix](https://mastra.ai/models/providers/aihubmix): Use AIHubMix models via the AI SDK.
- [Alibaba (China) | Models | Mastra](https://mastra.ai/models/providers/alibaba-cn): Use Alibaba (China) models with Mastra. 61 models available.
- [Alibaba | Models | Mastra](https://mastra.ai/models/providers/alibaba): Use Alibaba models with Mastra. 39 models available.
- [Amazon Bedrock](https://mastra.ai/models/providers/amazon-bedrock): Use Amazon Bedrock models via the AI SDK.
- [Anthropic | Models | Mastra](https://mastra.ai/models/providers/anthropic): Use Anthropic models with Mastra. 20 models available.
- [Azure](https://mastra.ai/models/providers/azure): Use Azure models via the AI SDK.
- [Baseten | Models | Mastra](https://mastra.ai/models/providers/baseten): Use Baseten models with Mastra. 4 models available.
- [Cerebras | Models | Mastra](https://mastra.ai/models/providers/cerebras): Use Cerebras models with Mastra. 3 models available.
- [Chutes | Models | Mastra](https://mastra.ai/models/providers/chutes): Use Chutes models with Mastra. 52 models available.
- [Cloudflare Workers AI](https://mastra.ai/models/providers/cloudflare-workers-ai): Use Cloudflare Workers AI models via the AI SDK.
- [Cohere](https://mastra.ai/models/providers/cohere): Use Cohere models via the AI SDK.
- [Cortecs | Models | Mastra](https://mastra.ai/models/providers/cortecs): Use Cortecs models with Mastra. 11 models available.
- [Deep Infra | Models | Mastra](https://mastra.ai/models/providers/deepinfra): Use Deep Infra models with Mastra. 6 models available.
- [DeepSeek | Models | Mastra](https://mastra.ai/models/providers/deepseek): Use DeepSeek models with Mastra. 2 models available.
- [FastRouter | Models | Mastra](https://mastra.ai/models/providers/fastrouter): Use FastRouter models with Mastra. 14 models available.
- [Fireworks AI | Models | Mastra](https://mastra.ai/models/providers/fireworks-ai): Use Fireworks AI models with Mastra. 12 models available.
- [GitHub Models | Models | Mastra](https://mastra.ai/models/providers/github-models): Use GitHub Models models with Mastra. 55 models available.
- [Vertex](https://mastra.ai/models/providers/google-vertex): Use Vertex models via the AI SDK.
- [Google | Models | Mastra](https://mastra.ai/models/providers/google): Use Google models with Mastra. 25 models available.
- [Groq | Models | Mastra](https://mastra.ai/models/providers/groq): Use Groq models with Mastra. 17 models available.
- [Hugging Face | Models | Mastra](https://mastra.ai/models/providers/huggingface): Use Hugging Face models with Mastra. 14 models available.
- [iFlow | Models | Mastra](https://mastra.ai/models/providers/iflowcn): Use iFlow models with Mastra. 18 models available.
- [Inception | Models | Mastra](https://mastra.ai/models/providers/inception): Use Inception models with Mastra. 2 models available.
- [Providers](https://mastra.ai/models/providers): Direct access to AI model providers.
- [Inference | Models | Mastra](https://mastra.ai/models/providers/inference): Use Inference models with Mastra. 9 models available.
- [Llama | Models | Mastra](https://mastra.ai/models/providers/llama): Use Llama models with Mastra. 7 models available.
- [LMStudio | Models | Mastra](https://mastra.ai/models/providers/lmstudio): Use LMStudio models with Mastra. 3 models available.
- [LucidQuery AI | Models | Mastra](https://mastra.ai/models/providers/lucidquery): Use LucidQuery AI models with Mastra. 2 models available.
- [Minimax | Models | Mastra](https://mastra.ai/models/providers/minimax): Use Minimax models with Mastra. 1 model available.
- [Mistral | Models | Mastra](https://mastra.ai/models/providers/mistral): Use Mistral models with Mastra. 19 models available.
- [ModelScope | Models | Mastra](https://mastra.ai/models/providers/modelscope): Use ModelScope models with Mastra. 7 models available.
- [Moonshot AI (China) | Models | Mastra](https://mastra.ai/models/providers/moonshotai-cn): Use Moonshot AI (China) models with Mastra. 5 models available.
- [Moonshot AI | Models | Mastra](https://mastra.ai/models/providers/moonshotai): Use Moonshot AI models with Mastra. 5 models available.
- [Morph | Models | Mastra](https://mastra.ai/models/providers/morph): Use Morph models with Mastra. 3 models available.
- [Nebius Token Factory | Models | Mastra](https://mastra.ai/models/providers/nebius): Use Nebius Token Factory models with Mastra. 15 models available.
- [Nvidia | Models | Mastra](https://mastra.ai/models/providers/nvidia): Use Nvidia models with Mastra. 20 models available.
- [Ollama](https://mastra.ai/models/providers/ollama): Use Ollama models via the AI SDK.
- [OpenAI | Models | Mastra](https://mastra.ai/models/providers/openai): Use OpenAI models with Mastra. 35 models available.
- [OpenCode Zen | Models | Mastra](https://mastra.ai/models/providers/opencode): Use OpenCode Zen models with Mastra. 21 models available.
- [OVHcloud AI Endpoints | Models | Mastra](https://mastra.ai/models/providers/ovhcloud): Use OVHcloud AI Endpoints models with Mastra. 15 models available.
- [Perplexity | Models | Mastra](https://mastra.ai/models/providers/perplexity): Use Perplexity models with Mastra. 4 models available.
- [Poe | Models | Mastra](https://mastra.ai/models/providers/poe): Use Poe models with Mastra. 100 models available.
- [Requesty | Models | Mastra](https://mastra.ai/models/providers/requesty): Use Requesty models with Mastra. 17 models available.
- [Scaleway | Models | Mastra](https://mastra.ai/models/providers/scaleway): Use Scaleway models with Mastra. 13 models available.
- [SiliconFlow | Models | Mastra](https://mastra.ai/models/providers/siliconflow): Use SiliconFlow models with Mastra. 72 models available.
- [submodel | Models | Mastra](https://mastra.ai/models/providers/submodel): Use submodel models with Mastra. 9 models available.
- [Synthetic | Models | Mastra](https://mastra.ai/models/providers/synthetic): Use Synthetic models with Mastra. 23 models available.
- [Together AI | Models | Mastra](https://mastra.ai/models/providers/togetherai): Use Together AI models with Mastra. 6 models available.
- [Upstage | Models | Mastra](https://mastra.ai/models/providers/upstage): Use Upstage models with Mastra. 2 models available.
- [Venice AI | Models | Mastra](https://mastra.ai/models/providers/venice): Use Venice AI models with Mastra. 14 models available.
- [Vultr | Models | Mastra](https://mastra.ai/models/providers/vultr): Use Vultr models with Mastra. 5 models available.
- [Weights & Biases | Models | Mastra](https://mastra.ai/models/providers/wandb): Use Weights & Biases models with Mastra. 10 models available.
- [xAI | Models | Mastra](https://mastra.ai/models/providers/xai): Use xAI models with Mastra. 22 models available.
- [Z.AI Coding Plan | Models | Mastra](https://mastra.ai/models/providers/zai-coding-plan): Use Z.AI Coding Plan models with Mastra. 5 models available.
- [Z.AI | Models | Mastra](https://mastra.ai/models/providers/zai): Use Z.AI models with Mastra. 5 models available.
- [ZenMux | Models | Mastra](https://mastra.ai/models/providers/zenmux): Use ZenMux models with Mastra. 21 models available.
- [Zhipu AI Coding Plan | Models | Mastra](https://mastra.ai/models/providers/zhipuai-coding-plan): Use Zhipu AI Coding Plan models with Mastra. 5 models available.
- [Zhipu AI | Models | Mastra](https://mastra.ai/models/providers/zhipuai): Use Zhipu AI models with Mastra. 5 models available.
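The provider pages above all funnel through the same configuration point: the Models overview describes a model router that addresses models as `"provider/model"` strings, alongside the usual AI SDK provider instances. A sketch of both forms, assuming `@mastra/core`; the string syntax and the model IDs shown are examples to check against the overview page for your version.

```ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Form 1: an explicit AI SDK provider instance.
const sdkAgent = new Agent({
  name: "sdk-agent",
  instructions: "Answer briefly.",
  model: openai("gpt-4o-mini"),
});

// Form 2: a "provider/model" string resolved by Mastra's model router,
// which is what the provider pages above enumerate.
const routedAgent = new Agent({
  name: "routed-agent",
  instructions: "Answer briefly.",
  model: "openai/gpt-4o-mini", // e.g. "groq/llama-3.3-70b-versatile"
});
```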
## EN - reference
- [Reference: Agent Class | Agents](https://mastra.ai/reference/agents/agent): Documentation for the `Agent` class in Mastra, which provides the foundation for creating AI agents with various capabilities.
- [Reference: Agent.generate() | Agents](https://mastra.ai/reference/agents/generate): Documentation for the `Agent.generate()` method in Mastra agents, which enables non-streaming generation of responses with enhanced capabilities.
- [Reference: Agent.generateLegacy() (Legacy) | Agents](https://mastra.ai/reference/agents/generateLegacy): Documentation for the legacy `Agent.generateLegacy()` method in Mastra agents. This method is deprecated and will be removed in a future version.
- [Reference: Agent.getDefaultGenerateOptions() | Agents](https://mastra.ai/reference/agents/getDefaultGenerateOptions): Documentation for the `Agent.getDefaultGenerateOptions()` method in Mastra agents, which retrieves the default options used for generate calls.
- [Reference: Agent.getDefaultStreamOptions() | Agents](https://mastra.ai/reference/agents/getDefaultStreamOptions): Documentation for the `Agent.getDefaultStreamOptions()` method in Mastra agents, which retrieves the default options used for stream calls.
- [Reference: Agent.getDescription() | Agents](https://mastra.ai/reference/agents/getDescription): Documentation for the `Agent.getDescription()` method in Mastra agents, which retrieves the agent's description.
- [Reference: Agent.getInstructions() | Agents](https://mastra.ai/reference/agents/getInstructions): Documentation for the `Agent.getInstructions()` method in Mastra agents, which retrieves the instructions that guide the agent's behavior.
- [Reference: Agent.getLLM() | Agents](https://mastra.ai/reference/agents/getLLM): Documentation for the `Agent.getLLM()` method in Mastra agents, which retrieves the language model instance.
- [Reference: Agent.getMemory() | Agents](https://mastra.ai/reference/agents/getMemory): Documentation for the `Agent.getMemory()` method in Mastra agents, which retrieves the memory system associated with the agent.
- [Reference: Agent.getModel() | Agents](https://mastra.ai/reference/agents/getModel): Documentation for the `Agent.getModel()` method in Mastra agents, which retrieves the language model that powers the agent.
- [Reference: Agent.getScorers() | Agents](https://mastra.ai/reference/agents/getScorers): Documentation for the `Agent.getScorers()` method in Mastra agents, which retrieves the scoring configuration.
- [Reference: Agent.getTools() | Agents](https://mastra.ai/reference/agents/getTools): Documentation for the `Agent.getTools()` method in Mastra agents, which retrieves the tools that the agent can use.
- [Reference: Agent.getVoice() | Agents](https://mastra.ai/reference/agents/getVoice): Documentation for the `Agent.getVoice()` method in Mastra agents, which retrieves the voice provider for speech capabilities.
- [Reference: Agent.getWorkflows() | Agents](https://mastra.ai/reference/agents/getWorkflows): Documentation for the `Agent.getWorkflows()` method in Mastra agents, which retrieves the workflows that the agent can execute.
- [Reference: Agent.listAgents() | Agents](https://mastra.ai/reference/agents/listAgents): Documentation for the `Agent.listAgents()` method in Mastra agents, which retrieves the sub-agents that the agent can access.
- [Reference: Agent.listScorers() | Agents](https://mastra.ai/reference/agents/listScorers): Documentation for the `Agent.listScorers()` method in Mastra agents, which retrieves the scoring configuration.
- [Reference: Agent.listWorkflows() | Agents](https://mastra.ai/reference/agents/listWorkflows): Documentation for the `Agent.listWorkflows()` method in Mastra agents, which retrieves the workflows that the agent can execute.
- [Reference: Agent.network() | Agents](https://mastra.ai/reference/agents/network): Documentation for the `Agent.network()` method in Mastra agents, which enables multi-agent collaboration and routing.
- [Reference: MastraAuthAuth0 Class | Auth](https://mastra.ai/reference/auth/auth0): API reference for the MastraAuthAuth0 class, which authenticates Mastra applications using Auth0 authentication.
- [Reference: MastraAuthClerk Class | Auth](https://mastra.ai/reference/auth/clerk): API reference for the MastraAuthClerk class, which authenticates Mastra applications using Clerk authentication.
- [Reference: MastraAuthFirebase Class | Auth](https://mastra.ai/reference/auth/firebase): API reference for the MastraAuthFirebase class, which authenticates Mastra applications using Firebase Authentication.
- [Reference: MastraJwtAuth Class | Auth](https://mastra.ai/reference/auth/jwt): API reference for the MastraJwtAuth class, which authenticates Mastra applications using JSON Web Tokens.
- [Reference: MastraAuthSupabase Class | Auth](https://mastra.ai/reference/auth/supabase): API reference for the MastraAuthSupabase class, which authenticates Mastra applications using Supabase Auth.
- [Reference: MastraAuthWorkos Class | Auth](https://mastra.ai/reference/auth/workos): API reference for the MastraAuthWorkos class, which authenticates Mastra applications using WorkOS authentication.
- [Reference: create-mastra | CLI](https://mastra.ai/reference/cli/create-mastra): Documentation for the create-mastra command, which creates a new Mastra project with interactive setup options.
- [Reference: CLI Commands | CLI](https://mastra.ai/reference/cli/mastra): Documentation for the Mastra CLI to develop, build, and start your project.
- [Reference: Agents API | Client SDK](https://mastra.ai/reference/client-js/agents): Learn how to interact with Mastra AI agents, including generating responses, streaming interactions, and managing agent tools using the client-js SDK.
- [Reference: Error Handling | Client SDK](https://mastra.ai/reference/client-js/error-handling): Learn about the built-in retry mechanism and error handling capabilities in the Mastra client-js SDK.
- [Reference: Logs API | Client SDK](https://mastra.ai/reference/client-js/logs): Learn how to access and query system logs and debugging information in Mastra using the client-js SDK.
- [Reference: Mastra Client SDK | Client SDK](https://mastra.ai/reference/client-js/mastra-client): Learn how to interact with Mastra using the client-js SDK.
- [Reference: Memory API | Client SDK](https://mastra.ai/reference/client-js/memory): Learn how to manage conversation threads and message history in Mastra using the client-js SDK.
- [Reference: Observability API | Client SDK](https://mastra.ai/reference/client-js/observability): Learn how to retrieve AI traces, monitor application performance, and score traces using the client-js SDK.
- [Reference: Telemetry API | Client SDK](https://mastra.ai/reference/client-js/telemetry): Learn how to retrieve and analyze traces from your Mastra application for monitoring and debugging using the client-js SDK.
- [Reference: Tools API | Client SDK](https://mastra.ai/reference/client-js/tools): Learn how to interact with and execute tools available in the Mastra platform using the client-js SDK.
- [Reference: Vectors API | Client SDK](https://mastra.ai/reference/client-js/vectors): Learn how to work with vector embeddings for semantic search and similarity matching in Mastra using the client-js SDK.
- [Reference: Workflows (Legacy) API | Client SDK](https://mastra.ai/reference/client-js/workflows-legacy): Learn how to interact with and execute automated legacy workflows in Mastra using the client-js SDK.
- [Reference: Workflows API | Client SDK](https://mastra.ai/reference/client-js/workflows): Learn how to interact with and execute automated workflows in Mastra using the client-js SDK.
- [Reference: Mastra.getAgent() | Core](https://mastra.ai/reference/core/getAgent): Documentation for the `Mastra.getAgent()` method in Mastra, which retrieves an agent by name.
- [Reference: Mastra.getAgentById() | Core](https://mastra.ai/reference/core/getAgentById): Documentation for the `Mastra.getAgentById()` method in Mastra, which retrieves an agent by its ID.
- [Reference: Mastra.getAgents() | Core](https://mastra.ai/reference/core/getAgents): Documentation for the `Mastra.getAgents()` method in Mastra, which retrieves all configured agents.
- [Reference: Mastra.getDeployer() | Core](https://mastra.ai/reference/core/getDeployer): Documentation for the `Mastra.getDeployer()` method in Mastra, which retrieves the configured deployer instance.
- [Reference: Mastra.getLogger() | Core](https://mastra.ai/reference/core/getLogger): Documentation for the `Mastra.getLogger()` method in Mastra, which retrieves the configured logger instance.
- [Reference: Mastra.getLogs() | Core](https://mastra.ai/reference/core/getLogs): Documentation for the `Mastra.getLogs()` method in Mastra, which retrieves all logs for a specific transport ID.
- [Reference: Mastra.getLogsByRunId() | Core](https://mastra.ai/reference/core/getLogsByRunId): Documentation for the `Mastra.getLogsByRunId()` method in Mastra, which retrieves logs for a specific run ID and transport ID.
- [Reference: Mastra.getMCPServer() | Core](https://mastra.ai/reference/core/getMCPServer): Documentation for the `Mastra.getMCPServer()` method in Mastra, which retrieves a specific MCP server instance by ID and optional version.
- [Reference: Mastra.getMCPServers() | Core](https://mastra.ai/reference/core/getMCPServers): Documentation for the `Mastra.getMCPServers()` method in Mastra, which retrieves all registered MCP server instances.
- [Reference: Mastra.getMemory() | Core](https://mastra.ai/reference/core/getMemory): Documentation for the `Mastra.getMemory()` method in Mastra, which retrieves the configured memory instance.
- [Reference: getScorer() | Core](https://mastra.ai/reference/core/getScorer): Documentation for the `getScorer()` method in Mastra, which retrieves a specific scorer by its registration key.
- [Reference: getScorerByName() | Core](https://mastra.ai/reference/core/getScorerByName): Documentation for the `getScorerByName()` method in Mastra, which retrieves a scorer by its name property rather than registration key.
- [Reference: getScorers() | Core](https://mastra.ai/reference/core/getScorers): Documentation for the `getScorers()` method in Mastra, which returns all registered scorers for evaluating AI outputs.
- [Reference: Mastra.getServer() | Core](https://mastra.ai/reference/core/getServer): Documentation for the `Mastra.getServer()` method in Mastra, which retrieves the configured server configuration.
- [Reference: Mastra.getStorage() | Core](https://mastra.ai/reference/core/getStorage): Documentation for the `Mastra.getStorage()` method in Mastra, which retrieves the configured storage instance.
- [Reference: Mastra.getTelemetry() | Core](https://mastra.ai/reference/core/getTelemetry): Documentation for the `Mastra.getTelemetry()` method in Mastra, which retrieves the configured telemetry instance.
- [Reference: Mastra.getVector() | Core](https://mastra.ai/reference/core/getVector): Documentation for the `Mastra.getVector()` method in Mastra, which retrieves a vector store by name.
- [Reference: Mastra.getVectors() | Core](https://mastra.ai/reference/core/getVectors): Documentation for the `Mastra.getVectors()` method in Mastra, which retrieves all configured vector stores.
- [Reference: Mastra.getWorkflow() | Core](https://mastra.ai/reference/core/getWorkflow): Documentation for the `Mastra.getWorkflow()` method in Mastra, which retrieves a workflow by ID.
- [Reference: Mastra.getWorkflows() | Core](https://mastra.ai/reference/core/getWorkflows): Documentation for the `Mastra.getWorkflows()` method in Mastra, which retrieves all configured workflows.
- [Reference: Mastra.listLogs() | Core](https://mastra.ai/reference/core/listLogs): Documentation for the `Mastra.listLogs()` method in Mastra, which retrieves all logs for a specific transport ID.
- [Reference: Mastra.listLogsByRunId() | Core](https://mastra.ai/reference/core/listLogsByRunId): Documentation for the `Mastra.listLogsByRunId()` method in Mastra, which retrieves logs for a specific run ID and transport ID.
- [Reference: listScorers() | Core](https://mastra.ai/reference/core/listScorers): Documentation for the `listScorers()` method in Mastra, which returns all registered scorers for evaluating AI outputs.
- [Reference: Mastra.listWorkflows() | Core](https://mastra.ai/reference/core/listWorkflows): Documentation for the `Mastra.listWorkflows()` method in Mastra, which retrieves all configured workflows.
- [Reference: Mastra Class | Core](https://mastra.ai/reference/core/mastra-class): Documentation for the `Mastra` class in Mastra, the core entry point for managing agents, workflows, MCP servers, and server endpoints.
- [Reference: MastraModelGateway | Core](https://mastra.ai/reference/core/mastra-model-gateway): Base class for creating custom model gateways.
- [Reference: Mastra.setLogger() | Core](https://mastra.ai/reference/core/setLogger): Documentation for the `Mastra.setLogger()` method in Mastra, which sets the logger for all components (agents, workflows, etc.).
- [Reference: Mastra.setStorage() | Core](https://mastra.ai/reference/core/setStorage): Documentation for the `Mastra.setStorage()` method in Mastra, which sets the storage instance for the Mastra instance.
- [Reference: Mastra.setTelemetry() | Core](https://mastra.ai/reference/core/setTelemetry): Documentation for the `Mastra.setTelemetry()` method in Mastra, which sets the telemetry configuration for all components.
- [Reference: CloudflareDeployer | Deployer](https://mastra.ai/reference/deployer/cloudflare): Documentation for the CloudflareDeployer class, which deploys Mastra applications to Cloudflare Workers.
- [Reference: Deployer | Deployer](https://mastra.ai/reference/deployer/deployer): Documentation for the Deployer abstract class, which handles packaging and deployment of Mastra applications.
- [Reference: NetlifyDeployer | Deployer](https://mastra.ai/reference/deployer/netlify): Documentation for the NetlifyDeployer class, which deploys Mastra applications to Netlify Functions.
- [Reference: VercelDeployer | Deployer](https://mastra.ai/reference/deployer/vercel): Documentation for the VercelDeployer class, which deploys Mastra applications to Vercel.
- [Reference: AnswerRelevancyMetric | Evals](https://mastra.ai/reference/evals/answer-relevancy): Documentation for the Answer Relevancy Metric in Mastra, which evaluates how well LLM outputs address the input query.
- [Reference: BiasMetric | Evals](https://mastra.ai/reference/evals/bias): Documentation for the Bias Metric in Mastra, which evaluates LLM outputs for various forms of bias, including gender, political, racial/ethnic, or geographical bias.
- [Reference: CompletenessMetric | Evals](https://mastra.ai/reference/evals/completeness): Documentation for the Completeness Metric in Mastra, which evaluates how thoroughly LLM outputs cover key elements present in the input.
- [Reference: ContentSimilarityMetric | Evals](https://mastra.ai/reference/evals/content-similarity): Documentation for the Content Similarity Metric in Mastra, which measures textual similarity between strings and provides a matching score.
- [Reference: ContextPositionMetric | Evals](https://mastra.ai/reference/evals/context-position): Documentation for the Context Position Metric in Mastra, which evaluates the ordering of context nodes based on their relevance to the query and output.
- [Reference: ContextPrecisionMetric | Evals](https://mastra.ai/reference/evals/context-precision): Documentation for the Context Precision Metric in Mastra, which evaluates the relevance and precision of retrieved context nodes for generating expected outputs.
- [Reference: ContextRelevancyMetric | Evals](https://mastra.ai/reference/evals/context-relevancy): Documentation for the Context Relevancy Metric, which evaluates the relevance of retrieved context in RAG pipelines.
- [Reference: ContextualRecallMetric | Evals](https://mastra.ai/reference/evals/contextual-recall): Documentation for the Contextual Recall Metric, which evaluates the completeness of LLM responses in incorporating relevant context.
- [Reference: FaithfulnessMetric Reference | Evals](https://mastra.ai/reference/evals/faithfulness): Documentation for the Faithfulness Metric in Mastra, which evaluates the factual accuracy of LLM outputs compared to the provided context.
- [Reference: HallucinationMetric | Evals](https://mastra.ai/reference/evals/hallucination): Documentation for the Hallucination Metric in Mastra, which evaluates the factual correctness of LLM outputs by identifying contradictions with provided context.
- [Reference: KeywordCoverageMetric | Evals](https://mastra.ai/reference/evals/keyword-coverage): Documentation for the Keyword Coverage Metric in Mastra, which evaluates how well LLM outputs cover important keywords from the input.
- [Reference: PromptAlignmentMetric | Evals](https://mastra.ai/reference/evals/prompt-alignment): Documentation for the Prompt Alignment Metric in Mastra, which evaluates how well LLM outputs adhere to given prompt instructions.
- [Reference: SummarizationMetric | Evals](https://mastra.ai/reference/evals/summarization): Documentation for the Summarization Metric in Mastra, which evaluates the quality of LLM-generated summaries for content and factual accuracy.
- [Reference: TextualDifferenceMetric | Evals](https://mastra.ai/reference/evals/textual-difference): Documentation for the Textual Difference Metric in Mastra, which measures textual differences between strings using sequence matching.
- [Reference: ToneConsistencyMetric | Evals](https://mastra.ai/reference/evals/tone-consistency): Documentation for the Tone Consistency Metric in Mastra, which evaluates emotional tone and sentiment consistency in text.
- [Reference: ToxicityMetric | Evals](https://mastra.ai/reference/evals/toxicity): Documentation for the Toxicity Metric in Mastra, which evaluates LLM outputs for racist, biased, or toxic elements.
- [Reference: Overview](https://mastra.ai/reference): Reference documentation on Mastra's APIs and tools.
- [Reference: .after() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/after): Documentation for the `after()` method in workflows (legacy), enabling branching and merging paths.
- [Reference: afterEvent() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/afterEvent): Reference for the afterEvent method in Mastra workflows that creates event-based suspension points.
- [Reference: Workflow.commit() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/commit): Documentation for the `.commit()` method in workflows, which re-initializes the workflow machine with the current step configuration.
- [Reference: Workflow.createRun() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/createRun): Documentation for the `.createRun()` method in workflows (legacy), which initializes a new workflow run instance.
- [Reference: Workflow.else() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/else): Documentation for the `.else()` method in Mastra workflows, which creates an alternative branch when an if condition is false.
- [Reference: Event-Driven Workflows | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/events): Learn how to create event-driven workflows using afterEvent and resumeWithEvent methods in Mastra.
- [Reference: Workflow.execute() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/execute): Documentation for the `.execute()` method in Mastra workflows, which runs workflow steps and returns results.
- [Reference: Workflow.if() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/if): Documentation for the `.if()` method in Mastra workflows, which creates conditional branches based on specified conditions.
- [Reference: run.resume() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/resume): Documentation for the `.resume()` method in workflows, which continues execution of a suspended workflow step.
- [Reference: resumeWithEvent() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/resumeWithEvent): Reference for the resumeWithEvent method that resumes suspended workflows using event data.
- [Reference: Snapshots | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/snapshots): Technical reference on snapshots in Mastra - the serialized workflow state that enables suspend and resume functionality.
- [Reference: start() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/start): Documentation for the `start()` method in workflows, which begins execution of a workflow run.
- [Reference: Step | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/step-class): Documentation for the Step class, which defines individual units of work within a workflow.
- [Reference: StepCondition | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/step-condition): Documentation for the step condition class in workflows, which determines whether a step should execute based on the output of previous steps or trigger data.
- [Reference: Workflow.step() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/step-function): Documentation for the `.step()` method in workflows, which adds a new step to the workflow.
- [Reference: StepOptions | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/step-options): Documentation for the step options in workflows, which control variable mapping, execution conditions, and other runtime behavior.
- [Reference: Step Retries | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/step-retries): Automatically retry failed steps in Mastra workflows with configurable retry policies.
- [Reference: suspend() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/suspend): Documentation for the suspend function in Mastra workflows, which pauses execution until resumed.
- [Reference: Workflow.then() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/then): Documentation for the `.then()` method in workflows, which creates sequential dependencies between steps.
- [Reference: Workflow.until() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/until): Documentation for the `.until()` method in Mastra workflows, which repeats a step until a specified condition becomes true.
- [Reference: run.watch() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/watch): Documentation for the `.watch()` method in workflows, which monitors the status of a workflow run.
- [Reference: Workflow.while() | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/while): Documentation for the `.while()` method in Mastra workflows, which repeats a step as long as a specified condition remains true.
- [Reference: Workflow Class | Legacy Workflows](https://mastra.ai/reference/legacyWorkflows/workflow): Documentation for the Workflow class in Mastra, which enables you to create state machines for complex sequences of operations with conditional branching and data validation.
- [Reference: Memory.createThread() | Memory](https://mastra.ai/reference/memory/createThread): Documentation for the `Memory.createThread()` method in Mastra, which creates a new conversation thread in the memory system.
- [Reference: Memory.deleteMessages() | Memory](https://mastra.ai/reference/memory/deleteMessages): Documentation for the `Memory.deleteMessages()` method in Mastra, which deletes multiple messages by their IDs.
- [Reference: Memory.getThreadById() | Memory](https://mastra.ai/reference/memory/getThreadById): Documentation for the `Memory.getThreadById()` method in Mastra, which retrieves a specific thread by its ID.
- [Reference: Memory.getThreadsByResourceId() | Memory](https://mastra.ai/reference/memory/getThreadsByResourceId): Documentation for the `Memory.getThreadsByResourceId()` method in Mastra, which retrieves all threads that belong to a specific resource.
- [Reference: Memory.getThreadsByResourceIdPaginated() | Memory](https://mastra.ai/reference/memory/getThreadsByResourceIdPaginated): Documentation for the `Memory.getThreadsByResourceIdPaginated()` method in Mastra, which retrieves threads associated with a specific resource ID with pagination support.
- [Reference: Memory Class | Memory](https://mastra.ai/reference/memory/memory-class): Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.
- [Reference: Memory.query() | Memory](https://mastra.ai/reference/memory/query): Documentation for the `Memory.query()` method in Mastra, which retrieves messages from a specific thread with support for pagination, filtering options, and semantic search.
- [Reference: AITracing | Observability](https://mastra.ai/reference/observability/ai-tracing/ai-tracing): Core AI Tracing classes and methods.
- [Reference: Configuration | Observability](https://mastra.ai/reference/observability/ai-tracing/configuration): AI Tracing configuration types and registry functions.
- [Reference: ArizeExporter | Observability](https://mastra.ai/reference/observability/ai-tracing/exporters/arize): Arize exporter for AI tracing using OpenInference.
- [Reference: BraintrustExporter | Observability](https://mastra.ai/reference/observability/ai-tracing/exporters/braintrust): Braintrust exporter for AI tracing.
- [Reference: CloudExporter | Observability](https://mastra.ai/reference/observability/ai-tracing/exporters/cloud-exporter): API reference for the CloudExporter.
- [Reference: ConsoleExporter | Observability](https://mastra.ai/reference/observability/ai-tracing/exporters/console-exporter): API reference for the ConsoleExporter.
- [Reference: DefaultExporter | Observability](https://mastra.ai/reference/observability/ai-tracing/exporters/default-exporter): API reference for the DefaultExporter.
- [Reference: LangfuseExporter | Observability](https://mastra.ai/reference/observability/ai-tracing/exporters/langfuse): Langfuse exporter for AI tracing.
- [Reference: LangSmithExporter | Observability](https://mastra.ai/reference/observability/ai-tracing/exporters/langsmith): LangSmith exporter for AI tracing.
- [Reference: OtelExporter | Observability](https://mastra.ai/reference/observability/ai-tracing/exporters/otel): OpenTelemetry exporter for AI tracing.
- [Reference: Interfaces | Observability](https://mastra.ai/reference/observability/ai-tracing/interfaces): AI Tracing type definitions and interfaces.
- [Reference: SensitiveDataFilter | Observability](https://mastra.ai/reference/observability/ai-tracing/processors/sensitive-data-filter): API reference for the SensitiveDataFilter processor.
- [Reference: Span | Observability](https://mastra.ai/reference/observability/ai-tracing/span): Span interfaces, methods, and lifecycle events.
- [Reference: PinoLogger | Observability](https://mastra.ai/reference/observability/logging/pino-logger): Documentation for PinoLogger, which provides methods to record events at various severity levels.
- [Reference: `OtelConfig` | Observability](https://mastra.ai/reference/observability/otel-tracing/otel-config): Documentation for the OtelConfig object, which configures OpenTelemetry instrumentation, tracing, and exporting behavior.
- [Reference: Arize AX | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/arize-ax): Documentation for integrating Arize AX with Mastra, a comprehensive AI observability platform for monitoring and evaluating LLM applications.
- [Reference: Arize Phoenix | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/arize-phoenix): Documentation for integrating Arize Phoenix with Mastra, an open-source AI observability platform for monitoring and evaluating LLM applications.
- [Reference: Braintrust | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/braintrust): Documentation for integrating Braintrust with Mastra, an evaluation and monitoring platform for LLM applications.
- [Reference: Dash0 | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/dash0): Documentation for integrating Mastra with Dash0, an OpenTelemetry-native observability solution.
- [Reference: OTLP Providers | Observability](https://mastra.ai/reference/observability/otel-tracing/providers): Overview of OTLP observability providers.
- [Reference: Keywords AI Integration | Mastra Observability Docs](https://mastra.ai/reference/observability/otel-tracing/providers/keywordsai): Documentation for integrating Keywords AI (an observability platform for LLM applications) with Mastra.
- [Reference: Laminar | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/laminar): Documentation for integrating Laminar with Mastra, a specialized observability platform for LLM applications.
- [Reference: Langfuse | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/langfuse): Documentation for integrating Langfuse with Mastra, an open-source observability platform for LLM applications.
- [Reference: LangSmith | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/langsmith): Documentation for integrating LangSmith with Mastra, a platform for debugging, testing, evaluating, and monitoring LLM applications.
- [Reference: LangWatch | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/langwatch): Documentation for integrating LangWatch with Mastra, a specialized observability platform for LLM applications.
- [Reference: New Relic | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/new-relic): Documentation for integrating New Relic with Mastra, a comprehensive observability platform supporting OpenTelemetry for full-stack monitoring.
- [Reference: SigNoz | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/signoz): Documentation for integrating SigNoz with Mastra, an open-source APM and observability platform providing full-stack monitoring through OpenTelemetry.
- [Reference: Traceloop | Observability](https://mastra.ai/reference/observability/otel-tracing/providers/traceloop): Documentation for integrating Traceloop with Mastra, an OpenTelemetry-native observability platform for LLM applications.
- [Reference: Batch Parts Processor | Processors](https://mastra.ai/reference/processors/batch-parts-processor): Documentation for the BatchPartsProcessor in Mastra, which batches multiple stream parts together to reduce frequency of emissions.
- [Reference: Language Detector | Processors](https://mastra.ai/reference/processors/language-detector): Documentation for the LanguageDetector in Mastra, which detects language and can translate content in AI responses.
- [Reference: Moderation Processor | Processors](https://mastra.ai/reference/processors/moderation-processor): Documentation for the ModerationProcessor in Mastra, which provides content moderation using LLM to detect inappropriate content across multiple categories.
- [Reference: PII Detector | Processors](https://mastra.ai/reference/processors/pii-detector): Documentation for the PIIDetector in Mastra, which detects and redacts personally identifiable information (PII) from AI responses.
- [Reference: Prompt Injection Detector | Processors](https://mastra.ai/reference/processors/prompt-injection-detector): Documentation for the PromptInjectionDetector in Mastra, which detects prompt injection attempts in user input.
- [Reference: System Prompt Scrubber | Processors](https://mastra.ai/reference/processors/system-prompt-scrubber): Documentation for the SystemPromptScrubber in Mastra, which detects and redacts system prompts from AI responses.
- [Reference: Token Limiter Processor | Processors](https://mastra.ai/reference/processors/token-limiter-processor): Documentation for the TokenLimiterProcessor in Mastra, which limits the number of tokens in AI responses.
- [Reference: Unicode Normalizer | Processors](https://mastra.ai/reference/processors/unicode-normalizer): Documentation for the UnicodeNormalizer in Mastra, which normalizes Unicode text to ensure consistent formatting and remove potentially problematic characters.
- [Reference: .chunk() | RAG](https://mastra.ai/reference/rag/chunk): Documentation for the chunk function in Mastra, which splits documents into smaller segments using various strategies.
- [Reference: DatabaseConfig | RAG](https://mastra.ai/reference/rag/database-config): API reference for database-specific configuration types used with vector query tools in Mastra RAG systems.
- [Reference: MDocument | Document Processing | RAG](https://mastra.ai/reference/rag/document): Documentation for the MDocument class in Mastra, which handles document processing and chunking.
- [Reference: Embed | RAG](https://mastra.ai/reference/rag/embeddings): Documentation for embedding functionality in Mastra using the AI SDK.
- [Reference: ExtractParams | RAG](https://mastra.ai/reference/rag/extract-params): Documentation for metadata extraction configuration in Mastra.
- [Reference: GraphRAG | RAG](https://mastra.ai/reference/rag/graph-rag): Documentation for the GraphRAG class in Mastra, which implements a graph-based approach to retrieval augmented generation.
- [Reference: Metadata Filters | RAG](https://mastra.ai/reference/rag/metadata-filters): Documentation for metadata filtering capabilities in Mastra, which allow for precise querying of vector search results across different vector stores.
- [Reference: rerank() | RAG](https://mastra.ai/reference/rag/rerank): Documentation for the rerank function in Mastra, which provides advanced reranking capabilities for vector search results.
- [Reference: rerankWithScorer() | RAG](https://mastra.ai/reference/rag/rerankWithScorer): Documentation for the rerankWithScorer function in Mastra, which provides advanced reranking capabilities for vector search results.
- [Reference: Answer Relevancy Scorer | Scorers](https://mastra.ai/reference/scorers/answer-relevancy): Documentation for the Answer Relevancy Scorer in Mastra, which evaluates how well LLM outputs address the input query.
- [Reference: Answer Similarity Scorer | Scorers](https://mastra.ai/reference/scorers/answer-similarity): Documentation for the Answer Similarity Scorer in Mastra, which compares agent outputs against ground truth answers for CI/CD testing.
- [Reference: Bias Scorer | Scorers](https://mastra.ai/reference/scorers/bias): Documentation for the Bias Scorer in Mastra, which evaluates LLM outputs for various forms of bias, including gender, political, racial/ethnic, or geographical bias.
- [Reference: Completeness Scorer | Scorers](https://mastra.ai/reference/scorers/completeness): Documentation for the Completeness Scorer in Mastra, which evaluates how thoroughly LLM outputs cover key elements present in the input.
- [Reference: Content Similarity Scorer | Scorers](https://mastra.ai/reference/scorers/content-similarity): Documentation for the Content Similarity Scorer in Mastra, which measures textual similarity between strings and provides a matching score.
- [Reference: Context Precision Scorer | Scorers](https://mastra.ai/reference/scorers/context-precision): Documentation for the Context Precision Scorer in Mastra. Evaluates the relevance and precision of retrieved context for generating expected outputs using Mean Average Precision.
- [Reference: Context Relevance Scorer | Scorers](https://mastra.ai/reference/scorers/context-relevance): Documentation for the Context Relevance Scorer in Mastra. Evaluates the relevance and utility of provided context for generating agent responses using weighted relevance scoring.
- [Reference: createScorer | Scorers](https://mastra.ai/reference/scorers/create-scorer): Documentation for creating custom scorers in Mastra, allowing users to define their own evaluation logic using either JavaScript functions or LLM-based prompts.
- [Reference: Faithfulness Scorer | Scorers](https://mastra.ai/reference/scorers/faithfulness): Documentation for the Faithfulness Scorer in Mastra, which evaluates the factual accuracy of LLM outputs compared to the provided context.
- [Reference: Hallucination Scorer | Scorers](https://mastra.ai/reference/scorers/hallucination): Documentation for the Hallucination Scorer in Mastra, which evaluates the factual correctness of LLM outputs by identifying contradictions with provided context.
- [Reference: Keyword Coverage Scorer | Scorers](https://mastra.ai/reference/scorers/keyword-coverage): Documentation for the Keyword Coverage Scorer in Mastra, which evaluates how well LLM outputs cover important keywords from the input.
- [Reference: MastraScorer | Scorers](https://mastra.ai/reference/scorers/mastra-scorer): Documentation for the MastraScorer base class in Mastra, which provides the foundation for all custom and built-in scorers.
- [Reference: Noise Sensitivity Scorer (CI/Testing Only) | Scorers](https://mastra.ai/reference/scorers/noise-sensitivity): Documentation for the Noise Sensitivity Scorer in Mastra. A CI/testing scorer that evaluates agent robustness by comparing responses between clean and noisy inputs in controlled test environments.
- [Reference: Prompt Alignment Scorer | Scorers](https://mastra.ai/reference/scorers/prompt-alignment): Documentation for the Prompt Alignment Scorer in Mastra. Evaluates how well agent responses align with user prompt intent, requirements, completeness, and appropriateness using multi-dimensional analysis.
- [Reference: runExperiment | Scorers](https://mastra.ai/reference/scorers/run-experiment): Documentation for the runExperiment function in Mastra, which enables batch evaluation of agents and workflows using multiple scorers.
- [Reference: Textual Difference Scorer | Scorers](https://mastra.ai/reference/scorers/textual-difference): Documentation for the Textual Difference Scorer in Mastra, which measures textual differences between strings using sequence matching.
- [Reference: Tone Consistency Scorer | Scorers](https://mastra.ai/reference/scorers/tone-consistency): Documentation for the Tone Consistency Scorer in Mastra, which evaluates emotional tone and sentiment consistency in text.
- [Reference: Tool Call Accuracy Scorers | Scorers](https://mastra.ai/reference/scorers/tool-call-accuracy): Documentation for the Tool Call Accuracy Scorers in Mastra, which evaluate whether LLM outputs call the correct tools from available options.
- [Reference: Toxicity Scorer | Scorers](https://mastra.ai/reference/scorers/toxicity): Documentation for the Toxicity Scorer in Mastra, which evaluates LLM outputs for racist, biased, or toxic elements.
- [Reference: Cloudflare D1 Storage | Storage](https://mastra.ai/reference/storage/cloudflare-d1): Documentation for the Cloudflare D1 SQL storage implementation in Mastra.
- [Reference: Cloudflare Storage | Storage](https://mastra.ai/reference/storage/cloudflare): Documentation for the Cloudflare KV storage implementation in Mastra.
- [Reference: DynamoDB Storage | Storage](https://mastra.ai/reference/storage/dynamodb): Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
- [Reference: LanceDB Storage | Storage](https://mastra.ai/reference/storage/lance): Documentation for the LanceDB storage implementation in Mastra.
- [Reference: LibSQL Storage | Storage](https://mastra.ai/reference/storage/libsql): Documentation for the LibSQL storage implementation in Mastra.
- [Reference: MongoDB Storage | Storage](https://mastra.ai/reference/storage/mongodb): Documentation for the MongoDB storage implementation in Mastra.
- [Reference: MSSQL Storage | Storage](https://mastra.ai/reference/storage/mssql): Documentation for the MSSQL storage implementation in Mastra.
- [Reference: PostgreSQL Storage | Storage](https://mastra.ai/reference/storage/postgresql): Documentation for the PostgreSQL storage implementation in Mastra.
- [Reference: Upstash Storage | Storage](https://mastra.ai/reference/storage/upstash): Documentation for the Upstash storage implementation in Mastra.
- [Reference: ChunkType | Streaming](https://mastra.ai/reference/streaming/ChunkType): Documentation for the ChunkType type used in Mastra streaming responses, defining all possible chunk types and their payloads.
- [Reference: MastraModelOutput | Streaming](https://mastra.ai/reference/streaming/agents/MastraModelOutput): Complete reference for MastraModelOutput - the stream object returned by agent.stream() with streaming and promise-based access to model outputs.
- [Reference: Agent.stream() | Streaming](https://mastra.ai/reference/streaming/agents/stream): Documentation for the `Agent.stream()` method in Mastra agents, which enables real-time streaming of responses with enhanced capabilities.
- [Reference: Agent.streamLegacy() (Legacy) | Streaming](https://mastra.ai/reference/streaming/agents/streamLegacy): Documentation for the legacy `Agent.streamLegacy()` method in Mastra agents. This method is deprecated and will be removed in a future version.
- [Reference: Run.observeStream() | Streaming](https://mastra.ai/reference/streaming/workflows/observeStream): Documentation for the `Run.observeStream()` method in workflows, which enables reopening the stream of an already active workflow run.
- [Reference: Run.observeStreamVNext() (Experimental) | Streaming](https://mastra.ai/reference/streaming/workflows/observeStreamVNext): Documentation for the `Run.observeStreamVNext()` method in workflows, which enables reopening the stream of an already active workflow run.
- [Reference: Run.resumeStreamVNext() (Experimental) | Streaming](https://mastra.ai/reference/streaming/workflows/resumeStreamVNext): Documentation for the `Run.resumeStreamVNext()` method in workflows, which enables real-time resumption and streaming of suspended workflow runs.
- [Reference: Run.stream() | Streaming](https://mastra.ai/reference/streaming/workflows/stream): Documentation for the `Run.stream()` method in workflows, which allows you to monitor the execution of a workflow run as a stream.
- [Reference: Run.streamVNext() (Experimental) | Streaming](https://mastra.ai/reference/streaming/workflows/streamVNext): Documentation for the `Run.streamVNext()` method in workflows, which enables real-time streaming of responses.
- [Reference: Run.timeTravelStream() | Streaming](https://mastra.ai/reference/streaming/workflows/timeTravelStream): Documentation for the `Run.timeTravelStream()` method for streaming workflow time travel execution.
- [Reference: Templates Overview | Templates](https://mastra.ai/reference/templates/overview): Complete guide to creating, using, and contributing Mastra templates.
- [Reference: MastraMCPClient (Deprecated) | Tools & MCP](https://mastra.ai/reference/tools/client): API Reference for MastraMCPClient - A client implementation for the Model Context Protocol.
- [Reference: createTool() | Tools & MCP](https://mastra.ai/reference/tools/create-tool): Documentation for the `createTool()` function in Mastra, used to define custom tools for agents.
- [Reference: createDocumentChunkerTool() | Tools & MCP](https://mastra.ai/reference/tools/document-chunker-tool): Documentation for the Document Chunker Tool in Mastra, which splits documents into smaller chunks for efficient processing and retrieval.
- [Reference: createGraphRAGTool() | Tools & MCP](https://mastra.ai/reference/tools/graph-rag-tool): Documentation for the Graph RAG Tool in Mastra, which enhances RAG by building a graph of semantic relationships between documents.
- [Reference: MCPClient | Tools & MCP](https://mastra.ai/reference/tools/mcp-client): API Reference for MCPClient - A class for managing multiple Model Context Protocol servers and their tools.
- [Reference: MCPServer | Tools & MCP](https://mastra.ai/reference/tools/mcp-server): API Reference for MCPServer - A class for exposing Mastra tools and capabilities as a Model Context Protocol server.
- [Reference: createVectorQueryTool() | Tools & MCP](https://mastra.ai/reference/tools/vector-query-tool): Documentation for the Vector Query Tool in Mastra, which facilitates semantic search over vector stores with filtering and reranking capabilities.
- [Reference: Astra Vector Store | Vectors](https://mastra.ai/reference/vectors/astra): Documentation for the AstraVector class in Mastra, which provides vector search using DataStax Astra DB.
- [Reference: Chroma Vector Store | Vectors](https://mastra.ai/reference/vectors/chroma): Documentation for the ChromaVector class in Mastra, which provides vector search using ChromaDB.
- [Reference: Couchbase Vector Store | Vectors](https://mastra.ai/reference/vectors/couchbase): Documentation for the CouchbaseVector class in Mastra, which provides vector search using Couchbase Vector Search.
- [Reference: Lance Vector Store | Vectors](https://mastra.ai/reference/vectors/lance): Documentation for the LanceVectorStore class in Mastra, which provides vector search using LanceDB, an embedded vector database based on the Lance columnar format.
- [Reference: LibSQLVector Store | Vectors](https://mastra.ai/reference/vectors/libsql): Documentation for the LibSQLVector class in Mastra, which provides vector search using LibSQL with vector extensions.
- [Reference: MongoDB Vector Store | Vectors](https://mastra.ai/reference/vectors/mongodb): Documentation for the MongoDBVector class in Mastra, which provides vector search using MongoDB Atlas and Atlas Vector Search.
- [Reference: OpenSearch Vector Store | Vectors](https://mastra.ai/reference/vectors/opensearch): Documentation for the OpenSearchVector class in Mastra, which provides vector search using OpenSearch.
- [Reference: PG Vector Store | Vectors](https://mastra.ai/reference/vectors/pg): Documentation for the PgVector class in Mastra, which provides vector search using PostgreSQL with pgvector extension.
- [Reference: Pinecone Vector Store | Vectors](https://mastra.ai/reference/vectors/pinecone): Documentation for the PineconeVector class in Mastra, which provides an interface to Pinecone's vector database.
- [Reference: Qdrant Vector Store | Vectors](https://mastra.ai/reference/vectors/qdrant): Documentation for integrating Qdrant with Mastra, a vector similarity search engine for managing vectors and payloads.
- [Reference: Amazon S3 Vectors Store | Vectors](https://mastra.ai/reference/vectors/s3vectors): Documentation for the S3Vectors class in Mastra, which provides vector search using Amazon S3 Vectors (Preview).
- [Reference: Turbopuffer Vector Store | Vectors](https://mastra.ai/reference/vectors/turbopuffer): Documentation for integrating Turbopuffer with Mastra, a high-performance vector database for efficient similarity search.
- [Reference: Upstash Vector Store | Vectors](https://mastra.ai/reference/vectors/upstash): Documentation for the UpstashVector class in Mastra, which provides vector search using Upstash Vector.
- [Reference: Cloudflare Vector Store | Vectors](https://mastra.ai/reference/vectors/vectorize): Documentation for the CloudflareVector class in Mastra, which provides vector search using Cloudflare Vectorize.
- [Reference: Azure | Voice](https://mastra.ai/reference/voice/azure): Documentation for the AzureVoice class, providing text-to-speech and speech-to-text capabilities using Azure Cognitive Services.
- [Reference: Cloudflare | Voice](https://mastra.ai/reference/voice/cloudflare): Documentation for the CloudflareVoice class, providing text-to-speech capabilities using Cloudflare Workers AI.
- [Reference: CompositeVoice | Voice](https://mastra.ai/reference/voice/composite-voice): Documentation for the CompositeVoice class, which enables combining multiple voice providers for flexible text-to-speech and speech-to-text operations.
- [Reference: Deepgram | Voice](https://mastra.ai/reference/voice/deepgram): Documentation for the Deepgram voice implementation, providing text-to-speech and speech-to-text capabilities with multiple voice models and languages.
- [Reference: ElevenLabs | Voice](https://mastra.ai/reference/voice/elevenlabs): Documentation for the ElevenLabs voice implementation, offering high-quality text-to-speech capabilities with multiple voice models and natural-sounding synthesis.
- [Reference: Google Gemini Live Voice | Voice](https://mastra.ai/reference/voice/google-gemini-live): Documentation for the GeminiLiveVoice class, providing real-time multimodal voice interactions using Google's Gemini Live API with support for both Gemini API and Vertex AI.
- [Reference: Google | Voice](https://mastra.ai/reference/voice/google): Documentation for the Google Voice implementation, providing text-to-speech and speech-to-text capabilities.
- [Reference: MastraVoice | Voice](https://mastra.ai/reference/voice/mastra-voice): Documentation for the MastraVoice abstract base class, which defines the core interface for all voice services in Mastra, including speech-to-speech capabilities.
- [Reference: Murf | Voice](https://mastra.ai/reference/voice/murf): Documentation for the Murf voice implementation, providing text-to-speech capabilities.
- [Reference: OpenAI Realtime Voice | Voice](https://mastra.ai/reference/voice/openai-realtime): Documentation for the OpenAIRealtimeVoice class, providing real-time text-to-speech and speech-to-text capabilities via WebSockets.
- [Reference: OpenAI | Voice](https://mastra.ai/reference/voice/openai): Documentation for the OpenAIVoice class, providing text-to-speech and speech-to-text capabilities.
- [Reference: PlayAI | Voice](https://mastra.ai/reference/voice/playai): Documentation for the PlayAI voice implementation, providing text-to-speech capabilities.
- [Reference: Sarvam | Voice](https://mastra.ai/reference/voice/sarvam): Documentation for the Sarvam class, providing text-to-speech and speech-to-text capabilities.
- [Reference: Speechify | Voice](https://mastra.ai/reference/voice/speechify): Documentation for the Speechify voice implementation, providing text-to-speech capabilities.
- [Reference: voice.addInstructions() | Voice](https://mastra.ai/reference/voice/voice.addInstructions): Documentation for the addInstructions() method available in voice providers, which adds instructions to guide the voice model's behavior.
- [Reference: voice.addTools() | Voice](https://mastra.ai/reference/voice/voice.addTools): Documentation for the addTools() method available in voice providers, which equips voice models with function calling capabilities.
- [Reference: voice.answer() | Voice](https://mastra.ai/reference/voice/voice.answer): Documentation for the answer() method available in real-time voice providers, which triggers the voice provider to generate a response.
- [Reference: voice.close() | Voice](https://mastra.ai/reference/voice/voice.close): Documentation for the close() method available in voice providers, which disconnects from real-time voice services.
- [Reference: voice.connect() | Voice](https://mastra.ai/reference/voice/voice.connect): Documentation for the connect() method available in real-time voice providers, which establishes a connection for speech-to-speech communication.
- [Reference: Voice Events | Voice](https://mastra.ai/reference/voice/voice.events): Documentation for events emitted by voice providers, particularly for real-time voice interactions.
- [Reference: voice.getSpeakers() | Voice Providers](https://mastra.ai/reference/voice/voice.getSpeakers): Documentation for the getSpeakers() method available in voice providers, which retrieves available voice options.
- [Reference: voice.listen() | Voice](https://mastra.ai/reference/voice/voice.listen): Documentation for the listen() method available in all Mastra voice providers, which converts speech to text.
- [Reference: voice.off() | Voice](https://mastra.ai/reference/voice/voice.off): Documentation for the off() method available in voice providers, which removes event listeners for voice events.
- [Reference: voice.on() | Voice](https://mastra.ai/reference/voice/voice.on): Documentation for the on() method available in voice providers, which registers event listeners for voice events.
- [Reference: voice.send() | Voice](https://mastra.ai/reference/voice/voice.send): Documentation for the send() method available in real-time voice providers, which streams audio data for continuous processing.
- [Reference: voice.speak() | Voice](https://mastra.ai/reference/voice/voice.speak): Documentation for the speak() method available in all Mastra voice providers, which converts text to speech.
- [Reference: voice.updateConfig() | Voice](https://mastra.ai/reference/voice/voice.updateConfig): Documentation for the updateConfig() method available in voice providers, which updates the configuration of a voice provider at runtime.
- [Reference: Run.cancel() | Workflows](https://mastra.ai/reference/workflows/run-methods/cancel): Documentation for the `Run.cancel()` method in workflows, which cancels a workflow run.
- [Reference: Run.restart() | Workflows](https://mastra.ai/reference/workflows/run-methods/restart): Documentation for the `Run.restart()` method in workflows, which restarts an active workflow run that lost connection to the server.
- [Reference: Run.resume() | Workflows](https://mastra.ai/reference/workflows/run-methods/resume): Documentation for the `Run.resume()` method in workflows, which resumes a suspended workflow run with new data.
- [Reference: Run.start() | Workflows](https://mastra.ai/reference/workflows/run-methods/start): Documentation for the `Run.start()` method in workflows, which starts a workflow run with input data.
- [Reference: Run.timeTravel() | Workflows](https://mastra.ai/reference/workflows/run-methods/timeTravel): Documentation for the `Run.timeTravel()` method in workflows, which re-executes a workflow from a specific step.
- [Reference: Run.watch() | Workflows](https://mastra.ai/reference/workflows/run-methods/watch): Documentation for the `Run.watch()` method in workflows, which allows you to monitor the execution of a workflow run.
- [Reference: Run Class | Workflows](https://mastra.ai/reference/workflows/run): Documentation for the Run class in Mastra, which represents a workflow execution instance.
- [Reference: Step Class | Workflows](https://mastra.ai/reference/workflows/step): Documentation for the Step class in Mastra, which defines individual units of work within a workflow.
- [Reference: Workflow.branch() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/branch): Documentation for the `Workflow.branch()` method in workflows, which creates conditional branches between steps.
- [Reference: Workflow.commit() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/commit): Documentation for the `Workflow.commit()` method in workflows, which finalizes the workflow and returns the final result.
- [Reference: Workflow.createRunAsync() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/create-run): Documentation for the `Workflow.createRunAsync()` method in workflows, which creates a new workflow run instance.
- [Reference: Workflow.dountil() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/dountil): Documentation for the `Workflow.dountil()` method in workflows, which creates a loop that executes a step until a condition is met.
- [Reference: Workflow.dowhile() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/dowhile): Documentation for the `Workflow.dowhile()` method in workflows, which creates a loop that executes a step while a condition is met.
- [Reference: Workflow.foreach() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/foreach): Documentation for the `Workflow.foreach()` method in workflows, which creates a loop that executes a step for each item in an array.
- [Reference: Workflow.map() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/map): Documentation for the `Workflow.map()` method in workflows, which maps output data from a previous step to the input of a subsequent step.
- [Reference: Workflow.parallel() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/parallel): Documentation for the `Workflow.parallel()` method in workflows, which executes multiple steps in parallel.
- [Reference: Workflow.sendEvent() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/sendEvent): Documentation for the `Workflow.sendEvent()` method in workflows, which resumes execution when an event is sent.
- [Reference: Workflow.sleep() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/sleep): Documentation for the `Workflow.sleep()` method in workflows, which pauses execution for a specified number of milliseconds.
- [Reference: Workflow.sleepUntil() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/sleepUntil): Documentation for the `Workflow.sleepUntil()` method in workflows, which pauses execution until a specified date.
- [Reference: Workflow.then() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/then): Documentation for the `Workflow.then()` method in workflows, which creates sequential dependencies between steps.
- [Reference: Workflow.waitForEvent() | Workflows](https://mastra.ai/reference/workflows/workflow-methods/waitForEvent): Documentation for the `Workflow.waitForEvent()` method in workflows, which pauses execution until an event is received.
- [Reference: Workflow Class | Workflows](https://mastra.ai/reference/workflows/workflow): Documentation for the `Workflow` class in Mastra, which enables you to create state machines for complex sequences of operations with conditional branching and data validation.

Mastra AI Framework

Mastra is a TypeScript framework for building AI-powered applications and agents. From the team behind Gatsby, it provides a unified architecture for creating AI systems that integrate with modern web frameworks like React, Next.js, and Node.js. The framework handles the complexity of orchestrating LLMs, tools, workflows, memory systems, and vector stores through a central dependency injection pattern.

The framework is built around modular components that work together seamlessly: agents for autonomous AI interactions, workflows for controlled multi-step processes, tools for extending agent capabilities, memory for conversation persistence, and storage adapters for data persistence. With support for 40+ LLM providers, 25+ vector stores (including Pinecone, Chroma, Qdrant, Astra, MongoDB, PostgreSQL, Upstash, Turbopuffer, LanceDB, Elasticsearch, OpenSearch, Couchbase, ClickHouse, S3Vectors, Cloudflare Vectorize, and more), built-in observability, and evaluation tools, Mastra provides everything needed to go from prototype to production while maintaining type safety and developer ergonomics.

Core APIs and Functions

Mastra Class - Central Orchestration Hub

The Mastra class serves as the dependency injection container and configuration registry for all framework components.

import { Mastra } from '@mastra/core';
import { Agent } from '@mastra/core/agent';
import { LibSQLStore } from '@mastra/libsql';
import { PineconeVector } from '@mastra/pinecone';
import { PinoLogger } from '@mastra/loggers';
import { openai } from '@ai-sdk/openai';

// Initialize with all components
const mastra = new Mastra({
  agents: {
    assistant: new Agent({
      name: 'assistant',
      instructions: 'You are a helpful AI assistant',
      model: openai('gpt-4o'),
      tools: { weatherTool, calculatorTool }
    }),
    researcher: new Agent({
      name: 'researcher',
      instructions: 'You research topics deeply',
      model: openai('gpt-4o-mini')
    })
  },

  storage: new LibSQLStore({
    url: 'file:./mastra.db'
  }),

  vectors: {
    knowledge: new PineconeVector({
      apiKey: process.env.PINECONE_API_KEY,
      indexName: 'knowledge-base'
    })
  },

  logger: new PinoLogger({
    name: 'MyApp',
    level: 'info'
  }),

  workflows: {
    dataProcessor: myWorkflow
  },

  observability: {
    default: { enabled: true }
  }
});

// Access registered components
const agent = mastra.getAgent('assistant');
const storage = mastra.getStorage();
const vector = mastra.getVector('knowledge');

// Graceful shutdown
process.on('SIGINT', async () => {
  await mastra.shutdown();
  process.exit(0);
});

Agent - Autonomous AI Entity

Agents use LLMs and tools to autonomously solve tasks through reasoning and iteration.

import { Agent } from '@mastra/core/agent';
import { Memory } from '@mastra/memory';
import { OpenAIVoice } from '@mastra/voice-openai';
import { openai } from '@ai-sdk/openai';
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

// Create tools for the agent
const weatherTool = createTool({
  id: 'get-weather',
  description: 'Get weather for a location',
  inputSchema: z.object({
    location: z.string(),
    units: z.enum(['celsius', 'fahrenheit']).optional()
  }),
  execute: async ({ context }) => {
    // fetchWeatherAPI is a placeholder for your own weather API call
    const weather = await fetchWeatherAPI(context.location);
    return {
      temperature: weather.temp,
      conditions: weather.description,
      humidity: weather.humidity
    };
  }
});

// Create agent with tools, memory, and voice
const weatherAgent = new Agent({
  name: 'weatherAgent',
  instructions: 'You help users with weather information worldwide',
  model: openai('gpt-4o'),
  tools: { weatherTool },
  memory: new Memory(),
  voice: new OpenAIVoice(),
  maxRetries: 2
});

// Text generation
const result = await weatherAgent.generate('What is the weather in Tokyo?');
console.log(result.text);

// Streaming response
const stream = await weatherAgent.stream('Compare weather in Tokyo and London');
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// With conversation memory
const response = await weatherAgent.generate('Remember that I live in Seattle', {
  memory: { thread: 'user-123' }
});

const followUp = await weatherAgent.generate('What is my local weather?', {
  memory: { thread: 'user-123' }
});
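
Because the agent above was configured with a voice provider, the documented voice.speak() and voice.listen() methods apply as well. A minimal sketch, assuming the provider is exposed as agent.voice:

// Text-to-speech through the agent's voice provider
const audio = await weatherAgent.voice.speak('The weather in Tokyo is sunny.');
// audio is a readable audio stream you can pipe to a speaker or save to a file

// Speech-to-text (audioInput is your own audio stream)
// const transcript = await weatherAgent.voice.listen(audioInput);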

Dynamic Agent Configuration

Agents support runtime-computed configuration based on context.

import { RuntimeContext } from '@mastra/core/runtime-context';

const dynamicAgent = new Agent({
  name: 'dynamic',

  // Instructions computed at runtime
  instructions: ({ runtimeContext }) => {
    const isPremium = runtimeContext.get('isPremiumUser');
    return isPremium
      ? 'You are an advanced AI with full capabilities'
      : 'You are a basic AI assistant';
  },

  // Model selection based on context
  model: ({ runtimeContext }) => {
    const isPremium = runtimeContext.get('isPremiumUser');
    return isPremium ? openai('gpt-4o') : openai('gpt-4o-mini');
  },

  // Tools computed dynamically
  tools: ({ runtimeContext }) => {
    const tools = { basicTool };
    if (runtimeContext.get('isPremiumUser')) {
      tools.advancedTool = premiumTool;
    }
    return tools;
  }
});

// Execute with runtime context
const context = new RuntimeContext();
context.set('isPremiumUser', true);

const result = await dynamicAgent.generate('Help me', {
  runtimeContext: context
});

Tool - Type-Safe Agent Capabilities

Tools extend agent capabilities with validated functions.

import { createTool } from '@mastra/core/tools';
import { z } from 'zod';
import fs from 'node:fs/promises';

// Simple tool
const calculatorTool = createTool({
  id: 'calculate',
  description: 'Perform mathematical calculations',
  inputSchema: z.object({
    operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
    a: z.number(),
    b: z.number()
  }),
  outputSchema: z.object({
    result: z.number()
  }),
  execute: async ({ context }) => {
    let result;
    switch (context.operation) {
      case 'add': result = context.a + context.b; break;
      case 'subtract': result = context.a - context.b; break;
      case 'multiply': result = context.a * context.b; break;
      case 'divide': result = context.a / context.b; break;
    }
    return { result };
  }
});

// Tool with Mastra integration
const saveTool = createTool({
  id: 'save-data',
  description: 'Save data to storage',
  inputSchema: z.object({
    key: z.string(),
    value: z.any()
  }),
  execute: async ({ context, mastra }) => {
    // Assumes the configured storage adapter exposes a key-value set() helper
    const storage = mastra?.getStorage();
    await storage?.set(context.key, context.value);
    return { saved: true };
  }
});

// Tool requiring approval
const deleteFileTool = createTool({
  id: 'delete-file',
  description: 'Delete a file from the system',
  requireApproval: true,
  inputSchema: z.object({
    filepath: z.string()
  }),
  execute: async ({ context }) => {
    await fs.unlink(context.filepath);
    return { deleted: true, filepath: context.filepath };
  }
});

// Use tools with agent
const agent = new Agent({
  name: 'assistant',
  instructions: 'You are a helpful assistant',
  model: openai('gpt-4o'),
  tools: {
    calculator: calculatorTool,
    save: saveTool,
    deleteFile: deleteFileTool
  }
});

Workflow - Graph-Based Multi-Step Execution

Workflows provide explicit control over multi-step processes with type safety.

import { createWorkflow, createStep } from '@mastra/core/workflows';
import { z } from 'zod';

// Define workflow steps
const fetchDataStep = createStep({
  id: 'fetch-data',
  description: 'Fetch data from API',
  inputSchema: z.object({
    userId: z.string()
  }),
  outputSchema: z.object({
    userData: z.object({
      name: z.string(),
      email: z.string()
    })
  }),
  execute: async ({ inputData }) => {
    const response = await fetch(`/api/users/${inputData.userId}`);
    const userData = await response.json();
    return { userData };
  }
});

const validateStep = createStep({
  id: 'validate',
  description: 'Validate user data',
  inputSchema: z.object({
    userData: z.object({
      name: z.string(),
      email: z.string()
    })
  }),
  outputSchema: z.object({
    isValid: z.boolean(),
    userData: z.object({
      name: z.string(),
      email: z.string()
    })
  }),
  execute: async ({ inputData }) => {
    const isValid = inputData.userData.email.includes('@');
    return { isValid, userData: inputData.userData };
  }
});

const saveStep = createStep({
  id: 'save',
  description: 'Save to database',
  inputSchema: z.object({
    userData: z.object({
      name: z.string(),
      email: z.string()
    })
  }),
  outputSchema: z.object({
    saved: z.boolean()
  }),
  execute: async ({ inputData }) => {
    await db.users.insert(inputData.userData);
    return { saved: true };
  }
});

// Create workflow with control flow
const userWorkflow = createWorkflow({
  id: 'process-user',
  description: 'Process user data',
  inputSchema: z.object({
    userId: z.string()
  }),
  outputSchema: z.object({
    saved: z.boolean()
  })
})
  .then(fetchDataStep)
  .then(validateStep)
  .branch([
    [
      async ({ inputData }) => inputData.isValid,
      saveStep
    ],
    [
      async ({ inputData }) => !inputData.isValid,
      createStep({
        id: 'reject',
        inputSchema: z.any(),
        outputSchema: z.object({ saved: z.boolean() }),
        execute: async () => ({ saved: false })
      })
    ]
  ])
  .commit();

// Execute workflow: create a run, then start it with input data
const run = await userWorkflow.createRunAsync();
const result = await run.start({
  inputData: { userId: '123' }
});

console.log('Workflow status:', result.status);

// Resume a suspended workflow run
const secondRun = await userWorkflow.createRunAsync();
const started = await secondRun.start({
  inputData: { userId: '456' }
});

if (started.status === 'suspended') {
  const resumed = await secondRun.resume({
    step: 'validate',
    resumeData: { approved: true }
  });
}

Workflow Advanced Patterns

Workflows support parallel execution, loops, and nested workflows.

const parallelStep1 = createStep({
  id: 'step-1',
  inputSchema: z.object({ text: z.string() }),
  outputSchema: z.object({ result: z.string() }),
  execute: async ({ inputData }) => ({ result: inputData.text + 'A' })
});

const parallelStep2 = createStep({
  id: 'step-2',
  inputSchema: z.object({ text: z.string() }),
  outputSchema: z.object({ result: z.string() }),
  execute: async ({ inputData }) => ({ result: inputData.text + 'B' })
});

const loopStep = createStep({
  id: 'loop',
  inputSchema: z.object({
    count: z.number(),
    text: z.string()
  }),
  outputSchema: z.object({
    count: z.number(),
    text: z.string()
  }),
  execute: async ({ inputData }) => ({
    count: inputData.count + 1,
    text: inputData.text + 'X'
  })
});

const complexWorkflow = createWorkflow({
  id: 'complex',
  inputSchema: z.object({ text: z.string() }),
  outputSchema: z.object({ text: z.string() })
})
  // Parallel execution
  .parallel([parallelStep1, parallelStep2])

  // Map parallel results
  .map(async ({ inputData }) => {
    const result1 = inputData['step-1'].result;
    const result2 = inputData['step-2'].result;
    return { text: result1 + result2, count: 0 };
  })

  // Loop until condition
  .dountil(
    loopStep,
    async ({ inputData }) => inputData.count >= 5
  )

  // Conditional branch
  .branch([
    [
      async ({ inputData }) => inputData.text.length > 10,
      createStep({
        id: 'long',
        inputSchema: z.any(),
        outputSchema: z.object({ text: z.string() }),
        execute: async ({ inputData }) => ({ text: inputData.text + '-LONG' })
      })
    ],
    [
      async ({ inputData }) => inputData.text.length <= 10,
      createStep({
        id: 'short',
        inputSchema: z.any(),
        outputSchema: z.object({ text: z.string() }),
        execute: async ({ inputData }) => ({ text: inputData.text + '-SHORT' })
      })
    ]
  ])
  .commit();

const run = await complexWorkflow.createRunAsync();
const result = await run.start({
  inputData: { text: 'Start' }
});
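
The reference list above also documents time- and event-based control flow (Workflow.sleep(), Workflow.sleepUntil(), Workflow.waitForEvent(), Workflow.sendEvent()). A brief sketch assuming the documented signatures; the event name and step body are illustrative:

const notifyStep = createStep({
  id: 'notify',
  inputSchema: z.object({ orderId: z.string() }),
  outputSchema: z.object({ done: z.boolean() }),
  execute: async () => ({ done: true })
});

const orderWorkflow = createWorkflow({
  id: 'order',
  inputSchema: z.object({ orderId: z.string() }),
  outputSchema: z.object({ done: z.boolean() })
})
  // Pause for one second, then wait for an external event before continuing
  .sleep(1000)
  .waitForEvent('payment-received', notifyStep)
  .commit();

// Elsewhere (e.g., a webhook handler), deliver the event to a running instance:
// await run.sendEvent('payment-received', { paid: true });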

Workflow Suspend and Resume

Workflows can suspend for human input and resume later.

const approvalStep = createStep({
  id: 'approval',
  description: 'Wait for approval',
  inputSchema: z.object({
    amount: z.number()
  }),
  outputSchema: z.object({
    approved: z.boolean(),
    amount: z.number()
  }),
  suspendSchema: z.object({
    reason: z.string(),
    amount: z.number()
  }),
  resumeSchema: z.object({
    approved: z.boolean()
  }),
  execute: async ({ inputData, resumeData, suspend }) => {
    // First time: suspend for approval
    if (!resumeData) {
      return await suspend({
        reason: 'Awaiting manager approval',
        amount: inputData.amount
      });
    }

    // After resume: continue with approval status
    return {
      approved: resumeData.approved,
      amount: inputData.amount
    };
  }
});

const purchaseWorkflow = createWorkflow({
  id: 'purchase',
  inputSchema: z.object({ amount: z.number() }),
  outputSchema: z.object({ approved: z.boolean() })
})
  .then(approvalStep)
  .commit();

// Start workflow
const run = await purchaseWorkflow.createRunAsync();
const result = await run.start({
  inputData: { amount: 5000 }
});

console.log('Status:', result.status); // 'suspended'

// Later: resume with approval
const resumed = await run.resume({
  step: 'approval',
  resumeData: { approved: true }
});

console.log('Final status:', resumed.status);
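
A run can also be observed while it executes via the documented Run.watch() method; a short sketch, with the event payload shape left as an assumption:

const watchedRun = await purchaseWorkflow.createRunAsync();

// Log every state change as the run executes
watchedRun.watch((event) => {
  console.log('run update:', event);
});

await watchedRun.start({ inputData: { amount: 250 } });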

Agent as Workflow Step

Agents can be wrapped as workflow steps for LLM-powered steps.

import { createStep } from '@mastra/core/workflows';

const researchAgent = new Agent({
  name: 'researcher',
  instructions: 'Research topics and provide detailed analysis',
  model: openai('gpt-4o')
});

// Wrap agent as step
const researchStep = createStep(researchAgent);

const researchWorkflow = createWorkflow({
  id: 'research',
  inputSchema: z.object({ prompt: z.string() }),
  outputSchema: z.object({ text: z.string() })
})
  .then(researchStep)
  .then(createStep({
    id: 'format',
    inputSchema: z.object({ text: z.string() }),
    outputSchema: z.object({ text: z.string() }),
    execute: async ({ inputData }) => ({
      text: `Research Report:\n${inputData.text}`
    })
  }))
  .commit();

const run = await researchWorkflow.createRunAsync();
const result = await run.start({
  inputData: { prompt: 'Research quantum computing' }
});

Memory - Conversation Persistence

Memory systems maintain conversation history and semantic recall.

import { Memory } from '@mastra/memory';
import { LibSQLStore } from '@mastra/libsql';
import { PineconeVector } from '@mastra/pinecone';

// Create memory with storage
const memory = new Memory({
  storage: new LibSQLStore({ url: 'file:./memory.db' }),
  vectors: {
    semantic: new PineconeVector({
      apiKey: process.env.PINECONE_API_KEY,
      indexName: 'conversations'
    })
  },
  options: {
    workingMemory: { enabled: true },
    semanticRecall: { enabled: true, topK: 5 }
  }
});

// Create agent with memory
const agent = new Agent({
  name: 'assistant',
  instructions: 'You are a helpful assistant with memory',
  model: openai('gpt-4o'),
  memory
});

// First conversation
await agent.generate('My name is Alice', {
  memory: { thread: 'user-123' }
});

// Agent remembers previous context
await agent.generate('What is my name?', {
  memory: { thread: 'user-123' }
});

// Save thread metadata
await memory.saveThread({
  id: 'user-123',
  userId: 'alice',
  metadata: { topic: 'introduction' }
});

// Get conversation history
const messages = await memory.getMessages({
  threadId: 'user-123'
});

// Semantic search across conversations
const relevant = await memory.semanticRecall({
  query: 'user preferences',
  threadId: 'user-123',
  topK: 5
});

Vector Stores - Semantic Search

Vector stores enable semantic search across documents and data.

import { PineconeVector } from '@mastra/pinecone';
import { openai } from '@ai-sdk/openai';
import { embedMany } from 'ai';

const vectorStore = new PineconeVector({
  apiKey: process.env.PINECONE_API_KEY
});

// Documents to index (assumes the 'knowledge-base' index already exists)
const documents = [
  { id: 'doc1', text: 'TypeScript is a typed superset of JavaScript', category: 'programming' },
  { id: 'doc2', text: 'React is a JavaScript library for building UIs', category: 'frameworks' }
];

// Generate embeddings with an AI SDK embedding model
const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: documents.map((doc) => doc.text)
});

// Index documents
await vectorStore.upsert({
  indexName: 'knowledge-base',
  vectors: embeddings,
  ids: documents.map((doc) => doc.id),
  metadata: documents.map((doc) => ({ text: doc.text, category: doc.category }))
});

// Semantic search: embed the query, then search by vector
const { embeddings: [queryVector] } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: ['What is TypeScript?']
});

const results = await vectorStore.query({
  indexName: 'knowledge-base',
  queryVector,
  topK: 5,
  filter: { category: 'programming' }
});

// Use with Mastra
const mastra = new Mastra({
  vectors: {
    knowledge: vectorStore
  }
});

const knowledgeBase = mastra.getVector('knowledge');

Input and Output Processors

Processors transform agent inputs and outputs for safety and customization.

import {
  PIIDetector,
  LanguageDetector,
  PromptInjectionDetector,
  ModerationProcessor
} from '@mastra/core/processors';
import { openai } from '@ai-sdk/openai';
import { google } from '@ai-sdk/google';

// Detect and redact PII
const piiDetector = new PIIDetector({
  model: openai('gpt-4o'),
  redactionMethod: 'mask',
  preserveFormat: true,
  includeDetections: true
});

// Language detection and translation
const languageDetector = new LanguageDetector({
  model: google('gemini-2.0-flash-001'),
  targetLanguages: ['en'],
  strategy: 'translate'
});

// Prompt injection defense
const promptInjectionDetector = new PromptInjectionDetector({
  model: google('gemini-2.0-flash-001'),
  strategy: 'block'
});

// Content moderation
const moderationProcessor = new ModerationProcessor({
  model: google('gemini-2.0-flash-001'),
  strategy: 'block',
  chunkWindow: 10
});

// Custom processor
const customProcessor = {
  name: 'add-context',
  processInput: async ({ messages, abort }) => {
    // Check for blocked content
    const hasBlockedWord = messages.some(msg =>
      msg.content.parts.some(part =>
        part.type === 'text' && part.text.includes('forbidden')
      )
    );

    if (hasBlockedWord) {
      abort('Request contains forbidden content');
    }

    // Add context to messages
    messages.push({
      id: crypto.randomUUID(),
      createdAt: new Date(),
      role: 'user',
      content: {
        format: 2,
        parts: [{
          type: 'text',
          text: 'Please respond professionally'
        }]
      }
    });

    return messages;
  }
};

// Use processors with agent
const safeAgent = new Agent({
  name: 'safe-agent',
  instructions: 'You are a helpful assistant',
  model: openai('gpt-4o'),
  inputProcessors: [
    piiDetector,
    languageDetector,
    promptInjectionDetector,
    customProcessor
  ],
  outputProcessors: [
    moderationProcessor
  ]
});

Model Configuration and Fallbacks

Configure models with fallback strategies for reliability.

import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// Single model configuration
const simpleAgent = new Agent({
  name: 'simple',
  instructions: 'You are a helpful assistant',
  model: openai('gpt-4o')
});

// Model with retries
const reliableAgent = new Agent({
  name: 'reliable',
  instructions: 'You are a helpful assistant',
  model: openai('gpt-4o'),
  maxRetries: 3
});

// Model fallback chain
const fallbackAgent = new Agent({
  name: 'fallback',
  instructions: 'You are a helpful assistant',
  model: [
    {
      model: openai('gpt-4o'),
      maxRetries: 2,
      enabled: true
    },
    {
      model: openai('gpt-4o-mini'),
      maxRetries: 2,
      enabled: true
    },
    {
      model: anthropic('claude-3-5-sonnet-20241022'),
      maxRetries: 1,
      enabled: true
    }
  ]
});

// Using the OpenAI Responses API
const responsesAgent = new Agent({
  name: 'responses',
  instructions: 'You are a helpful assistant',
  model: openai.responses('gpt-4o')
});

Observability and Tracing

Built-in observability for monitoring AI operations.

import { Mastra } from '@mastra/core';

const mastra = new Mastra({
  agents: { myAgent },
  workflows: { myWorkflow },

  observability: {
    default: {
      enabled: true
    }
  },

  logger: new PinoLogger({
    name: 'MyApp',
    level: 'info'
  })
});

// Tracing context for operations
const tracingContext = {
  traceId: 'trace-123',
  spanId: 'span-456'
};

await agent.generate('Hello', {
  tracingContext
});

// Get logs by run ID
const logs = await mastra.getLogsByRunId({
  runId: 'run-123',
  transportId: 'console',
  logLevel: 'INFO',
  page: 1,
  perPage: 50
});

Evaluation and Scoring

Assess agent and workflow quality with scorers.

import { createAnswerRelevancyScorer } from '@mastra/evals/scorers/prebuilt';
import { createScorer } from '@mastra/core/scores';
import { openai } from '@ai-sdk/openai';

// Built-in LLM scorer
const answerRelevance = createAnswerRelevancyScorer({
  model: openai('gpt-4o')
});

// Custom scorer (calculateSimilarity is a placeholder for your own comparison logic)
const accuracyScorer = createScorer({
  name: 'accuracy',
  description: 'Measure response accuracy'
}).generateScore(({ input, output, expected }) => {
  const similarity = calculateSimilarity(output, expected);
  return similarity;
});

// Use with agent
const agent = new Agent({
  name: 'eval-agent',
  instructions: 'You are a helpful assistant',
  model: openai('gpt-4o'),
  scorers: {
    relevance: { scorer: answerRelevance },
    accuracy: { scorer: accuracyScorer }
  }
});

// Scores run automatically
const result = await agent.generate('What is the capital of France?');

// Access scores
console.log('Scores:', result.scores);

// Register scorers globally
const mastra = new Mastra({
  scorers: {
    relevance: answerRelevance,
    accuracy: accuracyScorer
  }
});

const scorer = mastra.getScorer('relevance');
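
For batch evaluation, the reference list documents a runExperiment helper that scores many inputs against an agent or workflow at once. A hedged sketch; the import path and option names below are assumptions based on that reference:

// Hypothetical runExperiment usage; field names are assumptions
import { runExperiment } from '@mastra/core/scores';

const experiment = await runExperiment({
  target: agent,
  data: [
    { input: 'What is the capital of France?' },
    { input: 'What is the capital of Japan?' }
  ],
  scorers: [answerRelevance, accuracyScorer]
});

console.log(experiment.scores);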

MCP (Model Context Protocol) Integration

Connect agents to external MCP servers with MCPClient, and expose Mastra tools to other MCP clients with MCPServer.

import { MCPClient, MCPServer } from '@mastra/mcp';

// Connect to external MCP servers
// (the server package below is hypothetical; point this at a real MCP server)
const mcp = new MCPClient({
  servers: {
    weather: {
      command: 'npx',
      args: ['-y', 'weather-mcp-server']
    }
  }
});

// Provide the connected servers' tools to an agent
const agent = new Agent({
  name: 'assistant',
  instructions: 'You help with weather',
  model: openai('gpt-4o'),
  tools: await mcp.getTools()
});

// Expose your own Mastra tools as an MCP server
const weatherServer = new MCPServer({
  name: 'weather-server',
  version: '1.0.0',
  tools: { weatherTool }
});

// Register MCP servers with Mastra
const mastra = new Mastra({
  mcpServers: {
    weather: weatherServer
  }
});

Main Use Cases and Integration Patterns

Mastra excels at building AI agents that autonomously solve tasks by reasoning about goals, selecting appropriate tools, and iterating until completion. The agent system handles the complexity of LLM orchestration, tool execution, and conversation memory, making it straightforward to create chatbots, research assistants, data processors, and automated workflows. Agents integrate with web frameworks through streaming APIs, support human-in-the-loop patterns with suspend/resume, and maintain context through conversation history and semantic memory.
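
For example, a web route can forward an agent's textStream to the browser. A minimal sketch of a fetch-style handler, assuming a mastra instance is in scope; the route shape is illustrative, not framework-specific:

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const stream = await mastra.getAgent('assistant').stream(prompt);

  // Re-expose the agent's text stream as an HTTP response body
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of stream.textStream) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    }
  });

  return new Response(body, { headers: { 'Content-Type': 'text/plain' } });
}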

For applications requiring explicit control over execution flow, Mastra's workflow engine provides type-safe multi-step orchestration with branching, parallel execution, and loops. Workflows integrate seamlessly with agents to combine autonomous AI reasoning with deterministic control flow. The framework supports production deployments through pluggable storage adapters (PostgreSQL, LibSQL, MongoDB, DynamoDB, Cloudflare D1, and more), vector stores for RAG (Pinecone, Chroma, Qdrant, Astra, Turbopuffer, Upstash, LanceDB, Elasticsearch, OpenSearch, Couchbase, ClickHouse, S3Vectors, Cloudflare Vectorize, and 20+ others), and observability tools for monitoring and evaluation. All components work together through the central Mastra instance, providing dependency injection, configuration management, and lifecycle control for building, testing, and scaling AI applications.

MongoDB Documentation Monorepo

The MongoDB Documentation Monorepo is the centralized repository for all MongoDB documentation projects, located at https://github.com/10gen/docs-mongodb-internal (private repository). This monorepo consolidates 58 separate documentation properties including the MongoDB Server Manual, MongoDB Atlas documentation (Atlas, Atlas CLI, Atlas Government, Atlas Operator, Atlas Architecture), driver documentation for all supported programming languages (Node.js, Python/PyMongo, PyMongo Arrow, Java, C#, Go, Rust, Kotlin, Scala, PHP, Ruby, etc.), framework integrations (Django, Laravel, Entity Framework, Hibernate, Mongoid), developer tools (MongoDB Shell, Compass, VS Code extension, IntelliJ plugin, MCP Server, MongoDB Analyzer), cloud services (App Services, Realm, Kubernetes Operator, Voyage), data tools (Database Tools, BI Connector, Relational Migrator, Spark Connector, Kafka Connector, MongoSync), and other MongoDB products. The repository implements a unified build and testing infrastructure using the Snooty documentation platform, Next.js for rendering, automated code example testing across all supported languages (including OpenAPI specification testing), and GitHub Copilot prompt templates for documentation assistance.

The monorepo structure separates concerns into three primary directories: content/ for all documentation source files, code-example-tests/ for tested and validated code examples, and platform/ for shared build tooling and Next.js rendering infrastructure. Each documentation property in the content/ directory maintains its own structure with source files, configuration, GitHub workflows, and build settings, while sharing common platform components. The code example testing framework ensures all code snippets in documentation are runnable, tested, and validated through CI/CD pipelines before publication, with automated extraction of tested snippets using the Bluehawk markup tool. The repository implements a consistent versioning pattern across projects where many projects use current/ directories for latest stable versions, others include upcoming/ directories for future/unreleased versions, and some remain unversioned representing current documentation only. Documentation properties are organized systematically with 58 total properties in the content/ directory covering all MongoDB products, services, drivers, and tools.

Repository Structure

Monorepo organization with content, testing, and platform directories

The repository follows a three-tier architecture separating documentation content, code example testing infrastructure, and shared platform tooling. The content/ directory contains subdirectories for each documentation property (atlas, manual, node, python, etc.), each functioning as an independent documentation project with its own snooty.toml configuration, source files, and build settings. The code-example-tests/ directory houses language-specific testing suites with runnable examples organized by driver/product, while platform/ contains the Next.js rendering engine and shared build utilities. The root directory includes repo_sync.py for repository synchronization, deprecated_examples.json for tracking deprecated code examples, and .copier/config.yml for automated example distribution to external repositories.

# View top-level repository structure
ls
# Output: content/, code-example-tests/, platform/, .github/, .gitmodules,
#         README.md, context7.md, repo_sync.py, requirements.txt,
#         deprecated_examples.json, .drone.yml, package-lock.json, .copier/

# Browse all documentation properties
ls content/
# Output: 58 properties total including:
#         Core: manual/, atlas/, atlas-cli/, atlas-government/, atlas-operator/,
#               atlas-architecture/, app-services/, realm/, compass/, charts/,
#               database-tools/, ops-manager/, cloud-manager/, voyage/, datalake/
#         Drivers: node/, pymongo-driver/, pymongo-arrow-driver/, java/, golang/,
#                 csharp/, rust/, c-driver/, cpp-driver/, php-library/,
#                 ruby-driver/, scala-driver/, java-rs/
#         Frameworks: django-mongodb/, laravel-mongodb/, entity-framework/,
#                    hibernate/, mongoid/, kotlin/, kotlin-sync/
#         Tools: mongodb-shell/, mongodb-vscode/, mongodb-intellij/, mcp-server/,
#               mongodb-analyzer/, kubernetes-operator/, kubernetes/, kafka-connector/,
#               mongocli/, mongosync/, bi-connector/, relational-migrator/,
#               spark-connector/, drivers/, guides/, tools/
#         Meta: 404/, landing/, table-of-contents/, shared/, meta/, docs-platform/,
#               code-examples/

# View code example testing structure
ls code-example-tests/
# Output: command-line/mongosh/, csharp/driver/, go/driver/, go/atlas-sdk/,
#         java/driver-sync/, java/driver-reactive/, javascript/driver/,
#         python/pymongo/, openapi/, README.md, comparison-spec.md,
#         sample-data-utility-spec.md, processFiles.js, deprecated_examples.json
#         Each language directory contains: examples/, tests/, snip.js

# Check platform tooling
ls platform/
# Output: docs-nextjs/, snooty-ast-to-mdx/, tools/, template/,
#         package.json, pnpm-lock.yaml, pnpm-workspace.yaml, turbo.json,
#         README.md, netlify.sample.toml

Version Management Strategy

Consistent versioning pattern across documentation properties

The repository implements a standardized versioning strategy across documentation properties where many versioned projects maintain current/ directories for the latest stable production version and upcoming/ directories for future/unreleased versions. The MongoDB Server Manual uses a unique approach with a named manual/manual/ subdirectory as its current version alongside explicit version directories (v8.1/, v8.0/, v7.0/, v6.0/, v5.0/, v4.4/). Many projects remain unversioned to represent current documentation only. Versioned projects organize documentation by directory with clear separation between stable releases (current), future releases (upcoming), and archived versions (v7.0, v6.0, etc.).

# Check current version directories across all projects
ls content/*/current
# Output: Multiple directories including node/current, pymongo-driver/current,
#         atlas-cli/current, golang/current, rust/current, etc.

# Check upcoming version directories
ls content/*/upcoming
# Output: Multiple directories for future/unreleased versions

# View unversioned projects
ls content/ | grep -E "^(atlas|app-services|guides|compass|charts)$"
# Output: Projects without versioning represent current documentation

# Analyze version structure for a specific project
ls content/manual/
# Output: manual/, upcoming/, v8.1/, v8.0/, v7.0/, v6.0/, v5.0/, v4.4/
#         Shows: MongoDB Server manual has explicit version directories

Content Organization

Documentation properties as independent projects

Each documentation property in the content/ directory operates as an independent Snooty-based documentation project with its own configuration, source files, build workflows, and deployment settings. Properties maintain their own snooty.toml configuration defining project metadata, version numbers, intersphinx references, and content organization. Common properties include: manual/ (MongoDB Server), atlas/ (MongoDB Atlas), atlas variants (atlas-cli/, atlas-government/, atlas-operator/, atlas-architecture/), driver documentation (node/, pymongo-driver/, pymongo-arrow-driver/, java/, csharp/, golang/, rust/, kotlin/, kotlin-sync/, scala-driver/, php-library/, ruby-driver/, etc.), framework integrations (django-mongodb/, laravel-mongodb/, entity-framework/, hibernate/, mongoid/), developer tools (database-tools/, mongodb-shell/, compass/, mongodb-vscode/, mongodb-intellij/, mongodb-analyzer/, mcp-server/), platform services (app-services/, realm/, charts/, voyage/, datalake/), data tools (bi-connector/, relational-migrator/, spark-connector/, kafka-connector/, mongosync/, mongocli/), and meta directories (landing/, guides/, drivers/, tools/, shared/, docs-platform/, 404/, table-of-contents/, code-examples/, kubernetes/, meta/).

# Example: MongoDB Server Manual structure
ls content/manual/manual/
# Output: source/, snooty.toml, conf.py, Makefile
#         Shows: the current-version subdirectory of the Server Manual

# Example: Node.js driver documentation
ls content/node/
# Output: current/, upcoming/, examples/, .github/, README.md
#         Each version directory contains: source/, snooty.toml, config files, workflows

# Example: Atlas documentation
ls content/atlas/
# Output: source/, .github/, snooty.toml, README.md
#         Contains: comprehensive Atlas platform documentation

# View a documentation property configuration
cat content/manual/manual/snooty.toml
# Shows: name, title, version, intersphinx references, constants, banners
# Example snooty.toml structure (manual documentation)
name = "docs"
title = "MongoDB Manual"
version = "7.2"
intersphinx = [
    "https://www.mongodb.com/docs/atlas/objects.inv",
    "https://www.mongodb.com/docs/database-tools/objects.inv"
]

[constants]
version = "7.2"
release = "7.2.1"
atlas = "MongoDB Atlas"
mongosh = ":binary:`~bin.mongosh`"

Documentation Property Examples

Exploring versioned and unversioned project structures

Documentation properties follow different versioning strategies based on product needs. Versioned properties like driver documentation maintain explicit current/ and upcoming/ directories for stable and future releases, while unversioned properties like Atlas documentation maintain a single directory representing the latest version. The MongoDB Server Manual uses a specialized structure with named version subdirectories alongside a primary manual/ directory for current documentation.

# Example: Versioned driver documentation
ls content/node/
# Output: current/, upcoming/, examples/, .github/, README.md
#         Shows: Node.js driver with versioned documentation

ls content/pymongo-driver/
# Output: current/, upcoming/, .github/, README.md
#         Shows: Python driver with versioned documentation

# Example: Unversioned cloud documentation
ls content/atlas/
# Output: source/, .github/, snooty.toml, README.md
#         Shows: Atlas documentation without versioning

ls content/app-services/
# Output: source/, .github/, snooty.toml, README.md
#         Shows: App Services documentation without versioning

# Example: MongoDB Server manual with explicit versions
ls content/manual/
# Output: manual/, upcoming/, v8.1/, v8.0/, v7.0/, v6.0/, v5.0/, v4.4/
#         Shows: Multiple version directories for historical documentation

# Count all content properties
ls content/ | wc -l
# Output: 58 total documentation properties

Code Example Testing Framework

Automated validation of documentation code snippets

The code-example-tests/ directory implements a comprehensive testing infrastructure ensuring all code examples in documentation are runnable and produce expected results. Each language directory contains an examples/ subdirectory with standalone runnable code files organized by topic, a tests/ subdirectory with test suites that execute examples and verify outputs, and a snip.js script using Bluehawk markup to extract documentation-ready snippets. The framework now includes testing for OpenAPI specifications in addition to driver code examples. Tests run automatically via GitHub Actions workflows when examples are modified, preventing broken or incorrect code from reaching published documentation.

# View code testing structure
ls code-example-tests/
# Output: command-line/mongosh/, csharp/driver/, go/driver/, go/atlas-sdk/,
#         java/, javascript/, python/, openapi/, README.md, comparison-spec.md,
#         sample-data-utility-spec.md, processFiles.js

# Example: MongoDB Shell (mongosh) tests
ls code-example-tests/command-line/mongosh/
# Output: examples/, tests/, utils/, snip.js, package.json, README.md

# Browse example categories
ls code-example-tests/command-line/mongosh/examples/
# Output: aggregation/pipelines/filter/, aggregation/pipelines/group/,
#         aggregation/pipelines/join-one-to-one/, aggregation/pipelines/unwind/

# View test files
ls code-example-tests/command-line/mongosh/tests/
# Output: aggregation/pipelines/tutorials.test.js

# Example: OpenAPI testing structure
ls code-example-tests/openapi/
# Output: tests/, README.md, package.json, babel.config.cjs, jest.config.cjs

// File: code-example-tests/command-line/mongosh/examples/aggregation/pipelines/filter/load-data.js
// Load sample data for filter pipeline example
db.persons.insertMany([
  { name: "Jane Doe", age: 28, city: "New York" },
  { name: "John Smith", age: 35, city: "Boston" },
  { name: "Alice Johnson", age: 42, city: "Chicago" }
]);

// File: code-example-tests/command-line/mongosh/examples/aggregation/pipelines/filter/run-pipeline.js
// Filter persons over age 30
const result = db.persons.aggregate([
  { $match: { age: { $gt: 30 } } },
  { $project: { name: 1, age: 1, _id: 0 } },
  { $sort: { age: 1 } }
]);

printjson(result.toArray());

// File: code-example-tests/command-line/mongosh/examples/aggregation/pipelines/filter/output.sh
# Expected output:
[
  { "name": "John Smith", "age": 35 },
  { "name": "Alice Johnson", "age": 42 }
]

// File: code-example-tests/command-line/mongosh/tests/aggregation/pipelines/tutorials.test.js
const { testExamplesSequentially } = require('../../../utils/testExamplesSequentially');

describe('Aggregation Pipeline Tutorials', () => {
  test('filter tutorial produces expected output', async () => {
    const examplePath = 'examples/aggregation/pipelines/filter';
    await testExamplesSequentially(examplePath, [
      'load-data.js',
      'run-pipeline.js'
    ], 'output.sh');
  });
});

Multi-Language Code Example Testing

Expanded driver testing infrastructure across languages

The monorepo now includes comprehensive testing infrastructure for six primary language environments: JavaScript (Node.js), Python (PyMongo), Java, C#, Go, and the MongoDB Shell (mongosh), plus OpenAPI specification testing. Each language directory follows a consistent pattern with examples/ for runnable code, tests/ for validation suites, and utils/ or Utilities/ for shared testing libraries. Language-specific implementations leverage idiomatic frameworks (xUnit for C#, Jest for JavaScript, pytest for Python, JUnit for Java) while following the universal comparison specification to ensure consistent validation behavior across all languages. The OpenAPI testing validates API specifications for accuracy and completeness.

# View all language testing directories
ls code-example-tests/
# Output: command-line/mongosh/, csharp/driver/, go/driver/, go/atlas-sdk/,
#         java/driver-sync/, java/driver-reactive/, java/utilities/,
#         javascript/driver/, python/pymongo/, openapi/

# Each language maintains consistent structure
ls code-example-tests/python/pymongo/
# Output: examples/, tests/, utils/, snip.js, requirements.txt, pytest.ini

ls code-example-tests/javascript/driver/
# Output: examples/, tests/, utils/, snip.js, package.json, jest.config.js

ls code-example-tests/java/
# Output: driver-sync/, driver-reactive/, utilities/, README.md, pom.xml
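
The comparison specification lets expected-output files elide volatile values such as ObjectIds with ellipses. A hypothetical JavaScript sketch of that matching idea (the real engines live in each language's utils/ or Utilities/ directory):

// Hypothetical ellipsis-tolerant line matcher, for illustration only
function matchesExpected(actualLine, expectedLine) {
  if (expectedLine.trim() === '...') return true; // a bare ellipsis matches any line
  // Treat inline '...' as a wildcard; escape everything else for regex use
  const pattern = expectedLine
    .split('...')
    .map((part) => part.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'))
    .join('.*');
  return new RegExp(`^${pattern}$`).test(actualLine);
}

console.log(matchesExpected(
  '{ "_id": ObjectId("65f1c0aa"), "name": "Jane Doe" }',
  '{ "_id": ObjectId("..."), "name": "Jane Doe" }'
)); // true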

C# Driver Code Examples

.NET driver testing infrastructure

The C# driver documentation uses a .NET solution-based testing approach with separate projects for examples, tests, and utilities. The csharp/driver/ directory contains a Visual Studio solution (driver.sln) with the Examples project containing tutorial code, the Tests project using xUnit for validation, and Utilities projects providing comparison engines for validating output against expected results including MongoDB document format parsing, ellipsis pattern matching, and numeric type compatibility.

# C# testing structure
ls code-example-tests/csharp/driver/
# Output: Examples/, Tests/, Utilities/, driver.sln, snip.js, README.md

# View example tutorials
ls code-example-tests/csharp/driver/Examples/Aggregation/Pipelines/
# Output: Filter/, Group/, JoinOneToOne/, JoinMultiField/

# View testing utilities
ls code-example-tests/csharp/driver/Utilities/
# Output: Comparison/, SampleData/, Utilities.csproj
// File: code-example-tests/csharp/driver/Examples/Aggregation/Pipelines/Filter/Person.cs
namespace Examples.Aggregation.Pipelines.Filter;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string City { get; set; }
}

// File: code-example-tests/csharp/driver/Examples/Aggregation/Pipelines/Filter/Tutorial.cs
using MongoDB.Driver;

namespace Examples.Aggregation.Pipelines.Filter;

public class Tutorial
{
    public static void Run()
    {
        var client = new MongoClient(Environment.GetEnvironmentVariable("MONGODB_URI"));
        var database = client.GetDatabase("test");
        var collection = database.GetCollection<Person>("persons");

        // Insert sample data
        collection.InsertMany(new[]
        {
            new Person { Name = "Jane Doe", Age = 28, City = "New York" },
            new Person { Name = "John Smith", Age = 35, City = "Boston" },
            new Person { Name = "Alice Johnson", Age = 42, City = "Chicago" }
        });

        // Filter persons over age 30
        var pipeline = collection.Aggregate()
            .Match(p => p.Age > 30)
            .Project(p => new { p.Name, p.Age })
            .SortBy(p => p.Age);

        var results = pipeline.ToList();
        foreach (var result in results)
        {
            Console.WriteLine($"{result.Name}, {result.Age}");
        }
    }
}
// File: code-example-tests/csharp/driver/Tests/ExampleStubTest.cs
using Xunit;
using Examples.Aggregation.Pipelines.Filter;

namespace Tests;

public class FilterTutorialTest
{
    [Fact]
    public void FilterTutorialProducesExpectedOutput()
    {
        // Test validates tutorial runs successfully
        Tutorial.Run();

        // Compare output against expected results in TutorialOutput.txt
        var expectedOutput = File.ReadAllText("Examples/Aggregation/Pipelines/Filter/TutorialOutput.txt");
        // Validation logic using Utilities.Comparison
    }
}

Bluehawk Code Extraction

Snippet extraction from tested examples

The Bluehawk markup tool extracts documentation-ready code snippets from tested examples, removing test infrastructure code (connection string handling, test harness setup, assertions) while preserving the core functionality developers need to see. Each language directory includes a snip.js script that processes examples with Bluehawk markup directives (state, snippet, remove, etc.) and outputs clean snippets to content/code-examples/tested/language/driver/topic/. Documentation projects reference these extracted snippets via symlinks using RST literalinclude directives.

// File: code-example-tests/command-line/mongosh/snip.js
const { execSync } = require('child_process');

// Run Bluehawk to extract snippets
execSync('bluehawk snip examples/ -o ../../../content/code-examples/tested/mongosh/', {
  stdio: 'inherit'
});

// Example source with Bluehawk markup
// File: examples/aggregation/pipelines/filter/run-pipeline.js

// :snippet-start: filter-pipeline
const result = db.persons.aggregate([
  { $match: { age: { $gt: 30 } } },
  { $project: { name: 1, age: 1, _id: 0 } },
  { $sort: { age: 1 } }
]);
// :snippet-end:

// :remove-start:
// Test infrastructure code (removed from docs)
printjson(result.toArray());
// :remove-end:

// Generates: content/code-examples/tested/mongosh/aggregation/pipelines/filter-pipeline.js
// Contains only the aggregation pipeline without test code
.. Example RST usage in documentation
.. File: content/manual/manual/source/aggregation/pipelines.txt

Filter Documents Example
------------------------

The following example filters documents where age is greater than 30:

.. literalinclude:: /code-examples/tested/mongosh/aggregation/pipelines/filter-pipeline.js
   :language: javascript
   :copyable: true
   :dedent:

Platform Infrastructure

Next.js rendering and build tooling

The platform/ directory contains shared infrastructure for building and rendering MongoDB documentation using Next.js and the Snooty AST (Abstract Syntax Tree) format. The docs-nextjs/ package implements the Next.js-based documentation rendering engine converting Snooty AST to React components, while snooty-ast-to-mdx/ handles transformation of Snooty AST nodes to MDX format. The platform uses pnpm workspaces for monorepo management and Turbo for orchestrating parallel builds across multiple documentation properties. The infrastructure requires Node.js v24 and uses pnpm@10.11.1 as the package manager with turbo@2.5.4 for build orchestration.

# Platform structure
ls platform/
# Output: docs-nextjs/, snooty-ast-to-mdx/, tools/, template/,
#         package.json, pnpm-workspace.yaml, turbo.json

# View Next.js documentation renderer
ls platform/docs-nextjs/
# Contains: Next.js application for rendering Snooty AST documentation

# Check workspace configuration
cat platform/package.json
# Shows: Workspace scripts, shared dependencies, build commands

# View monorepo build configuration
cat platform/turbo.json
# Shows: Build pipeline configuration for parallel builds
// File: platform/package.json
{
  "name": "platform",
  "private": true,
  "packageManager": "pnpm@10.11.1",
  "scripts": {
    "dev": "turbo run dev",
    "build": "turbo run build",
    "start": "turbo run start",
    "test": "turbo run test",
    "lint": "turbo run lint",
    "ingest:all": "turbo run ingest:all",
    "ingest:pages": "turbo run ingest:pages"
  },
  "devDependencies": {
    "jest": "^30.0.0",
    "turbo": "^2.5.4"
  }
}
# File: platform/pnpm-workspace.yaml
packages:
  - 'docs-nextjs'
  - 'snooty-ast-to-mdx'
  - 'tools/*'
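
The turbo.json contents are not shown above; the following is a plausible sketch assuming standard Turborepo 2.x task conventions (the actual task graph may differ):

// File: platform/turbo.json (hypothetical sketch)
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"]
    },
    "dev": { "cache": false, "persistent": true },
    "test": { "dependsOn": ["build"] },
    "lint": {}
  }
}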

Automated CI/CD Workflows

GitHub Actions testing and deployment

Each documentation property includes .github/workflows/ with automated testing, building, and deployment pipelines. The monorepo root contains 23 shared workflows for cross-language code example testing, formatting checks, and deployment automation. The workflows include language-specific code example testing in Docker containers (mongosh-examples-test-in-docker.yml, csharp-driver-examples-test-in-docker.yml, go-driver-examples-test-in-docker.yml, java-driver-sync-examples-test-in-docker.yml, node-driver-examples-test-in-docker.yml, pymongo-driver-examples-test-in-docker.yml), formatting verification for driver examples (csharp-driver-examples-check-formatting.yml, go-driver-examples-check-formatting.yml, java-driver-sync-examples-check-formatting.yml, node-driver-examples-check-formatting.yml), OpenAPI specification validation (openapi-tests.yml), repository synchronization (repo-sync.yml), unit testing (next-js-unit-tests.yml, go-sdk-examples-unit-tests.yml), build automation (trigger-branch-build.yml, trigger-preprd-build.yml, trigger-prod-build.yml, build-bump-rm-api.yml), table of contents testing (toc-test.yml), branch management (cleanup-temp-branch.yml, generate-branch-name.yml), notifications (notify-devdocs.yml), and automated labeling (labeler.yml). Each language's testing workflow runs in isolated Docker environments with MongoDB services to ensure consistent, reproducible test execution across all code examples. Formatting checks are implemented for C#, Go, Java, and Node.js drivers but not for Python/PyMongo examples.

# View monorepo-level workflows
ls .github/workflows/
# Output: build-bump-rm-api.yml, cleanup-temp-branch.yml,
#         csharp-driver-examples-check-formatting.yml,
#         csharp-driver-examples-test-in-docker.yml,
#         generate-branch-name.yml,
#         go-driver-examples-check-formatting.yml,
#         go-driver-examples-test-in-docker.yml,
#         go-sdk-examples-unit-tests.yml,
#         java-driver-sync-examples-check-formatting.yml,
#         java-driver-sync-examples-test-in-docker.yml,
#         labeler.yml, mongosh-examples-test-in-docker.yml,
#         next-js-unit-tests.yml,
#         node-driver-examples-check-formatting.yml,
#         node-driver-examples-test-in-docker.yml,
#         notify-devdocs.yml, openapi-tests.yml,
#         pymongo-driver-examples-test-in-docker.yml,
#         repo-sync.yml, toc-test.yml, trigger-branch-build.yml,
#         trigger-preprd-build.yml, trigger-prod-build.yml

# View workflows for a documentation property
ls content/app-services/.github/workflows/
# Output: check-links.yml, check-openapi-admin-v3.yml, check-redirects.yml,
#         test-data-api.yml, test-functions.yml, find-unused.yml

# Example workflow files
cat content/app-services/.github/workflows/test-data-api.yml
# Shows: Automated testing for Data API code examples

cat content/app-services/.github/workflows/check-links.yml
# Shows: Link validation across documentation
# Example GitHub Actions workflow for MongoDB Shell examples
# File: .github/workflows/mongosh-examples-test-in-docker.yml
name: Test MongoDB Shell Examples

on:
  pull_request:
    paths:
      - 'code-example-tests/command-line/mongosh/**'

jobs:
  test-mongosh:
    runs-on: ubuntu-latest
    services:
      mongodb:
        image: mongo:7.0
        ports:
          - 27017:27017
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        working-directory: code-example-tests/command-line/mongosh
        run: npm install
      - name: Run tests
        working-directory: code-example-tests/command-line/mongosh
        run: npm test
        env:
          MONGODB_URI: mongodb://mongodb:27017

# Additional workflows exist for: csharp, go, java, node, pymongo
# Each uses Docker containers and language-specific test frameworks

Repository Synchronization

Automated syncing with public repositories

The repo_sync.py script implements automated synchronization between the internal monorepo and public documentation repositories using GitHub App authentication. This allows the internal monorepo to remain private while pushing updates to public repositories like mongodb/docs for the MongoDB Manual. The script uses the PyGithub library to authenticate as a GitHub App installation, obtains access tokens, and pushes specific branches to target repositories.

# File: repo_sync.py
import subprocess
from typing_extensions import Annotated
import typer
import github

def get_installation_access_token(app_id: int, private_key: str,
                                  installation_id: int) -> str:
    """
    Obtain an installation access token using JWT.
    """
    integration = github.GithubIntegration(app_id, private_key)
    auth = integration.get_access_token(installation_id)
    assert auth
    assert auth.token
    return auth.token

def main(branch: Annotated[str, typer.Option(envvar="GITHUB_REF_NAME")],
         app_id: Annotated[int, typer.Option(envvar="APP_ID")],
         installation_id: Annotated[int, typer.Option(envvar="INSTALLATION_ID")],
         server_docs_private_key: Annotated[str, typer.Option(envvar="SERVER_DOCS_PRIVATE_KEY")]):

    access_token = get_installation_access_token(app_id, server_docs_private_key, installation_id)
    git_destination_url_with_token = f"https://x-access-token:{access_token}@github.com/mongodb/docs.git"

    subprocess.run(["git", "config", "--unset-all", "http.https://github.com/.extraheader"], check=True)
    subprocess.run(["git", "push", git_destination_url_with_token, branch], check=True)

if __name__ == "__main__":
    typer.run(main)
# Run repository synchronization
python repo_sync.py \
  --branch main \
  --app-id $APP_ID \
  --installation-id $INSTALLATION_ID \
  --server-docs-private-key "$SERVER_DOCS_PRIVATE_KEY"

# Output: Pushes current branch to mongodb/docs repository

GitHub Copilot Prompt Templates

AI-assisted documentation authoring

The .github/prompts/ directory contains GitHub Copilot prompt templates that provide specialized AI assistance for documentation tasks. These templates standardize common documentation workflows including converting content to reStructuredText format (convert-to-rst.prompt.md), checking adherence to MongoDB style guidelines (style-guide-check.prompt.md), creating reusable content includes (create-includes.prompt.md), converting inline code blocks to literalinclude directives (code-block-to-literalinclude.prompt.md), and verifying source constant substitution (source-constant-substitution-check.prompt.md). Product-specific prompts exist for Atlas and Grove documentation. The templates embed MongoDB's documentation standards including ASCII character sets, 72-character line limits, proper reStructuredText formatting with 2-space indents, heading underline conventions, and correct usage of directives like procedures, tabs, admonitions, and literalinclude.

# View available Copilot prompt templates
ls .github/prompts/
# Output: convert-to-rst.prompt.md, style-guide-check.prompt.md,
#         create-includes.prompt.md, code-block-to-literalinclude.prompt.md,
#         source-constant-substitution-check.prompt.md, cleanup-prompt.md,
#         targeted-edits.md, context-prompt.md, README.md, atlas/, grove/

# View RST conversion prompt template
cat .github/prompts/convert-to-rst.prompt.md
# Shows: Comprehensive RST formatting guidelines, directive syntax,
#        MongoDB documentation standards, heading conventions
# Example prompt template structure (convert-to-rst.prompt.md excerpt)
You will convert some text into properly formatted .rST for MongoDB's documentation.

MongoDB rST Standards:
- Character Set: ASCII (to avoid errors with Unicode)
- Line length: Hard breaks at 72 characters
- Line endings: UNIX (LF)
- Naming Convention: kebab-case for variables and files
- Indents: 2 spaces per level; 3 spaces under a directive

Common Inline Markup:
- Bold: **text**
- Italic: *text*
- Monospace: ``text``

Procedures:
.. procedure::
   :style: normal

   .. step:: Step 1

      Description

Code Example Copier Configuration

Automated example distribution to external repositories

The monorepo includes a copier configuration system in the .copier/ directory for automating code example distribution to external repositories. The configuration file .copier/config.yml defines workflow rules with support for complex file transformations (move, copy, glob patterns), enabling distribution of code examples from the monorepo to external artifact repositories like the Atlas Architecture Go SDK repository. Workflows specify transformation patterns, target repositories, path transformations, pull request templates, and deprecation checking (via deprecated_examples.json) to ensure examples remain current across all distribution points. The copier system is webhook-driven, automatically detecting merged PRs and copying matching files to destination repositories based on configured workflows. The monorepo also includes processFiles.js for batch processing code example files and sample-data-utility-spec.md for standardizing test data generation across languages.

# View copier configuration directory
ls .copier/
# Output: config.yml, README.md

# Read copier configuration
cat .copier/config.yml
# Shows: Workflow definitions with transformations and commit strategies

# Check deprecated examples tracking
cat deprecated_examples.json
# Shows: Array of deprecated code example paths (currently empty)
# File: .copier/config.yml
# Referenced from main config file in https://github.com/grove-platform/github-copier

defaults:
  exclude:
    - "**/.env"
    - "**/node_modules/**"
  deprecation_check:
    enabled: true
    file: "deprecated_examples.json"

workflows:
  # Go SDK Project Examples → Artifact Repo (Architecture Center)
  - name: "atlas-sdk-go-project-examples"
    destination:
      repo: "mongodb/atlas-architecture-go-sdk"
      branch: "main"

    transformations:
      - glob:
          pattern: "content/code-examples/tested/go/atlas-sdk/project-copy/**/*"
          transform: "${relative_path}"

    commit_strategy:
      type: "pull_request"
      pr_title: "Update Atlas SDK Go examples from ${source_repo} PR ${pr_number}"
      pr_body: |
        Automated update of Atlas SDK Go project examples

        - Source: ${source_repo}
        - Source PR: #${pr_number}
        - Commit: ${commit_sha}

        **Changes:**
        - Files updated: ${file_count}
      commit_message: "Automated copy from ${source_repo} PR #${pr_number}"
      auto_merge: false
# How copier works:
# 1. PR merged to main branch
# 2. Webhook notifies copier service
# 3. Copier checks if changed files match any workflow patterns
# 4. If match found, creates PR in destination repository with copied files
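
A simplified sketch of the matching step (step 3), assuming minimatch-style glob semantics; the actual copier service lives in the grove-platform/github-copier repository and its internals are not shown here:

// Hypothetical sketch of copier workflow matching (not the actual implementation)
import { minimatch } from 'minimatch';

interface Workflow {
  name: string;
  transformations: { glob: { pattern: string; transform: string } }[];
}

function matchChangedFiles(changedFiles: string[], workflows: Workflow[]) {
  // For each workflow, collect the changed files whose paths match any
  // of the workflow's glob patterns; workflows with no matches are dropped
  return workflows
    .map((workflow) => ({
      workflow,
      files: changedFiles.filter((file) =>
        workflow.transformations.some((t) => minimatch(file, t.glob.pattern))
      ),
    }))
    .filter(({ files }) => files.length > 0);
}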

Documentation Symlinks

Cross-project code example access

Documentation projects access tested code examples through symlinks from their source/code-examples/ directories to the central content/code-examples/tested/ directory. This architecture avoids duplicating code examples across projects while enabling the Snooty build system to access snippets as if they were local to each project. Symlinks must be created for each documentation version directory, with relative paths adjusted based on directory nesting depth.

# Check if symlink exists for a documentation project
cd content/manual/manual/source/code-examples
ls -l tested
# Output: tested -> ../../../../code-examples/tested

# Verify symlink target
realpath tested
# Output: /path/to/repo/content/code-examples/tested

# Create new symlink for a documentation project
# (one fewer ../ than the manual example above, because this
# project is nested one level shallower)
cd content/node/source/code-examples
ln -s ../../../code-examples/tested tested

# Verify creation
ls -l tested
# Output: tested -> ../../../code-examples/tested
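
Because the relative prefix depends on nesting depth, a small Node helper can compute it; a hypothetical sketch (not part of the repo):

// Hypothetical helper: create the code-examples symlink for any project depth
import fs from 'node:fs';
import path from 'node:path';

function linkTestedExamples(projectCodeExamplesDir: string, repoRoot: string) {
  const target = path.join(repoRoot, 'content', 'code-examples', 'tested');
  // Compute the relative path from the project's code-examples directory,
  // so the symlink works regardless of how deep the project is nested
  const relativeTarget = path.relative(projectCodeExamplesDir, target);
  fs.symlinkSync(relativeTarget, path.join(projectCodeExamplesDir, 'tested'));
}

// linkTestedExamples('content/node/source/code-examples', '.')
// creates: tested -> ../../../code-examples/tested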
.. Example using symlinked code examples in documentation
.. File: content/manual/manual/source/tutorial/aggregation.txt

Filter Documents with Aggregation
----------------------------------

The following example demonstrates filtering documents:

.. literalinclude:: /code-examples/tested/mongosh/aggregation/pipelines/filter-pipeline.js
   :language: javascript
   :copyable: true
   :dedent:

For more information, see :manual:`Aggregation Pipeline </core/aggregation-pipeline>`.

Monorepo Testing Strategy

Comprehensive validation across all documentation properties

The monorepo implements a multi-layered testing strategy combining code example validation, link checking, content validation, build verification, and deployment testing. Code example tests run language-specific test suites validating that snippets compile, execute, and produce expected outputs. Link checking validates internal cross-references and external URLs across all properties. Build verification ensures each property builds successfully with the platform infrastructure. Integration tests validate the complete documentation pipeline from source to rendered output.

# Run code example tests for specific language
cd code-example-tests/command-line/mongosh
npm install
npm test
# Output: Runs all MongoDB Shell example tests

# Run C# driver tests
cd code-example-tests/csharp/driver
dotnet test
# Output: Runs all C# driver example tests with xUnit

# Example test output
PASS tests/aggregation/pipelines/tutorials.test.js
  Aggregation Pipeline Tutorials
    ✓ filter tutorial produces expected output (1234ms)
    ✓ group tutorial produces expected output (987ms)
    ✓ join one-to-one tutorial produces expected output (1567ms)

Test Suites: 1 passed, 1 total
Tests:       3 passed, 3 total

Universal Comparison Testing Specification

Cross-language output validation framework

The code-example-tests/comparison-spec.md defines a comprehensive, language-agnostic specification for validating code example outputs across all programming languages. This specification enables consistent comparison behavior across JavaScript, Python, Java, C#, Go, and other language implementations, providing advanced features like MongoDB type normalization, ellipsis pattern matching, field value ignoring, and flexible array comparison strategies. The specification handles MongoDB Extended JSON format, BSON types (ObjectId, Decimal128, Date), numeric type compatibility, and supports both ordered and unordered array comparisons with backtracking algorithms for complex nested structures.

# View comparison specification
cat code-example-tests/comparison-spec.md
# Contains: 1200+ line specification with pseudocode algorithms

# Key features documented in specification:
# - MongoDB type normalization (ObjectId, Decimal128, Date)
# - Ellipsis patterns: property-level ("..."), array-level (["..."]), object-level
# - Field value ignoring for dynamic fields (_id, timestamps, UUIDs)
# - Array comparison strategies (ordered, unordered, backtracking, hybrid)
# - Cross-language implementation guidelines
# - Error handling and reporting patterns
// Example comparison scenarios from specification

// Scenario 1: MongoDB type normalization
expected: {_id: ObjectId("507f1f77bcf86cd799439011"), amount: Decimal128("123.45")}
actual:   {_id: "507f1f77bcf86cd799439011", amount: "123.45"}
result:   true  // Types normalized to strings for comparison

// Scenario 2: Field value ignoring
expected: {_id: "any-id-1", name: "John", timestamp: "2024-01-01T00:00:00Z"}
actual:   {_id: "different-id-2", name: "John", timestamp: "2024-01-02T12:30:00Z"}
options:  {ignoreFieldValues: ["_id", "timestamp"]}
result:   true  // Dynamic fields ignored, only name compared

// Scenario 3: Ellipsis array matching
expected: [1, "...", 4]
actual:   [1, 2, 3, 4]
result:   true  // "..." matches any number of elements

// Scenario 4: Truncated string matching
expected: {message: "Error: Connection failed..."}
actual:   {message: "Error: Connection failed after 3 retries to server"}
result:   true  // "..." suffix allows partial matching
// Core comparison algorithm from specification
function compareValues(expected, actual, options, hasOmittedFields):
    // Step 1: Handle ellipsis patterns first
    if expected == "...":
        return true
    if expected ends with "...":
        return actual.startsWith(expected.slice(0, -3))

    // Step 2: Handle null/undefined cases
    if expected == null OR actual == null:
        return expected === actual

    // Step 3: Normalize MongoDB types
    expected = normalizeMongoTypes(expected)
    actual = normalizeMongoTypes(actual)

    // Step 4: Route to specialized comparison
    if both are arrays:
        return compareArrays(expected, actual, options)
    if both are objects:
        return compareObjects(expected, actual, options)

    // Step 5: Primitive comparison
    return expected === actual
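
A minimal TypeScript rendering of this pseudocode, with MongoDB type normalization, array-level ellipsis, and ignoreFieldValues omitted for brevity (the real implementations also handle unordered arrays and backtracking):

// Simplified sketch of the spec's compareValues (not a full implementation)
function compareValues(expected: unknown, actual: unknown): boolean {
  // Step 1: ellipsis patterns first
  if (expected === '...') return true; // property-level ellipsis matches anything
  if (typeof expected === 'string' && typeof actual === 'string' && expected.endsWith('...')) {
    return actual.startsWith(expected.slice(0, -3)); // truncated-string matching
  }
  // Step 2: null/undefined cases
  if (expected == null || actual == null) return expected === actual;
  // Step 3 (type normalization) omitted in this sketch
  // Step 4: route to specialized comparison
  if (Array.isArray(expected) && Array.isArray(actual)) {
    // Ordered comparison; array-level "..." handling omitted for brevity
    return (
      expected.length === actual.length &&
      expected.every((item, i) => compareValues(item, actual[i]))
    );
  }
  if (
    typeof expected === 'object' && typeof actual === 'object' &&
    !Array.isArray(expected) && !Array.isArray(actual)
  ) {
    const e = expected as Record<string, unknown>;
    const a = actual as Record<string, unknown>;
    return Object.keys(e).every((key) => compareValues(e[key], a[key]));
  }
  // Step 5: primitive comparison
  return expected === actual;
}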

Comparison Testing Framework

Advanced output validation for C# examples

The C# testing infrastructure includes sophisticated comparison utilities in Utilities/Comparison/ enabling flexible validation of example outputs against expected results. The framework supports MongoDB Extended JSON parsing, ellipsis patterns for partial matching (allowing ... in expected output to match any content), numeric type compatibility (matching int32 against int64), date normalization, and JSON structure comparison. This allows tests to validate example behavior without requiring exact output matches including object IDs, timestamps, or system-specific values.

// File: code-example-tests/csharp/driver/Utilities/Comparison/Expect.cs (conceptual)
namespace Utilities.Comparison;

public class Expect
{
    public static void OutputMatches(string actualOutput, string expectedFilePath)
    {
        var expectedOutput = File.ReadAllText(expectedFilePath);
        var comparisonEngine = new ComparisonEngine();
        var result = comparisonEngine.Compare(actualOutput, expectedOutput);

        if (!result.IsMatch)
        {
            throw new ComparisonException($"Output mismatch: {result.Differences}");
        }
    }
}

// Example expected output with ellipsis patterns
// File: Examples/Aggregation/Pipelines/Filter/TutorialOutput.txt
[
  { "name": "John Smith", "age": 35, "_id": { "$oid": "..." } },
  { "name": "Alice Johnson", "age": 42, "_id": { "$oid": "..." } }
]
// The "..." pattern matches any object ID value

// File: Tests/ExampleTest.cs
[Fact]
public void FilterTutorialTest()
{
    var output = CaptureOutput(() => FilterTutorial.Run());
    Expect.OutputMatches(output, "Examples/Aggregation/Pipelines/Filter/TutorialOutput.txt");
}

Summary

The MongoDB Documentation Monorepo represents a comprehensive consolidation of all MongoDB documentation properties into a unified repository with shared infrastructure, centralized code example testing, and consistent build processes. This architecture eliminates duplication across 58 separate documentation repositories while maintaining independent versioning, deployment, and content management for each property. The monorepo combines documentation source content in content/ directories covering server documentation, drivers for all major languages, framework integrations, developer tools, and meta directories (landing pages, guides, shared resources); runnable and tested code examples in code-example-tests/ with language-specific test suites for JavaScript, Python, Java, C#, Go, MongoDB Shell, and OpenAPI specifications; shared platform infrastructure in platform/ including the Next.js rendering engine (Node.js v24, pnpm@10.11.1, turbo@2.5.4) and Snooty AST processing tools; GitHub Copilot prompt templates for AI-assisted documentation authoring; comprehensive version management where versioned projects maintain current/ directories for latest stable versions and upcoming/ directories for future releases, while unversioned projects represent current documentation; and deprecated_examples.json for tracking deprecated code examples across the copier distribution system configured in .copier/config.yml.

Primary use cases include: documentation writers authoring content for any MongoDB product with access to centrally tested code examples and AI-assisted formatting tools; developers contributing code examples that are automatically validated through Docker-based testing workflows (with formatting checks for C#, Go, Java, and Node.js) and distributed to external repositories via the copier system; platform engineers maintaining shared build and rendering infrastructure with Turbo-orchestrated parallel builds; version management across all versioned projects with standardized current/upcoming directory conventions; and CI/CD systems executing automated testing, building, and deployment across all documentation properties simultaneously using 23 GitHub Actions workflows for testing (mongosh-examples-test-in-docker.yml, csharp-driver-examples-test-in-docker.yml, etc.), formatting validation (csharp-driver-examples-check-formatting.yml, etc.), OpenAPI validation (openapi-tests.yml), build automation (trigger-branch-build.yml, trigger-preprd-build.yml, trigger-prod-build.yml), and quality checks (toc-test.yml). The monorepo's integration patterns emphasize symlink-based code example sharing enabling zero-duplication snippet reuse, Bluehawk markup extraction producing documentation-ready code from tested examples, automated synchronization distributing content to public repositories via repo-sync.yml workflows, standardized version management with current/upcoming directory conventions separating stable from future releases, universal comparison testing specification (comparison-spec.md) ensuring consistent validation across all programming languages, GitHub Copilot templates standardizing documentation formatting and style adherence, and comprehensive testing validating code correctness in isolated Docker containers, formatting consistency for compiled languages, OpenAPI specification accuracy, and build success across the entire documentation ecosystem. This architecture ensures MongoDB documentation remains accurate, consistent, and maintainable across all 58 properties (including Atlas variants like Atlas Government, Atlas Operator, and Atlas Architecture; framework integrations like Django MongoDB, Laravel MongoDB, Entity Framework, and Hibernate; developer tools like MCP Server, VS Code extension, IntelliJ plugin, and MongoDB Analyzer; cloud services like Atlas CLI, Kubernetes Operator, and App Services; data tools like Kafka Connector, BI Connector, Relational Migrator, Spark Connector, and MongoSync; and PyMongo Arrow driver for data analytics) while enabling independent release cycles and version-specific content for each documentation property.

Next.js Framework Documentation

Introduction

Next.js is a powerful React framework for building full-stack web applications developed by Vercel. This documentation covers Next.js version 16.1.1-canary.7 (canary release), which extends React's capabilities with features like server-side rendering (SSR), static site generation (SSG), and hybrid approaches, all optimized through Rust-based JavaScript tooling for high-performance builds. Next.js enables developers to create production-ready applications with automatic code splitting, built-in routing, API routes, and seamless integration between frontend and backend code. The framework supports both the modern App Router (introduced in Next.js 13 and continuously refined through version 16) and the traditional Pages Router, offering flexibility for different project needs. Version 16 introduces stable support for the Form component, enhanced caching with cacheLife() and cacheTag() APIs, new revalidation methods including updateTag() and refresh(), and improved developer experience with better TypeScript integration and performance optimizations.

Next.js addresses common challenges in modern web development by providing solutions for routing, data fetching, image optimization, internationalization, and SEO out of the box. It supports React Server Components for efficient server-side rendering, Client Components for interactive UI, and Server Actions for server-side mutations without needing separate API endpoints. The framework's architecture is designed to enable optimal performance with automatic optimizations like lazy loading, prefetching, and intelligent caching strategies while maintaining developer productivity through conventions and best practices. Version 16 brings significant improvements including the stable Form component for progressive enhancement, cacheLife() for declarative cache control with predefined profiles (seconds, minutes, hours, days, weeks, max), cacheTag() for granular cache invalidation, enhanced metadata API for comprehensive SEO control, and improved support for React 19 features. Whether building marketing sites, e-commerce platforms, dashboards, or content-heavy applications, Next.js 16 provides the APIs and patterns to build performant, scalable applications.

Core APIs and Functions

App Router - Basic Page Structure

The App Router uses a file-system based routing where folders define routes and special files (page.tsx, layout.tsx) define UI components.

// app/page.tsx
export default function Page() {
  return <h1>Hello, Next.js!</h1>;
}

// app/layout.tsx
export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>{children}</body>
    </html>
  );
}
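
The same folder convention extends to other special files; for example, a loading.tsx in any route segment is rendered automatically as a Suspense fallback while that segment's server data resolves (a minimal sketch):

// app/dashboard/loading.tsx
export default function Loading() {
  // Shown automatically (via a Suspense boundary) while the segment's
  // page.tsx is still rendering on the server
  return <p>Loading dashboard...</p>;
}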

Form Component - Progressive Enhancement

Next.js 16 introduces a built-in Form component that provides progressive enhancement for forms with Server Actions.

// app/users/new/page.tsx
import Form from "next/form";
import { redirect } from "next/navigation";
import prisma from "@/lib/prisma";

export default function NewUser() {
  async function createUser(formData: FormData) {
    "use server";

    const name = formData.get("name") as string;
    const email = formData.get("email") as string;

    await prisma.user.create({
      data: { name, email },
    });

    redirect("/");
  }

  return (
    <div>
      <h1>Create New User</h1>
      <Form action={createUser}>
        <label htmlFor="name">Name</label>
        <input type="text" id="name" name="name" required />

        <label htmlFor="email">Email</label>
        <input type="email" id="email" name="email" required />

        <button type="submit">Create User</button>
      </Form>
    </div>
  );
}

// With revalidation
// app/posts/new/page.tsx
import Form from "next/form";
import { revalidatePath } from "next/cache";
import { redirect } from "next/navigation";

export default function NewPost() {
  async function createPost(formData: FormData) {
    "use server";

    const title = formData.get("title") as string;
    const content = formData.get("content") as string;

    await db.posts.create({
      data: { title, content },
    });

    revalidatePath("/posts");
    redirect("/posts");
  }

  return (
    <Form action={createPost}>
      <input type="text" name="title" required />
      <textarea name="content" rows={6} />
      <button type="submit">Create Post</button>
    </Form>
  );
}
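
The Form component also accepts a string action; in that case submitting performs a client-side navigation that encodes the form fields as URL search params, which suits search forms (a short sketch):

// app/search-form.tsx
import Form from "next/form";

export function SearchForm() {
  // With a string action, submitting navigates to /search?query=<value>
  return (
    <Form action="/search">
      <input name="query" placeholder="Search posts" />
      <button type="submit">Search</button>
    </Form>
  );
}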

Cache Management with cacheLife() and cacheTag()

Next.js 16 introduces cacheLife() for declarative cache control and cacheTag() for granular cache invalidation.

// lib/data.ts
import { cacheLife, cacheTag } from "next/cache";

// Use predefined cache profiles
export async function getProducts() {
  "use cache";
  cacheLife("hours"); // Cache for 1 hour with 5 min stale time
  cacheTag("products");

  const response = await fetch("https://api.example.com/products");
  return response.json();
}

// Custom cache configuration
export async function getUserData(userId: string) {
  "use cache";
  cacheLife({
    stale: 60,      // 1 minute stale
    revalidate: 300, // 5 minutes revalidate
    expire: 3600,   // 1 hour expire
  });
  cacheTag("user", `user-${userId}`);

  const response = await fetch(`https://api.example.com/users/${userId}`);
  return response.json();
}

// Different cache profiles available:
// - "seconds": stale: 30s, revalidate: 1s, expire: 1m
// - "minutes": stale: 5m, revalidate: 1m, expire: 1h
// - "hours": stale: 5m, revalidate: 1h, expire: 1d
// - "days": stale: 5m, revalidate: 1d, expire: 1w
// - "weeks": stale: 5m, revalidate: 1w, expire: 30d
// - "max": stale: 5m, revalidate: 30d, expire: never
// - "default": stale: 5m, revalidate: 15m, expire: never

// Server Action using new cache APIs
// app/actions.ts
"use server";

import { revalidateTag, updateTag, refresh } from "next/cache";

export async function updateProduct(productId: string, data: any) {
  await db.products.update(productId, data);

  // Revalidate all caches with this tag
  revalidateTag("products");
  revalidateTag(`product-${productId}`);

  // Or use updateTag for more granular control
  updateTag(`product-${productId}`);

  return { success: true };
}

export async function refreshData() {
  // Refresh all data on the current page
  refresh();
}
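
A cached function like getProducts above can then be awaited from any Server Component; repeated calls within the cache window serve the cached result. A minimal sketch (the product fields are assumptions):

// app/products/page.tsx
import { getProducts } from "@/lib/data";

export default async function ProductsPage() {
  // Served from the "products"-tagged cache until it is revalidated
  const products = await getProducts();
  return (
    <ul>
      {products.map((p: { id: string; name: string }) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}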

Request APIs - headers(), cookies(), and draftMode()

Access request information in Server Components and Route Handlers. As of Next.js 15, these request APIs are asynchronous and must be awaited.

// app/dashboard/page.tsx
import { cookies, headers } from 'next/headers';

export default async function DashboardPage() {
  // Access cookies (async request API in Next.js 15+)
  const cookieStore = await cookies();
  const token = cookieStore.get('auth-token');

  // Access headers
  const headersList = await headers();
  const userAgent = headersList.get('user-agent');

  return (
    <div>
      <h1>Dashboard</h1>
      <p>User Agent: {userAgent}</p>
      <p>Token: {token?.value}</p>
    </div>
  );
}

// Route Handler with request APIs
// app/api/user/route.ts
import { cookies, headers } from 'next/headers';
import { NextResponse } from 'next/server';

export async function GET() {
  const cookieStore = await cookies();
  const session = cookieStore.get('session');

  const headersList = await headers();
  const authorization = headersList.get('authorization');

  return NextResponse.json({
    session: session?.value,
    auth: authorization
  });
}

// Draft Mode for CMS preview
// app/api/draft/route.ts
import { draftMode } from 'next/headers';
import { redirect } from 'next/navigation';

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const slug = searchParams.get('slug');

  // Enable draft mode
  (await draftMode()).enable();

  // Redirect to the preview page
  redirect(`/posts/${slug}`);
}

// Check draft mode in page
// app/posts/[slug]/page.tsx
import { draftMode } from 'next/headers';

export default async function PostPage({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params;
  const { isEnabled } = await draftMode();

  const post = await getPost(slug, isEnabled);

  return (
    <article>
      {isEnabled && <p>Draft mode is enabled</p>}
      <h1>{post.title}</h1>
      <div>{post.content}</div>
    </article>
  );
}

Dynamic Routes with generateStaticParams

Create dynamic routes and pre-render pages at build time using generateStaticParams for static site generation.

// app/posts/[slug]/page.tsx
import { Metadata } from "next";
import { notFound } from "next/navigation";
import { getAllPosts, getPostBySlug } from "@/lib/api";

export default async function Post({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params;
  const post = getPostBySlug(slug);

  if (!post) {
    return notFound();
  }

  const content = await markdownToHtml(post.content || "");

  return (
    <main>
      <article className="mb-32">
        <PostHeader
          title={post.title}
          coverImage={post.coverImage}
          date={post.date}
          author={post.author}
        />
        <PostBody content={content} />
      </article>
    </main>
  );
}

export async function generateMetadata({ params }: {
  params: Promise<{ slug: string }>;
}): Promise<Metadata> {
  const post = getPostBySlug((await params).slug);

  if (!post) {
    return notFound();
  }

  const title = `${post.title} | Next.js Blog Example`;

  return {
    title,
    openGraph: {
      title,
      images: [post.ogImage.url],
    },
  };
}

export async function generateStaticParams() {
  const posts = getAllPosts();

  return posts.map((post) => ({
    slug: post.slug,
  }));
}
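
By default, slugs not returned by generateStaticParams are still rendered on demand; exporting the dynamicParams segment config as false makes unknown slugs return a 404 instead:

// app/posts/[slug]/page.tsx
// Only the params returned by generateStaticParams are valid;
// any other slug responds with 404 instead of rendering on demand.
export const dynamicParams = false;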

Cache Management and Revalidation

Next.js 16 provides enhanced APIs for controlling cache behavior and revalidating data at runtime.

// lib/data.ts
import { unstable_cache } from 'next/cache';

// Cache function results with tags
export const getProducts = unstable_cache(
  async () => {
    const response = await fetch('https://api.example.com/products');
    return response.json();
  },
  ['products'], // cache key
  {
    tags: ['products-list'],
    revalidate: 3600, // revalidate every hour
  }
);

// Opt out of caching for dynamic data
import { unstable_noStore } from 'next/cache';

export async function getUserData(userId: string) {
  unstable_noStore(); // This data should never be cached
  const response = await fetch(`https://api.example.com/users/${userId}`);
  return response.json();
}

// Server Action to revalidate cache
// app/actions.ts
'use server';

import { revalidatePath, revalidateTag, updateTag, refresh } from 'next/cache';

export async function updateProduct(formData: FormData) {
  const productId = formData.get('productId') as string;

  // Update product in database
  await db.products.update(productId, {
    name: formData.get('name') as string,
    price: parseFloat(formData.get('price') as string),
  });

  // Revalidate specific path
  revalidatePath('/products');
  revalidatePath(`/products/${productId}`);

  // Or revalidate by tag
  revalidateTag('products-list');

  // Or use updateTag for more granular control
  updateTag('products-list');

  // Or refresh the current page
  refresh();

  return { success: true };
}

Internationalization with Dynamic Routes

Implement i18n by using dynamic route segments and generateStaticParams to create localized pages.

// app/[lang]/layout.tsx
import { i18n, type Locale } from "@/i18n-config";

export const metadata = {
  title: "i18n within app router - Vercel Examples",
  description: "How to do i18n in Next.js 16 within app router",
};

export async function generateStaticParams() {
  return i18n.locales.map((locale) => ({ lang: locale }));
}

export default async function Root({
  children,
  params,
}: {
  children: React.ReactNode;
  params: Promise<{ lang: Locale }>;
}) {
  const { lang } = await params;
  return (
    <html lang={lang}>
      <body>{children}</body>
    </html>
  );
}

// app/[lang]/page.tsx
import { getDictionary } from './dictionaries';

export default async function Page({ params }: { params: Promise<{ lang: Locale }> }) {
  const { lang } = await params;
  const dict = await getDictionary(lang);

  return (
    <div>
      <h1>{dict.home.title}</h1>
      <p>{dict.home.description}</p>
    </div>
  );
}

// i18n-config.ts
export const i18n = {
  defaultLocale: 'en',
  locales: ['en', 'de', 'fr', 'es'],
} as const;

export type Locale = (typeof i18n)['locales'][number];

// dictionaries.ts
const dictionaries = {
  en: () => import('./dictionaries/en.json').then((module) => module.default),
  de: () => import('./dictionaries/de.json').then((module) => module.default),
  fr: () => import('./dictionaries/fr.json').then((module) => module.default),
  es: () => import('./dictionaries/es.json').then((module) => module.default),
};

export const getDictionary = async (locale: Locale) => dictionaries[locale]();

API Routes - App Router Style

Create API endpoints using Route Handlers with route.ts/route.js files supporting HTTP methods.

// app/api/set-token/route.ts
import { NextResponse } from 'next/server';

export async function POST() {
  const res = NextResponse.json({ message: 'successful' });
  res.cookies.set('token', 'this is a token');
  return res;
}

// app/api/revalidate/route.ts
import { NextRequest, NextResponse } from "next/server";
import { revalidateTag } from "next/cache";
import { headers } from "next/headers";

export async function POST(request: NextRequest) {
  const headersList = await headers();
  const secret = headersList.get("x-vercel-reval-key");

  if (secret !== process.env.CONTENTFUL_REVALIDATE_SECRET) {
    return NextResponse.json({ message: "Invalid secret" }, { status: 401 });
  }

  revalidateTag("posts");

  return NextResponse.json({ revalidated: true, now: Date.now() });
}

// Dynamic route handler
// app/api/posts/[id]/route.ts
import { NextRequest, NextResponse } from 'next/server';

export async function GET(
  request: NextRequest,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params;
  const post = await getPost(id);

  if (!post) {
    return NextResponse.json({ error: 'Post not found' }, { status: 404 });
  }

  return NextResponse.json(post);
}

export async function PUT(
  request: NextRequest,
  { params }: { params: Promise<{ id: string }> }
) {
  const body = await request.json();
  const post = await updatePost((await params).id, body);

  return NextResponse.json(post);
}

export async function DELETE(
  request: NextRequest,
  { params }: { params: Promise<{ id: string }> }
) {
  await deletePost((await params).id);

  return NextResponse.json({ success: true });
}

API Routes - Pages Router Style

Traditional API routes in the pages/api directory supporting various HTTP methods.

// pages/api/users.ts
import type { NextApiRequest, NextApiResponse } from "next";
import type { User } from "../../interfaces";

const users: User[] = [{ id: 1 }, { id: 2 }, { id: 3 }];

export default function handler(
  _req: NextApiRequest,
  res: NextApiResponse<User[]>,
) {
  res.status(200).json(users);
}

// pages/api/user/[id].ts
import type { NextApiRequest, NextApiResponse } from "next";
import type { User } from "../../../interfaces";

export default function userHandler(
  req: NextApiRequest,
  res: NextApiResponse<User>,
) {
  const { query, method } = req;
  const id = parseInt(query.id as string, 10);
  const name = query.name as string;

  switch (method) {
    case "GET":
      res.status(200).json({ id, name: `User ${id}` });
      break;
    case "PUT":
      res.status(200).json({ id, name: name || `User ${id}` });
      break;
    default:
      res.setHeader("Allow", ["GET", "PUT"]);
      res.status(405).end(`Method ${method} Not Allowed`);
  }
}

Server Actions and Form Handling

Server Actions allow you to define server-side functions that can be called from Client Components, perfect for form submissions and data mutations.

// app/actions.ts
"use server";

import { revalidatePath } from "next/cache";
import postgres from "postgres";
import { z } from "zod";

let sql = postgres(process.env.DATABASE_URL || process.env.POSTGRES_URL!, {
  ssl: "allow",
});

export async function createTodo(
  prevState: {
    message: string;
  },
  formData: FormData,
) {
  const schema = z.object({
    todo: z.string().min(1),
  });
  const parse = schema.safeParse({
    todo: formData.get("todo"),
  });

  if (!parse.success) {
    return { message: "Failed to create todo" };
  }

  const data = parse.data;

  try {
    await sql`
      INSERT INTO todos (text)
      VALUES (${data.todo})
    `;

    revalidatePath("/");
    return { message: `Added todo ${data.todo}` };
  } catch (e) {
    return { message: "Failed to create todo" };
  }
}

export async function deleteTodo(
  prevState: {
    message: string;
  },
  formData: FormData,
) {
  const schema = z.object({
    id: z.string().min(1),
    todo: z.string().min(1),
  });
  const data = schema.parse({
    id: formData.get("id"),
    todo: formData.get("todo"),
  });

  try {
    await sql`
      DELETE FROM todos
      WHERE id = ${data.id};
    `;

    revalidatePath("/");
    return { message: `Deleted todo ${data.todo}` };
  } catch (e) {
    return { message: "Failed to delete todo" };
  }
}

// Server Action with file upload
// app/upload/actions.ts
"use server";

import { writeFile } from 'fs/promises';
import { join } from 'path';

export async function uploadFile(formData: FormData) {
  const file = formData.get('file') as File;

  if (!file) {
    return { success: false, error: 'No file provided' };
  }

  const bytes = await file.arrayBuffer();
  const buffer = Buffer.from(bytes);

  // Save to public directory
  const path = join(process.cwd(), 'public', 'uploads', file.name);
  await writeFile(path, buffer);

  return { success: true, name: file.name };
}

// app/add-form.tsx
"use client";

import { useFormState, useFormStatus } from "react-dom";
import { createTodo } from "@/app/actions";

const initialState = {
  message: "",
};

function SubmitButton() {
  const { pending } = useFormStatus();

  return (
    <button type="submit" disabled={pending}>
      {pending ? 'Adding...' : 'Add'}
    </button>
  );
}

export function AddForm() {
  const [state, formAction] = useFormState(createTodo, initialState);

  return (
    <form action={formAction}>
      <label htmlFor="todo">Enter Task</label>
      <input type="text" id="todo" name="todo" required />
      <SubmitButton />
      <p aria-live="polite" role="status">
        {state?.message}
      </p>
    </form>
  );
}

// app/page.tsx
import postgres from "postgres";
import { AddForm } from "@/app/add-form";
import { DeleteForm } from "@/app/delete-form";

let sql = postgres(process.env.DATABASE_URL || process.env.POSTGRES_URL!, {
  ssl: "allow",
});

export default async function Home() {
  let todos = await sql`SELECT * FROM todos`;

  return (
    <main>
      <h1>Todos</h1>
      <AddForm />
      <ul>
        {todos.map((todo) => (
          <li key={todo.id}>
            {todo.text}
            <DeleteForm id={todo.id} todo={todo.text} />
          </li>
        ))}
      </ul>
    </main>
  );
}
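
The DeleteForm imported above is not shown in the excerpt; a minimal sketch mirroring AddForm, wired to the deleteTodo action (which expects id and todo fields):

// app/delete-form.tsx
"use client";

import { useFormState, useFormStatus } from "react-dom";
import { deleteTodo } from "@/app/actions";

const initialState = {
  message: "",
};

function DeleteButton() {
  const { pending } = useFormStatus();

  return (
    <button type="submit" disabled={pending}>
      {pending ? "Deleting..." : "Delete"}
    </button>
  );
}

export function DeleteForm({ id, todo }: { id: number; todo: string }) {
  const [state, formAction] = useFormState(deleteTodo, initialState);

  return (
    <form action={formAction}>
      {/* deleteTodo's schema reads these hidden fields */}
      <input type="hidden" name="id" value={id} />
      <input type="hidden" name="todo" value={todo} />
      <DeleteButton />
      <p aria-live="polite" role="status">
        {state?.message}
      </p>
    </form>
  );
}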

Navigation Hooks - useRouter, usePathname, useSearchParams

Client-side navigation hooks for reading and manipulating the current URL in Client Components.

// app/components/locale-switcher.tsx
"use client";

import { usePathname, useRouter, useSearchParams } from "next/navigation";
import Link from "next/link";
import { i18n, type Locale } from "@/i18n-config";

export default function LocaleSwitcher() {
  const pathname = usePathname();
  const router = useRouter();
  const searchParams = useSearchParams();

  const redirectedPathname = (locale: Locale) => {
    if (!pathname) return "/";
    const segments = pathname.split("/");
    segments[1] = locale;
    return segments.join("/");
  };

  return (
    <div>
      <p>Locale switcher:</p>
      <ul>
        {i18n.locales.map((locale) => {
          return (
            <li key={locale}>
              <Link href={redirectedPathname(locale)}>{locale}</Link>
            </li>
          );
        })}
      </ul>
    </div>
  );
}

// Search component with useSearchParams
// app/search/search-bar.tsx
"use client";

import { useSearchParams, useRouter, usePathname } from 'next/navigation';

export function SearchBar() {
  const searchParams = useSearchParams();
  const pathname = usePathname();
  const router = useRouter();

  function handleSearch(term: string) {
    const params = new URLSearchParams(searchParams);

    if (term) {
      params.set('query', term);
    } else {
      params.delete('query');
    }

    router.replace(`${pathname}?${params.toString()}`);
  }

  return (
    <input
      type="text"
      placeholder="Search..."
      onChange={(e) => handleSearch(e.target.value)}
      defaultValue={searchParams.get('query') || ''}
    />
  );
}

// Page that reads search params
// app/search/page.tsx
export default async function SearchPage({
  searchParams,
}: {
  searchParams: Promise<{ query?: string }>;
}) {
  const { query } = await searchParams;
  const results = query ? await searchProducts(query) : [];

  return (
    <div>
      <h1>Search Results</h1>
      {query && <p>Showing results for: {query}</p>}
      <SearchBar />
      <ul>
        {results.map((result) => (
          <li key={result.id}>{result.name}</li>
        ))}
      </ul>
    </div>
  );
}

Pages Router - getStaticProps and getStaticPaths

Pages Router data fetching methods for static site generation with dynamic routes.

// pages/gsp/[slug].tsx
import type {
  GetStaticProps,
  GetStaticPaths,
  InferGetStaticPropsType,
} from "next";
import Link from "next/link";
import { useRouter } from "next/router";
import LocaleSwitcher from "../../components/locale-switcher";

type GspPageProps = InferGetStaticPropsType<typeof getStaticProps>;

export default function GspPage(props: GspPageProps) {
  const router = useRouter();
  const { defaultLocale, isFallback, query } = router;

  if (isFallback) {
    return "Loading...";
  }

  return (
    <div>
      <h1>getStaticProps page</h1>
      <p>Current slug: {query.slug}</p>
      <p>Current locale: {props.locale}</p>
      <p>Default locale: {defaultLocale}</p>
      <p>Configured locales: {JSON.stringify(props.locales)}</p>

      <LocaleSwitcher />

      <Link href="/gsp">To getStaticProps page</Link>
      <br />

      <Link href="/gssp">To getServerSideProps page</Link>
      <br />

      <Link href="/">To index page</Link>
      <br />
    </div>
  );
}

type Props = {
  locale?: string;
  locales?: string[];
};

export const getStaticProps: GetStaticProps<Props> = async ({
  locale,
  locales,
}) => {
  return {
    props: {
      locale,
      locales,
    },
  };
};

export const getStaticPaths: GetStaticPaths = ({ locales = [] }) => {
  const paths = [];

  for (const locale of locales) {
    paths.push({ params: { slug: "first" }, locale });
    paths.push({ params: { slug: "second" }, locale });
  }

  return {
    paths,
    fallback: true,
  };
};

// pages/posts/[id].tsx - getServerSideProps
import type { GetServerSideProps, InferGetServerSidePropsType } from 'next';

type Post = {
  id: string;
  title: string;
  content: string;
};

export const getServerSideProps: GetServerSideProps<{ post: Post }> = async (context) => {
  const { id } = context.params!;
  const post = await getPost(id as string);

  if (!post) {
    return {
      notFound: true,
    };
  }

  return {
    props: {
      post,
    },
  };
};

export default function PostPage({ post }: InferGetServerSidePropsType<typeof getServerSideProps>) {
  return (
    <article>
      <h1>{post.title}</h1>
      <div>{post.content}</div>
    </article>
  );
}

Image Optimization

Next.js Image component provides automatic image optimization with lazy loading, responsive images, and modern formats.

// app/page.tsx
import Image from "next/image";
import Link from "next/link";
import vercel from "../public/vercel.png";

const Index = () => (
  <div>
    <h2 id="internal">Internal Image</h2>
    <p>The following is an example of a reference to an internal image from the public directory.</p>

    <Image
      alt="Vercel logo"
      src={vercel}
      width={1000}
      height={1000}
      style={{
        maxWidth: "100%",
        height: "auto",
      }}
    />

    <h2 id="external">External Image</h2>
    <p>External images must be configured in next.config.js using the remotePatterns property.</p>

    <Image
      alt="Next.js logo"
      src="https://assets.vercel.com/image/upload/v1538361091/repositories/next-js/next-js-bg.png"
      width={1200}
      height={400}
      style={{
        maxWidth: "100%",
        height: "auto",
      }}
    />

    <h2 id="responsive">Responsive Image</h2>
    <Image
      alt="Responsive image"
      src="/hero.jpg"
      fill
      sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 33vw"
      style={{
        objectFit: "cover",
      }}
    />
  </div>
);

export default Index;

// next.config.js
/** @type {import('next').NextConfig} */
module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: "https",
        hostname: "assets.vercel.com",
        port: "",
        pathname: "/image/upload/**",
      },
    ],
    // Optional: Configure image sizes
    deviceSizes: [640, 750, 828, 1080, 1200, 1920, 2048, 3840],
    imageSizes: [16, 32, 48, 64, 96, 128, 256, 384],
    // Optional: Configure formats
    formats: ['image/webp'],
  },
};

Middleware

Middleware runs before a request is completed, allowing you to modify the response, rewrite, redirect, or add headers.

// middleware.ts
import { NextRequest, NextResponse } from 'next/server'

export const config = {
  matcher: [
    /*
     * Match all request paths except for the ones starting with:
     * - api (API routes)
     * - _next/static (static files)
     * - _next/image (image optimization files)
     * - favicon.ico (favicon file)
     */
    '/((?!api|_next/static|_next/image|favicon.ico).*)',
  ],
}

export function middleware(request: NextRequest) {
  const url = request.nextUrl

  // Authentication check
  const token = request.cookies.get('token')
  if (!token && url.pathname.startsWith('/dashboard')) {
    return NextResponse.redirect(new URL('/login', request.url))
  }

  // Add custom header
  const response = NextResponse.next()
  response.headers.set('x-custom-header', 'my-value')

  return response
}

// Middleware with rewrite
// middleware.ts
import { NextRequest, NextResponse } from 'next/server'

export function middleware(request: NextRequest) {
  const url = request.nextUrl

  // Rewrite root to a different page
  if (url.pathname === '/') {
    url.pathname = '/home'
    return NextResponse.rewrite(url)
  }

  // Redirect example
  if (url.pathname === '/old-path') {
    return NextResponse.redirect(new URL('/new-path', request.url))
  }

  // A/B testing
  const bucket = request.cookies.get('bucket')
  if (url.pathname === '/experiment' && bucket?.value === 'b') {
    url.pathname = '/experiment-b'
    return NextResponse.rewrite(url)
  }

  return NextResponse.next()
}

// Internationalization middleware
// middleware.ts
import { NextRequest, NextResponse } from 'next/server'

const locales = ['en', 'fr', 'de']
const defaultLocale = 'en'

export function middleware(request: NextRequest) {
  const { pathname } = request.nextUrl
  const pathnameHasLocale = locales.some(
    (locale) => pathname.startsWith(`/${locale}/`) || pathname === `/${locale}`
  )

  if (pathnameHasLocale) return

  // Redirect if there is no locale
  const locale = getLocale(request)
  request.nextUrl.pathname = `/${locale}${pathname}`
  return NextResponse.redirect(request.nextUrl)
}

function getLocale(request: NextRequest): string {
  // Get locale from cookie or accept-language header
  return request.cookies.get('locale')?.value || defaultLocale
}

Data Fetching with Caching

Fetch API with built-in caching options for optimized data fetching in Server Components.

// app/cases/fetch_cached/page.tsx
export default async function Page() {
  return (
    <>
      <p>This page renders two components each performing cached fetches.</p>
      <ComponentOne />
      <ComponentTwo />
    </>
  )
}

async function ComponentOne() {
  return <div>message 1: {await fetchRandomCached('a')}</div>
}

async function ComponentTwo() {
  return (
    <>
      <div>message 2: {await fetchRandomCached('b')}</div>
      <div>message 3: {await fetchRandomCached('c')}</div>
    </>
  )
}

const fetchRandomCached = async (entropy: string) => {
  const response = await fetch(
    'https://next-data-api-endpoint.vercel.app/api/random?b=' + entropy,
    { cache: 'force-cache' }
  )
  return response.text()
}
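
// For data sources that are not fetch()-based, React's cache() helper gives
// similar per-request memoization. Illustrative sketch only; `db` and its
// query API are hypothetical placeholders.
// app/lib/get-user.ts
import { cache } from 'react'
import { db } from './db'

export const getUser = cache(async (id: string) => {
  return db.user.findUnique({ where: { id } })
})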

// Different caching strategies
// app/lib/data.ts

// Force cache - explicitly opt in to caching (fetch responses are uncached by default in Next.js 15+)
export async function getCachedData() {
  const res = await fetch('https://api.example.com/data', {
    cache: 'force-cache',
  })
  return res.json()
}

// No store - always fetch fresh data
export async function getDynamicData() {
  const res = await fetch('https://api.example.com/data', {
    cache: 'no-store',
  })
  return res.json()
}

// Revalidate - cache for a specific time
export async function getRevalidatedData() {
  const res = await fetch('https://api.example.com/data', {
    next: { revalidate: 3600 }, // revalidate every hour
  })
  return res.json()
}

// Tag-based revalidation
export async function getTaggedData() {
  const res = await fetch('https://api.example.com/data', {
    next: { tags: ['products'] },
  })
  return res.json()
}
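
// Tags only pay off when something invalidates them. A minimal sketch of the
// companion Server Action (revalidateTag is the real next/cache API; the
// action name and saveProduct helper are hypothetical):
// app/actions.ts
'use server'

import { revalidateTag } from 'next/cache'

export async function updateProduct(formData: FormData) {
  await saveProduct(formData) // hypothetical persistence helper
  revalidateTag('products') // re-fetches anything tagged ['products']
}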

Metadata Configuration

Define page metadata for SEO using the Metadata API in App Router.

// app/layout.tsx
import type { Metadata } from "next";
import { Inter } from "next/font/google";
import "./globals.css";

const inter = Inter({ subsets: ["latin"] });

export const metadata: Metadata = {
  title: `Next.js Blog Example`,
  description: `A statically generated blog example using Next.js.`,
  openGraph: {
    images: ['/og-image.jpg'],
  },
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <head>
        <link rel="icon" type="image/png" sizes="32x32" href="/favicon/favicon-32x32.png" />
        <link rel="manifest" href="/favicon/site.webmanifest" />
        <meta name="theme-color" content="#000" />
      </head>
      <body className={inter.className}>
        <div className="min-h-screen">{children}</div>
      </body>
    </html>
  );
}

// Dynamic metadata
// app/posts/[slug]/page.tsx
import { Metadata } from 'next'

export async function generateMetadata({ params }: { params: Promise<{ slug: string }> }): Promise<Metadata> {
  const { slug } = await params
  const post = await getPost(slug)

  return {
    title: post.title,
    description: post.excerpt,
    openGraph: {
      title: post.title,
      description: post.excerpt,
      images: [
        {
          url: post.coverImage,
          width: 1200,
          height: 630,
        },
      ],
      type: 'article',
      publishedTime: post.date,
      authors: [post.author.name],
    },
    twitter: {
      card: 'summary_large_image',
      title: post.title,
      description: post.excerpt,
      images: [post.coverImage],
    },
  }
}

// File-based metadata
// app/opengraph-image.tsx
import { ImageResponse } from 'next/og'

export const runtime = 'edge'

export const size = {
  width: 1200,
  height: 630,
}

export const contentType = 'image/png'

export default async function Image() {
  return new ImageResponse(
    (
      <div
        style={{
          fontSize: 128,
          background: 'white',
          width: '100%',
          height: '100%',
          display: 'flex',
          alignItems: 'center',
          justifyContent: 'center',
        }}
      >
        Hello World!
      </div>
    ),
    {
      ...size,
    }
  )
}

Dynamic Import and Code Splitting

Dynamically import components to optimize bundle size and loading performance.

// app/page.tsx
"use client";

import { useState } from "react";
import dynamic from "next/dynamic";

const DynamicComponent1 = dynamic(() => import("./_components/hello1"));

const DynamicComponent2WithCustomLoading = dynamic(
  () => import("./_components/hello2"),
  { loading: () => <p>Loading caused by client page transition ...</p> },
);

const DynamicComponent3WithNoSSR = dynamic(
  () => import("./_components/hello3"),
  { loading: () => <p>Loading ...</p>, ssr: false },
);

// Loaded on demand when "Show More" is toggled below
const DynamicComponent4 = dynamic(() => import("./_components/hello4"));

const names = ["Tim", "Joe", "Bel", "Max", "Lee"];

export default function IndexPage() {
  const [showMore, setShowMore] = useState(false);
  const [results, setResults] = useState<any>();

  return (
    <div>
      {/* Load immediately, but in a separate bundle */}
      <DynamicComponent1 />

      {/* Show a progress indicator while loading */}
      <DynamicComponent2WithCustomLoading />

      {/* Load only on the client side */}
      <DynamicComponent3WithNoSSR />

      {/* Load on demand */}
      {showMore && <DynamicComponent4 />}
      <button onClick={() => setShowMore(!showMore)}>Toggle Show More</button>

      {/* Load library on demand */}
      <div style={{ marginTop: "1rem" }}>
        <input
          type="text"
          placeholder="Search"
          onChange={async (e) => {
            const { value } = e.currentTarget;
            // Dynamically load fuse.js
            const Fuse = (await import("fuse.js")).default;
            const fuse = new Fuse(names);
            setResults(fuse.search(value));
          }}
        />
        <pre>Results: {JSON.stringify(results, null, 2)}</pre>
      </div>
    </div>
  );
}

// Server Component with dynamic imports
// app/dashboard/page.tsx
// Note: `ssr: false` is only allowed inside Client Components, so it is omitted here.
import dynamic from 'next/dynamic'

const Chart = dynamic(() => import('@/components/chart'), {
  loading: () => <p>Loading chart...</p>,
})

export default function Dashboard() {
  return (
    <div>
      <h1>Dashboard</h1>
      <Chart />
    </div>
  )
}

Next.js Configuration

Configure Next.js behavior through next.config.js, including remote image patterns, redirects, rewrites, headers, and experimental features.

// next.config.ts - Basic configuration
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  /* config options here */
};

export default nextConfig;

// next.config.js - Advanced configuration with experimental features
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Basic options
  reactStrictMode: true,
  poweredByHeader: false,
  compress: true,
  trailingSlash: true,

  // Image optimization
  images: {
    remotePatterns: [
      {
        protocol: "https",
        hostname: "example.com",
        port: "",
        pathname: "/images/**",
      },
    ],
    deviceSizes: [640, 750, 828, 1080, 1200, 1920, 2048, 3840],
    imageSizes: [16, 32, 48, 64, 96, 128, 256, 384],
    formats: ['image/webp'],
  },

  // Redirects
  async redirects() {
    return [
      {
        source: '/old-path',
        destination: '/new-path',
        permanent: true,
      },
    ]
  },

  // Rewrites
  async rewrites() {
    return [
      {
        source: '/api/:path*',
        destination: 'https://api.example.com/:path*',
      },
    ]
  },

  // Headers
  async headers() {
    return [
      {
        source: '/(.*)',
        headers: [
          {
            key: 'X-Content-Type-Options',
            value: 'nosniff',
          },
          {
            key: 'X-Frame-Options',
            value: 'DENY',
          },
        ],
      },
    ]
  },

  // Environment variables
  env: {
    CUSTOM_KEY: 'my-value',
  },

  // Experimental features
  experimental: {
    // Partial Prerendering
    ppr: false,

    // Server Actions
    serverActions: {
      bodySizeLimit: '2mb',
      allowedOrigins: ['example.com'],
    },

    // Typed routes
    typedRoutes: false,

    // Optimize package imports
    optimizePackageImports: ['lodash', 'date-fns'],

    // MDX Rust compiler
    mdxRs: false,

    // Server minification and source maps
    serverMinification: true,
    serverSourceMaps: false,

    // Instrumentation
    instrumentationHook: false,

    // External packages for Server Components
    serverComponentsExternalPackages: ['@prisma/client'],
  },

  // Webpack customization
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.resolve.fallback = {
        ...config.resolve.fallback,
        fs: false,
      }
    }
    return config
  },
}

module.exports = nextConfig;

Loading and Error UI

Special files for handling loading states and errors in App Router with automatic integration.

// app/loading.tsx
import React from 'react'

export default function Loading() {
  return (
    <div className="loading">
      <div className="spinner"></div>
      <p>Loading...</p>
    </div>
  )
}

// app/error.tsx
'use client'

import { useEffect } from 'react'

export default function Error({
  error,
  reset,
}: {
  error: Error & { digest?: string }
  reset: () => void
}) {
  useEffect(() => {
    // Log error to error reporting service
    console.error(error)
  }, [error])

  return (
    <div>
      <h2>Something went wrong!</h2>
      <button onClick={() => reset()}>Try again</button>
    </div>
  )
}

// app/not-found.tsx
import Link from 'next/link'

export default function NotFound() {
  return (
    <div>
      <h2>Not Found</h2>
      <p>Could not find requested resource</p>
      <Link href="/">Return Home</Link>
    </div>
  )
}

// app/global-error.tsx
'use client'

export default function GlobalError({
  error,
  reset,
}: {
  error: Error & { digest?: string }
  reset: () => void
}) {
  return (
    <html>
      <body>
        <h2>Something went wrong!</h2>
        <button onClick={() => reset()}>Try again</button>
      </body>
    </html>
  )
}

// Trigger not-found in a page
// app/posts/[id]/page.tsx
import { notFound } from 'next/navigation'

export default async function Post({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params
  const post = await getPost(id)

  if (!post) {
    notFound()
  }

  return <article>{post.content}</article>
}

Link Component for Client-Side Navigation

Next.js Link component enables client-side navigation between routes with automatic prefetching.

// app/page.tsx
import Link from "next/link";

export default function Home() {
  return (
    <nav>
      <ul>
        <li>
          <Link href="/">Home</Link>
        </li>
        <li>
          <Link href="/about">About</Link>
        </li>
        <li>
          <Link href="/blog">Blog</Link>
        </li>
        <li>
          {/* Dynamic route */}
          <Link href="/posts/123">Post 123</Link>
        </li>
        <li>
          {/* External link */}
          <Link href="https://example.com" target="_blank" rel="noopener noreferrer">
            External Link
          </Link>
        </li>
        <li>
          {/* Link with query params */}
          <Link href={{ pathname: '/search', query: { q: 'next.js' } }}>
            Search Next.js
          </Link>
        </li>
        <li>
          {/* Disable prefetch */}
          <Link href="/heavy-page" prefetch={false}>
            Heavy Page (no prefetch)
          </Link>
        </li>
      </ul>
    </nav>
  );
}

// Pages Router example with dynamic routing
// pages/index.tsx
import type { User } from "../interfaces";
import useSwr from "swr";
import Link from "next/link";

const fetcher = (url: string) => fetch(url).then((res) => res.json());

export default function Index() {
  const { data, error, isLoading } = useSwr<User[]>("/api/users", fetcher);

  if (error) return <div>Failed to load users</div>;
  if (isLoading) return <div>Loading...</div>;
  if (!data) return null;

  return (
    <ul>
      {data.map((user) => (
        <li key={user.id}>
          <Link href={`/user/${user.id}`}>
            {user.name ?? `User ${user.id}`}
          </Link>
        </li>
      ))}
    </ul>
  );
}
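
A common companion pattern is highlighting the currently active link. Below is a minimal sketch using the usePathname hook (the component name and CSS class are illustrative, not from the original examples):

// app/components/nav-link.tsx
'use client'

import Link from 'next/link'
import { usePathname } from 'next/navigation'
import type { ReactNode } from 'react'

export function NavLink({ href, children }: { href: string; children: ReactNode }) {
  const pathname = usePathname()
  const isActive = pathname === href

  return (
    <Link href={href} className={isActive ? 'active' : ''}>
      {children}
    </Link>
  )
}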

Pages Router - Custom App Component

The _app.tsx file allows you to override the default App component to control page initialization and add global layouts.

// pages/_app.tsx
import type { AppProps } from "next/app";
import Head from "next/head";
import "../styles/globals.css";

export default function App({ Component, pageProps }: AppProps) {
  return (
    <>
      <Head>
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <link
          rel="preload"
          href="/fonts/Inter-roman.latin.var.woff2"
          as="font"
          type="font/woff2"
          crossOrigin="anonymous"
        />
      </Head>
      <Component {...pageProps} />
    </>
  );
}

// pages/_document.tsx
import { Html, Head, Main, NextScript } from 'next/document'

export default function Document() {
  return (
    <Html lang="en">
      <Head>
        <link rel="icon" href="/favicon.ico" />
        <meta name="theme-color" content="#000000" />
      </Head>
      <body>
        <Main />
        <NextScript />
      </body>
    </Html>
  )
}

Redirects and Navigation

Programmatic navigation and redirects in App Router.

// Server Component redirect
// app/profile/page.tsx
import { redirect } from 'next/navigation'
import { getSession } from '@/lib/auth'

export default async function ProfilePage() {
  const session = await getSession()

  if (!session) {
    redirect('/login')
  }

  return <div>Profile: {session.user.name}</div>
}

// Client Component navigation
// app/components/login-button.tsx
'use client'

import { useRouter } from 'next/navigation'

export function LoginButton() {
  const router = useRouter()

  return (
    <button onClick={() => router.push('/login')}>
      Login
    </button>
  )
}

// Navigation with search params
// app/components/pagination.tsx
'use client'

import { useRouter, useSearchParams } from 'next/navigation'

export function Pagination({ totalPages }: { totalPages: number }) {
  const router = useRouter()
  const searchParams = useSearchParams()
  const currentPage = Number(searchParams.get('page')) || 1

  function goToPage(page: number) {
    const params = new URLSearchParams(searchParams)
    params.set('page', page.toString())
    router.push(`?${params.toString()}`)
  }

  return (
    <div>
      <button
        disabled={currentPage === 1}
        onClick={() => goToPage(currentPage - 1)}
      >
        Previous
      </button>
      <span>Page {currentPage} of {totalPages}</span>
      <button
        disabled={currentPage === totalPages}
        onClick={() => goToPage(currentPage + 1)}
      >
        Next
      </button>
    </div>
  )
}

// Permanent redirect
// app/old-page/page.tsx
import { permanentRedirect } from 'next/navigation'

export default function OldPage() {
  permanentRedirect('/new-page')
}

Summary

Next.js 16 has evolved into a comprehensive framework that addresses the full spectrum of web development needs, from simple static sites to complex, data-driven applications. The framework's dual router support (App Router and Pages Router) provides flexibility for both greenfield projects and gradual migrations, while the App Router's innovative features like React Server Components, Server Actions, and streaming represent the cutting edge of React development patterns. Version 16.1.0 introduces significant enhancements: the stable Form component for progressive enhancement and improved user experience, the cacheLife() API with predefined profiles (seconds, minutes, hours, days, weeks, max) for declarative cache control, cacheTag() for granular tag-based cache invalidation, the new revalidation methods updateTag() and refresh() for more flexible cache management, and enhanced React 19 compatibility with improved Server Actions and form handling. The framework's request APIs (cookies(), headers(), draftMode()) are asynchronous and awaited directly in Server Components and Route Handlers, giving developers access to request information without additional complexity.

The framework excels at solving common web development challenges through conventions and built-in optimizations. Image optimization with the next/image component, automatic code splitting with dynamic imports, intelligent prefetching with the Link component, and flexible caching strategies through the fetch API and the new cacheLife() function are all handled out of the box. The Form component from next/form simplifies form handling with progressive enhancement, automatic loading states, and seamless Server Action integration. Server Actions eliminate the need for separate API endpoints in many cases, reducing boilerplate and simplifying full-stack development through direct server function calls from Client Components with the useActionState and useFormStatus hooks. The Middleware system provides powerful request-time capabilities for authentication, localization, A/B testing, and routing logic at the edge, while the rich set of configuration options allows fine-tuning for specific deployment scenarios. Next.js 16 adds stable support for advanced caching strategies with cacheLife() profiles and custom configurations, an improved developer experience with better error messages and TypeScript integration, enhanced performance through optimized bundling and tree-shaking, flexible revalidation with updateTag() for targeted cache updates and refresh() for page-level data refreshing, and comprehensive support for modern React patterns including React Server Components and concurrent features. Whether building marketing sites, e-commerce platforms with the Form component, dashboards with real-time data using refresh(), or content-heavy applications with CMS integration through draft mode and cacheTag(), Next.js 16 provides the APIs and patterns to build performant, scalable applications that deliver excellent user experiences while maintaining developer productivity and code quality.
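
As a rough illustration of the caching primitives named above, here is a sketch of the 'use cache' pattern with cacheLife() and cacheTag() (treat the exact import names and signatures as indicative rather than authoritative):

// app/lib/products.ts
import { cacheLife, cacheTag } from 'next/cache'

export async function getProducts() {
  'use cache'
  cacheLife('hours') // predefined profile: revalidate on an hourly cadence
  cacheTag('products') // later invalidated with updateTag('products') after a mutation
  const res = await fetch('https://api.example.com/products')
  return res.json()
}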

https://platform.openai.com/docs/overview

Install OpenAI CLI

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

Installs the OpenAI CLI tool within a Python virtual environment using pip.

pip install openai-cli

OpenAI CLI Interactive Mode

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

Shows how to start an interactive shell session with the OpenAI CLI for continuous prompt-response interactions. Press Ctrl+C to exit.

$ openai repl
Prompt: Can generative AI replace humans?

No, generative AI cannot replace humans.
While generative AI can be used to automate certain tasks,
it cannot replace the creativity, intuition, and problem-solving
skills that humans possess.
Generative AI can be used to supplement human efforts,
but it cannot replace them.

Prompt: ^C

Generate Fibonacci Python Module and Unit Tests

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

A multi-step example demonstrating how to use the OpenAI CLI to generate a Python Fibonacci function and then create unit tests for it. It involves piping output between commands and using 'black' for formatting and 'pytest' for testing.

$ mkdir examples
$ touch examples/__init__.py
$ echo "Write Python function to calculate Fibonacci numbers" | openai complete - | black - > examples/fib.py
$ (echo 'Write unit tests for this Python module named "fib":
'; cat examples/fib.py) | openai complete - | black - > examples/test_fib.py
$ pytest -v examples/test_fib.py
============================== test session starts ==============================

examples/test_fib.py::TestFibonacci::test_eighth_fibonacci_number PASSED                                 [ 10%]
examples/test_fib.py::TestFibonacci::test_fifth_fibonacci_number PASSED                                  [ 20%]
examples/test_fib.py::TestFibonacci::test_first_fibonacci_number PASSED                                  [ 30%]
examples/test_fib.py::TestFibonacci::test_fourth_fibonacci_number PASSED                                 [ 40%]
examples/test_fib.py::TestFibonacci::test_negative_input PASSED                                          [ 50%]
examples/test_fib.py::TestFibonacci::test_ninth_fibonacci_number PASSED                                  [ 60%]
examples/test_fib.py::TestFibonacci::test_second_fibonacci_number PASSED                                 [ 70%]
examples/test_fib.py::TestFibonacci::test_seventh_fibonacci_number PASSED                                [ 80%]
examples/test_fib.py::TestFibonacci::test_sixth_fibonacci_number PASSED                                  [ 90%]
examples/test_fib.py::TestFibonacci::test_third_fibonacci_number PASSED                                  [100%]

=============================== 10 passed in 0.02s ===============================

Build Standalone Binary

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

Builds a standalone binary for the OpenAI CLI using 'pex' and moves it to the system's PATH for global access.

$ make openai && mv openai ~/bin/
$ openai repl
Prompt:

OpenAI CLI Basic Usage

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

Demonstrates basic usage of the OpenAI CLI for text completion by piping a prompt to the 'complete' command.

$ echo "Are cats faster than dogs?" | openai complete -
It depends on the breed of the cat and dog. Generally,
cats are faster than dogs over short distances,
but dogs are better at sustained running.

OpenAI CLI Help Message

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

Displays the main help message for the OpenAI CLI when run without any arguments, showing available options and commands.

$ openai
Usage: openai [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  complete  Return OpenAI completion for a prompt from SOURCE.
  repl      Start interactive shell session for OpenAI completion API.

Complete and Format Python Code

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

This snippet demonstrates using the OpenAI CLI to complete Python code based on a prompt, format it with Black, and save the changes.

$ (echo "Add type annotations for this Python code"; cat examples/fib.py) | openai complete - | black - | tee tmp && mv tmp examples/fib.py

Jaraco-Packaging Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Dependencies for jaraco-packaging, a packaging utility.

build[virtualenv]==1.2.2.post1
domdf-python-tools==3.9.0
jaraco-packaging==10.2.3
pytest-checkdocs==2.13.0
sphinx==8.1.3

Sphinx Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Dependencies for Sphinx, a documentation generator.

alabaster==1.0.0
babel==2.17.0
docutils==0.21.2
imagesize==1.4.1
jinja2==3.1.5
markupsafe==3.0.2
pygments==2.19.1
snowballstemmer==2.2.0
sphinx==8.1.3
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0

Base Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Core dependencies for the OpenAI CLI project.

alabaster==1.0.0
astroid==3.3.8
babel==2.17.0
build[virtualenv]==1.2.2.post1
coverage[toml]==7.6.12
dill==0.3.9
distlib==0.3.9
domdf-python-tools==3.9.0
filelock==3.17.0
imagesize==1.4.1
iniconfig==2.0.0
isort==6.0.0
jaraco-context==6.0.1
jaraco-packaging==10.2.3
jinja2==3.1.5
markupsafe==3.0.2
mccabe==0.7.0
mypy==1.15.0
mypy-extensions==1.0.0
natsort==8.4.0
packaging==24.2
platformdirs==4.3.6
pluggy==1.5.0
pycodestyle==2.12.1
pyflakes==3.2.0
pygments==2.19.1
pyproject-hooks==1.2.0
pytest==8.3.4
snowballstemmer==2.2.0
sphinx==8.1.3
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
tomlkit==0.13.2
typing-extensions==4.12.2
virtualenv==20.29.2

Mypy Static Analysis Output

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

The output from running mypy on the initial Python Fibonacci code, indicating a missing return statement.

$ mypy examples/fib.py
examples/fib.py:1: error: Missing return statement  [return]
Found 1 error in 1 file (checked 1 source file)

Pytest Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Dependencies for Pytest, a testing framework.

coverage[toml]==7.6.12
iniconfig==2.0.0
packaging==24.2
pluggy==1.5.0
pytest==8.3.4

CI Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Dependencies specifically for Continuous Integration (CI) processes.

flake8==7.1.1
flake8-pyproject==1.2.3
mypy==1.15.0
pytest==8.3.4
pytest-checkdocs==2.13.0
pytest-cov==6.0.0
types-requests==2.32.0.20241016

Python Fibonacci Function

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

The Python code for the Fibonacci sequence function, including type hints.

def Fibonacci(n: int) -> int:
    if n < 0:
        print("Incorrect input")
    # First Fibonacci number is 0
    elif n == 1:
        return 0
    # Second Fibonacci number is 1
    elif n == 2:
        return 1
    else:
        return Fibonacci(n - 1) + Fibonacci(n - 2)

Pylint Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Dependencies required for the Pylint static analysis tool.

astroid==3.3.8
dill==0.3.9
isort==6.0.0
mccabe==0.7.0
platformdirs==4.3.6
pylint==3.3.4
tomlkit==0.13.2

Rewrite Tests with Pytest Parametrized

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

This snippet shows how to use the OpenAI CLI to rewrite Python tests to utilize pytest's parametrize decorator for more efficient testing.

$ (echo "Rewrite these tests to use pytest.parametrized"; cat examples/test_fib.py) | openai complete - | black - | tee tmp && mv tmp examples/test_fib.py

Mypy Static Analysis Output (Incompatible Return)

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

The output from running mypy after adding return None, which results in an incompatible return value type error.

$ mypy examples/fib.py
examples/fib.py:12: error: Incompatible return value type (got "None", expected "int")  [return-value]
Found 1 error in 1 file (checked 1 source file)

Pytest Parametrized Fibonacci Tests

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

The refactored Python test code using pytest's parametrize to test the Fibonacci function with multiple inputs and expected outputs.

import pytest
from .fib import Fibonacci


@pytest.mark.parametrize(
    "n, expected",
    [(1, 0), (2, 1), (3, 1), (4, 2), (5, 3), (6, 5), (7, 8), (8, 13), (9, 21), (10, 34)],
)
def test_fibonacci(n, expected):
    assert Fibonacci(n) == expected

Mypy Static Analysis Success

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

The output from running mypy after the final correction, indicating that no issues were found.

$ mypy examples/fib.py
Success: no issues found in 1 source file

Base Python Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/base.txt

Lists the core Python packages required for the OpenAI CLI project. These include libraries for handling SSL certificates, character encoding, internationalized domain names, making HTTP requests, and managing network connections.

certifi==2025.1.31
    # via requests
charset-normalizer==3.4.1
    # via requests
idna==3.10
    # via requests
requests==2.32.3
    # via -r requirements/base.in
urllib3==2.3.0
    # via requests

Fix Mypy Warnings (Return Statement)

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

This snippet shows how to use the OpenAI CLI to fix mypy warnings by adding a return statement to the Python code.

$ (echo "Fix mypy warnings in this Python code"; cat examples/fib.py; mypy examples/fib.py) | openai complete - | black - | tee tmp && mv tmp examples/fib.py

Fix Mypy Warnings (Correct Return Value)

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

This snippet demonstrates using the OpenAI CLI to fix the incompatible return value type error by changing the return statement to return 0.

$ (echo "Fix mypy warnings in this Python code"; cat examples/fib.py; mypy examples/fib.py) | openai complete - | black - | tee tmp && mv tmp examples/fib.py

Flake8 Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Dependencies for Flake8, a Python code linter.

flake8==7.1.1
mccabe==0.7.0
pycodestyle==2.12.1
pyflakes==3.2.0

Domdf-python-tools Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Dependencies for domdf-python-tools, a utility library.

domdf-python-tools==3.9.0
natsort==8.4.0
typing-extensions==4.12.2

Python Fibonacci Function with Added Return

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

The Python code for the Fibonacci function after adding a return None statement to address the mypy warning.

def Fibonacci(n: int) -> int:
    if n < 0:
        print("Incorrect input")
    # First Fibonacci number is 0
    elif n == 1:
        return 0
    # Second Fibonacci number is 1
    elif n == 2:
        return 1
    else:
        return Fibonacci(n - 1) + Fibonacci(n - 2)
    return None  # Added return statement

MyPy Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Dependencies for MyPy, a static type checker for Python.

mypy==1.15.0
mypy-extensions==1.0.0

Generated Fibonacci Unit Tests

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

Python unit tests for the Fibonacci function, generated using the OpenAI CLI. These tests cover various cases including negative input, base cases, and sequential Fibonacci numbers.

import unittest
from .fib import Fibonacci


class TestFibonacci(unittest.TestCase):
    def test_negative_input(self):
        self.assertEqual(Fibonacci(-1), None)

    def test_first_fibonacci_number(self):
        self.assertEqual(Fibonacci(1), 0)

    def test_second_fibonacci_number(self):
        self.assertEqual(Fibonacci(2), 1)

    def test_third_fibonacci_number(self):
        self.assertEqual(Fibonacci(3), 1)

    def test_fourth_fibonacci_number(self):
        self.assertEqual(Fibonacci(4), 2)

    def test_fifth_fibonacci_number(self):
        self.assertEqual(Fibonacci(5), 3)

    def test_sixth_fibonacci_number(self):
        self.assertEqual(Fibonacci(6), 5)

    def test_seventh_fibonacci_number(self):
        self.assertEqual(Fibonacci(7), 8)

    def test_eighth_fibonacci_number(self):
        self.assertEqual(Fibonacci(8), 13)

    def test_ninth_fibonacci_number(self):
        self.assertEqual(Fibonacci(9), 21)

Project Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/local.txt

Lists the core Python dependencies for the OpenAI CLI project, including version specifications. These are the packages directly required for the project's functionality and development.

asttokens==3.0.0
black==25.1.0
cfgv==3.4.0
click==8.1.8
decorator==5.1.1
executing==2.2.0
identify==2.6.7
ipdb==0.13.13
ipython==8.32.0
jedi==0.19.2
matplotlib-inline==0.1.7
nodeenv==1.9.1
parso==0.8.4
pathspec==0.12.1
pex==2.33.0
pexpect==4.9.0
pip-compile-multi==2.7.1
pip-tools==7.4.1
pre-commit==4.1.0
prompt-toolkit==3.0.50
ptyprocess==0.7.0
pure-eval==0.2.3
pyyaml==6.0.2
stack-data==0.6.3
toposort==1.10
traitlets==5.14.3
wcwidth==0.2.13
wheel==0.45.1

Generated Fibonacci Python Function

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

The Python code for the Fibonacci function, generated using the OpenAI CLI based on a natural language prompt.

def Fibonacci(n):
    if n < 0:
        print("Incorrect input")
    # First Fibonacci number is 0
    elif n == 1:
        return 0
    # Second Fibonacci number is 1
    elif n == 2:
        return 1
    else:
        return Fibonacci(n - 1) + Fibonacci(n - 2)

Virtualenv Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/ci.txt

Dependencies for virtualenv, a tool for creating isolated Python environments.

distlib==0.3.9
filelock==3.17.0
build[virtualenv]==1.2.2.post1
virtualenv==20.29.2

CI Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/local.txt

Specifies the dependencies required for the Continuous Integration (CI) environment of the OpenAI CLI project. These packages are typically used for building, testing, and deploying the project.

-r ci.txt

Unsafe Dependencies

Source: https://github.com/peterdemin/openai-cli/blob/main/requirements/local.txt

Highlights packages identified as potentially unsafe for inclusion in a requirements file. These might include build tools or packages that could have unintended side effects if not managed carefully.

pip==25.0.1
setuptools==75.8.0

Python Fibonacci Function with Corrected Return

Source: https://github.com/peterdemin/openai-cli/blob/main/README.rst

The Python code for the Fibonacci function after correcting the return statement to return 0 to satisfy mypy.

def Fibonacci(n: int) -> int:
    if n < 0:
        print("Incorrect input")
    # First Fibonacci number is 0
    elif n == 1:
        return 0
    # Second Fibonacci number is 1
    elif n == 2:
        return 1
    else:
        return Fibonacci(n - 1) + Fibonacci(n - 2)
    return 0  # Changed return statement to return 0


Install OpenAI SDK across Multiple Languages

Source: https://platform.openai.com/docs/quickstart_api-mode=chat

This section provides instructions for installing the official OpenAI SDK or client libraries for various programming languages. These installations are prerequisites for making API calls to the OpenAI service.

npm install openai
pip install openai
dotnet add package OpenAI
<dependency>
  <groupId>com.openai</groupId>
  <artifactId>openai-java</artifactId>
  <version>4.0.0</version>
</dependency>
import (
  "github.com/openai/openai-go" // imported as openai
)

Test Basic API Request - Go

Source: https://platform.openai.com/docs/libraries

Create and execute a simple API request using the OpenAI SDK in Go. This example demonstrates client initialization with API key, parameter setup with union types, and calling the Responses API with error handling.

package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go/v3"
	"github.com/openai/openai-go/v3/option"
	"github.com/openai/openai-go/v3/responses"
)

func main() {
	client := openai.NewClient(
		option.WithAPIKey("My API Key"), // or set OPENAI_API_KEY in your env
	)

	resp, err := client.Responses.New(context.TODO(), openai.ResponseNewParams{
		Model: "gpt-5-nano",
		Input: responses.ResponseNewParamsInputUnion{OfString: openai.String("Say this is a test")},
	})
	if err != nil {
		panic(err.Error())
	}

	fmt.Println(resp.OutputText())
}

Multi-turn Image Generation Workflow Example

Source: https://platform.openai.com/docs/guides/image-generation_api=responses&gallery=open&galleryItem=paper-sculpture-city

Complete example demonstrating how to perform multi-turn image generation, starting with an initial image request and then refining it in a follow-up turn using the previous response ID.

Step 1: Initial Image Generation Request

Generate an initial image based on a prompt.

Request

const response = await openai.responses.create({
  model: "gpt-5",
  input: "Generate an image of gray tabby cat hugging an otter with an orange scarf",
  tools: [{type: "image_generation"}]
});

Step 2: Extract and Save Generated Image

Filter the response output to get image data and save it to a file.

Request

const imageData = response.output
  .filter((output) => output.type === "image_generation_call")
  .map((output) => output.result);

if (imageData.length > 0) {
  const imageBase64 = imageData[0];
  const fs = await import("fs");
  fs.writeFileSync("cat_and_otter.png", Buffer.from(imageBase64, "base64"));
}

Step 3: Follow-up Request Using Previous Response ID

Refine the image by referencing the previous response.

Request

const response_followup = await openai.responses.create({
  model: "gpt-5",
  previous_response_id: response.id,
  input: "Now make it look realistic",
  tools: [{type: "image_generation"}]
});

Step 4: Extract and Save Refined Image

Process the follow-up response similarly to extract the refined image.

Request

const imageData_followup = response_followup.output
  .filter((output) => output.type === "image_generation_call")
  .map((output) => output.result);

if (imageData_followup.length > 0) {
  const imageBase64 = imageData_followup[0];
  const fs = await import("fs");
  fs.writeFileSync("cat_and_otter_realistic.png", Buffer.from(imageBase64, "base64"));
}

Key Parameters

  • previous_response_id: Links the follow-up request to the initial response, maintaining conversation context
  • input: Updated prompt for refining the image
  • tools: Must include {type: "image_generation"} for image capabilities

Make a basic OpenAI API call to generate text

Source: https://platform.openai.com/docs/quickstart

These examples demonstrate how to make a basic API request to the OpenAI `responses` endpoint across various programming languages. Each code block initializes an OpenAI client, configures a model and input prompt, and retrieves the generated text output.

import OpenAI from "openai";
const client = new OpenAI();

const response = await client.responses.create({
    model: "gpt-5-nano",
    input: "Write a one-sentence bedtime story about a unicorn."
});

console.log(response.output_text);
from openai import OpenAI
client = OpenAI()

response = client.responses.create(
    model="gpt-5-nano",
    input="Write a one-sentence bedtime story about a unicorn."
)

print(response.output_text)
using System;
using System.Threading.Tasks;
using OpenAI;

class Program
{
    static async Task Main()
    {
        var client = new OpenAIClient(
            Environment.GetEnvironmentVariable("OPENAI_API_KEY")
        );

        var response = await client.Responses.CreateAsync(new ResponseCreateRequest
        {
            Model = "gpt-5-nano",
            Input = "Say 'this is a test.'"
        });

        Console.WriteLine($"[ASSISTANT]: {response.OutputText()}");
    }
}
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.responses.Response;
import com.openai.models.responses.ResponseCreateParams;

public class Main {
    public static void main(String[] args) {
        OpenAIClient client = OpenAIOkHttpClient.fromEnv();

        ResponseCreateParams params = ResponseCreateParams.builder()
                .input("Say this is a test")
                .model("gpt-5-nano")
                .build();

        Response response = client.responses().create(params);
        System.out.println(response.outputText());
    }
}
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go/v3"
	"github.com/openai/openai-go/v3/option"
	"github.com/openai/openai-go/v3/responses"
)

func main() {
	client := openai.NewClient(
		option.WithAPIKey("My API Key"), // or set OPENAI_API_KEY in your env
	)

	resp, err := client.Responses.New(context.TODO(), openai.ResponseNewParams{
		Model: "gpt-5-nano",
		Input: responses.ResponseNewParamsInputUnion{OfString: openai.String("Say this is a test")},
	})
	if err != nil {
		panic(err.Error())
	}

	fmt.Println(resp.OutputText())
}

Client SDK Examples - JavaScript

Source: https://platform.openai.com/docs/guides/latest-model_gallery=open&galleryItem=esports-tournament-landing-page

JavaScript/Node.js SDK examples for creating responses with reasoning effort and verbosity parameters using the OpenAI client library.

Reasoning Effort Example

import OpenAI from "openai";
const openai = new OpenAI();

const response = await openai.responses.create({
  model: "gpt-5.1",
  input: "How much gold would it take to coat the Statue of Liberty in a 1mm layer?",
  reasoning: {
    effort: "none"
  }
});

console.log(response);

Verbosity Control Example

import OpenAI from "openai";
const openai = new OpenAI();

const response = await openai.responses.create({
  model: "gpt-5",
  input: "What is the answer to the ultimate question of life, the universe, and everything?",
  text: {
    verbosity: "low"
  }
});

console.log(response);

Configuration Options

  • model: Specify the GPT model version ("gpt-5.1", "gpt-5", "gpt-5.2")
  • input: Your prompt or question
  • reasoning.effort: Set to "none", "medium", or "high"
  • text.verbosity: Set to "low", "medium", or "high"

Installation

npm install openai

Client SDK Examples - Python

Source: https://platform.openai.com/docs/guides/latest-model_gallery=open&galleryItem=esports-tournament-landing-page

Python SDK examples for creating responses with reasoning effort and verbosity parameters using the OpenAI client library.

Reasoning Effort Example

from openai import OpenAI
client = OpenAI()

response = client.responses.create(
    model="gpt-5.1",
    input="How much gold would it take to coat the Statue of Liberty in a 1mm layer?",
    reasoning={
        "effort": "none"
    }
)

print(response)

Verbosity Control Example

from openai import OpenAI
client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="What is the answer to the ultimate question of life, the universe, and everything?",
    text={
        "verbosity": "low"
    }
)

print(response)

Configuration Options

  • model: Specify the GPT model version ("gpt-5.1", "gpt-5", "gpt-5.2")
  • input: Your prompt or question
  • reasoning: Dictionary with "effort" key set to "none", "medium", or "high"
  • text: Dictionary with "verbosity" key set to "low", "medium", or "high"

Installation

pip install openai

Create OpenAI Response with Function Tool in C#

Source: https://platform.openai.com/docs/quickstart

Initialize an OpenAI response client with a custom function tool for weather queries. The example demonstrates tool definition with strict mode enabled, function parameters validation, and response creation with user input. Requires OPENAI_API_KEY environment variable and uses System.Text.Json for serialization.

using System.Text.Json;
using OpenAI.Responses;

string key = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;
OpenAIResponseClient client = new(model: "gpt-5", apiKey: key);

ResponseCreationOptions options = new();
options.Tools.Add(ResponseTool.CreateFunctionTool(
        functionName: "get_weather",
        functionDescription: "Get current temperature for a given location.",
        functionParameters: BinaryData.FromObjectAsJson(new
        {
            type = "object",
            properties = new
            {
                location = new
                {
                    type = "string",
                    description = "City and country e.g. Bogotá, Colombia"
                }
            },
            required = new[] { "location" },
            additionalProperties = false
        }),
        strictModeEnabled: true
    )
);

OpenAIResponse response = (OpenAIResponse)client.CreateResponse([
    ResponseItem.CreateUserMessageItem([
        ResponseContentPart.CreateInputTextPart("What is the weather like in Paris today?")
    ])
], options);

Console.WriteLine(JsonSerializer.Serialize(response.OutputItems[0]));

Create Language Triage Agent with OpenAI Agents SDK

Source: https://platform.openai.com/docs/quickstart

This example illustrates how to build a language triage agent using the OpenAI Agents SDK. It defines multiple specialized agents (e.g., Spanish, English) and a main triage agent configured to handoff requests to the appropriate sub-agent based on the input language. This allows for dynamic routing and processing of user requests, requiring the @openai/agents or agents library.

import { Agent, run } from '@openai/agents';

const spanishAgent = new Agent({
    name: 'Spanish agent',
    instructions: 'You only speak Spanish.',
});

const englishAgent = new Agent({
    name: 'English agent',
    instructions: 'You only speak English',
});

const triageAgent = new Agent({
    name: 'Triage agent',
    instructions:
        'Handoff to the appropriate agent based on the language of the request.',
    handoffs: [spanishAgent, englishAgent],
});

const result = await run(triageAgent, 'Hola, ¿cómo estás?');
console.log(result.finalOutput);
from agents import Agent, Runner
import asyncio

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)


async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())

Upload PDF from URL and Summarize with OpenAI (C#)

Source: https://platform.openai.com/docs/quickstart

This C# example demonstrates how to upload a PDF file directly from a URL to OpenAI using a stream and then use the uploaded file's ID to prompt a GPT model for a summary of its key points. It showcases streaming a file from an HTTP source and integrating it into an OpenAI model request.

using OpenAI.Files;
using OpenAI.Responses;

string key = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;
OpenAIResponseClient client = new(model: "gpt-5", apiKey: key);

using HttpClient http = new();
using Stream stream = await http.GetStreamAsync("https://www.berkshirehathaway.com/letters/2024ltr.pdf");
OpenAIFileClient files = new(key);
OpenAIFile file = files.UploadFile(stream, "2024ltr.pdf", FileUploadPurpose.UserData);

OpenAIResponse response = (OpenAIResponse)client.CreateResponse([
    ResponseItem.CreateUserMessageItem([
        ResponseContentPart.CreateInputTextPart("Analyze the letter and provide a summary of the key points."),
        ResponseContentPart.CreateInputFilePart(file.Id),
    ]),
]);

Console.WriteLine(response.GetOutputText());

Send Audio to OpenAI GPT-Audio Model

Source: https://platform.openai.com/docs/guides/audio_api-mode=chat

This demonstrates how to send audio data, along with a text prompt, to the OpenAI GPT-Audio model for processing. Examples include fetching and base64 encoding an audio file in Python, and constructing the corresponding API request using both Python and cURL. It assumes the openai and requests libraries are installed for the Python example and a valid OpenAI API key for both.

import base64

import requests
from openai import OpenAI

client = OpenAI()

url = "https://cdn.openai.com/API/docs/audio/alloy.wav"
response = requests.get(url)
response.raise_for_status()
wav_data = response.content
encoded_string = base64.b64encode(wav_data).decode('utf-8')

completion = client.chat.completions.create(
    model="gpt-audio",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is in this recording?"
                },
                {
                    "type": "input_audio",
                    "input_audio": {
                        "data": encoded_string,
                        "format": "wav"
                    }
                }
            ]
        }
    ]
)

print(completion.choices[0].message)
curl "https://api.openai.com/v1/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -d '{
      "model": "gpt-audio",
      "modalities": ["text", "audio"],
      "audio": { "voice": "alloy", "format": "wav" },
      "messages": [
        {
          "role": "user",
          "content": [
            { "type": "text", "text": "What is in this recording?" },
            {
              "type": "input_audio",
              "input_audio": {
                "data": "<base64 bytes here>",
                "format": "wav"
              }
            }
          ]
        }
      ]
    }'

Analyze File from URL using OpenAI API

Source: https://platform.openai.com/docs/quickstart_api-mode=responses

This snippet shows how to send a file URL (e.g., a PDF document) to an OpenAI model (gpt-5) for analysis, such as summarizing key points from a document. Examples are provided for cURL, JavaScript, and Python, illustrating how to construct the API request with a user role, text prompt, and the file URL.

curl "https://api.openai.com/v1/responses" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -d '{
        "model": "gpt-5",
        "input": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": "Analyze the letter and provide a summary of the key points."
                    },
                    {
                        "type": "input_file",
                        "file_url": "https://www.berkshirehathaway.com/letters/2024ltr.pdf"
                    }
                ]
            }
        ]
    }'
import OpenAI from "openai";
const client = new OpenAI();

const response = await client.responses.create({
    model: "gpt-5",
    input: [
        {
            role: "user",
            content: [
                {
                    type: "input_text",
                    text: "Analyze the letter and provide a summary of the key points.",
                },
                {
                    type: "input_file",
                    file_url: "https://www.berkshirehathaway.com/letters/2024ltr.pdf",
                },
            ],
        },
    ],
});

console.log(response.output_text);
from openai import OpenAI
client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input=[
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "Analyze the letter and provide a summary of the key points.",
                },
                {
                    "type": "input_file",
                    "file_url": "https://www.berkshirehathaway.com/letters/2024ltr.pdf",
                },
            ],
        },
    ]
)

print(response.output_text)

Define and Call a Function Tool (OpenAI API)

Source: https://platform.openai.com/docs/quickstart_api-mode=responses

These examples demonstrate how to define a custom function tool, such as 'get_weather', and instruct the OpenAI model to use it. The function schema, including parameter types and descriptions, is provided to the API. This enables the model to respond by suggesting a call to the defined function based on the user's input.

using System.Text.Json;
using OpenAI.Responses;

string key = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;
OpenAIResponseClient client = new(model: "gpt-5", apiKey: key);

ResponseCreationOptions options = new();
options.Tools.Add(ResponseTool.CreateFunctionTool(
        functionName: "get_weather",
        functionDescription: "Get current temperature for a given location.",
        functionParameters: BinaryData.FromObjectAsJson(new
        {
            type = "object",
            properties = new
            {
                location = new
                {                   
                    type = "string",
                    description = "City and country e.g. Bogotá, Colombia"
                }
            },
            required = new[] { "location" },
            additionalProperties = false
        }),
        strictModeEnabled: true
    )
);

OpenAIResponse response = (OpenAIResponse)client.CreateResponse([
    ResponseItem.CreateUserMessageItem([
        ResponseContentPart.CreateInputTextPart("What is the weather like in Paris today?")
    ])
], options);

Console.WriteLine(JsonSerializer.Serialize(response.OutputItems[0]));
curl -X POST https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "input": [
      {"role": "user", "content": "What is the weather like in Paris today?"}
    ],
    "tools": [
      {
        "type": "function",
        "name": "get_weather",
        "description": "Get current temperature for a given location.",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City and country e.g. Bogotá, Colombia"
            }
          },
          "required": ["location"],
          "additionalProperties": false
        },
        "strict": true
      }
    ]
  }'
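
A response to a request like this contains a function_call output item rather than final text; the application executes the function and returns the result in a follow-up request. A hedged JavaScript sketch of that round trip (getWeather is a hypothetical helper; the rest follows the Responses API function-calling shape):

import OpenAI from "openai";
const client = new OpenAI();

const tools = [
  {
    type: "function",
    name: "get_weather",
    description: "Get current temperature for a given location.",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "City and country e.g. Bogotá, Colombia",
        },
      },
      required: ["location"],
      additionalProperties: false,
    },
    strict: true,
  },
];

const response = await client.responses.create({
  model: "gpt-5",
  input: [{ role: "user", content: "What is the weather like in Paris today?" }],
  tools,
});

// The model replies with a function_call item instead of final text.
const call = response.output.find((item) => item.type === "function_call");
if (call) {
  const args = JSON.parse(call.arguments);
  const result = await getWeather(args.location); // hypothetical implementation

  // Return the tool output and let the model produce the final answer.
  const followup = await client.responses.create({
    model: "gpt-5",
    previous_response_id: response.id,
    input: [
      {
        type: "function_call_output",
        call_id: call.call_id,
        output: JSON.stringify(result),
      },
    ],
    tools,
  });
  console.log(followup.output_text);
}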

Install OpenAI Agents SDK with npm

Source: https://platform.openai.com/docs/guides/voice-agents

Install the OpenAI Agents package from npm to get started with building voice agents in TypeScript. This package includes Realtime Agents for speech-to-speech voice agent development with built-in transport method selection (WebRTC for browser, WebSocket for server-side).

npm install @openai/agents

Upload Local File and Query OpenAI Model

Source: https://platform.openai.com/docs/quickstart

These examples demonstrate how to upload a local file to the OpenAI API for 'user_data' purposes and subsequently incorporate that file into a model's input for response generation. The file's ID is referenced in the model request to enable the AI to process its content, answering specific questions about the document.

curl https://api.openai.com/v1/files \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -F purpose="user_data" \
    -F file="@draconomicon.pdf"

curl "https://api.openai.com/v1/responses" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -d '{
        "model": "gpt-5",
        "input": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "input_file",
                        "file_id": "file-6F2ksmvXxt4VdoqmHRw6kL"
                    },
                    {
                        "type": "input_text",
                        "text": "What is the first dragon in the book?"
                    }
                ]
            }
        ]
    }'
import fs from "fs";
import OpenAI from "openai";
const client = new OpenAI();

const file = await client.files.create({
    file: fs.createReadStream("draconomicon.pdf"),
    purpose: "user_data",
});

const response = await client.responses.create({
    model: "gpt-5",
    input: [
        {
            role: "user",
            content: [
                {
                    type: "input_file",
                    file_id: file.id,
                },
                {
                    type: "input_text",
                    text: "What is the first dragon in the book?",
                },
            ],
        },
    ],
});

console.log(response.output_text);
from openai import OpenAI
client = OpenAI()

file = client.files.create(
    file=open("draconomicon.pdf", "rb"),
    purpose="user_data"
)

response = client.responses.create(
    model="gpt-5",
    input=[
        {
            "role": "user",
            "content": [
                {
                    "type": "input_file",
                    "file_id": file.id,
                },
                {
                    "type": "input_text",
                    "text": "What is the first dragon in the book?",
                },
            ]
        }
    ]
)

print(response.output_text)
using OpenAI.Files;
using OpenAI.Responses;

string key = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;
OpenAIResponseClient client = new(model: "gpt-5", apiKey: key);

OpenAIFileClient files = new(key);
OpenAIFile file = files.UploadFile("draconomicon.pdf", FileUploadPurpose.UserData);

OpenAIResponse response = (OpenAIResponse)client.CreateResponse([
    ResponseItem.CreateUserMessageItem([
        ResponseContentPart.CreateInputFilePart(file.Id),
        ResponseContentPart.CreateInputTextPart("What is the first dragon in the book?"),
    ]),
]);

Console.WriteLine(response.GetOutputText());

Hono Web Server Application Setup

Source: https://platform.openai.com/docs/guides/predicted-outputs

Creates a basic Hono web server with static file serving and API routes. It includes a GET endpoint at '/api' returning 'Hello Hono!' and serves static files from a built UI directory on port 3000; the guide uses it as the baseline code for demonstrating Predicted Outputs.

import { serveStatic } from "@hono/node-server/serve-static";
import { serve } from "@hono/node-server";
import { Hono } from "hono";

const app = new Hono();

app.get("/api", (c) => {
  return c.text("Hello Hono!");
});

// You will need to build the client code first `pnpm run ui:build`
app.use(
  "/*",
  serveStatic({
    rewriteRequestPath: (path) => `./dist${path}`,
  })
);

const port = 3000;
console.log(`Server is running on port ${port}`);

serve({
  fetch: app.fetch,
  port,
});
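
For context, a hedged sketch of the Predicted Outputs feature this server is used to demonstrate: the `prediction` parameter on chat completions passes the expected bulk of the response ahead of time to speed up generation. The refactoring prompt and file contents here are illustrative:

```typescript
import OpenAI from 'openai';

const client = new OpenAI();
const existingCode = 'const app = new Hono();'; // file contents being edited

// Most of the rewritten file is known ahead of time, so we pass it
// as a prediction; only the changed spans need to be generated.
const completion = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: `Rename 'app' to 'server' in this code and return only code:\n${existingCode}`,
    },
  ],
  prediction: { type: 'content', content: existingCode },
});

console.log(completion.choices[0].message.content);
```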

Call Remote MCP Server with C# SDK

Source: https://platform.openai.com/docs/quickstart_api-mode=chat

Initialize OpenAI response client and add MCP tool with server configuration and approval policy. Creates a response with user message and retrieves output text.

using OpenAI.Responses;

string key = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;
OpenAIResponseClient client = new(model: "gpt-5", apiKey: key);

ResponseCreationOptions options = new();
options.Tools.Add(ResponseTool.CreateMcpTool(
    serverLabel: "dmcp",
    serverUri: new Uri("https://dmcp-server.deno.dev/sse"),
    toolCallApprovalPolicy: new McpToolCallApprovalPolicy(GlobalMcpToolCallApprovalPolicy.NeverRequireApproval)
));

OpenAIResponse response = (OpenAIResponse)client.CreateResponse([
    ResponseItem.CreateUserMessageItem([
        ResponseContentPart.CreateInputTextPart("Roll 2d4+1")
    ])
], options);

Console.WriteLine(response.GetOutputText());
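
For comparison, a sketch of the same remote MCP call through the JavaScript SDK's Responses API; the tool shape follows the hosted MCP tool documented elsewhere in these docs, so treat the exact field set as an assumption:

```typescript
import OpenAI from 'openai';

const client = new OpenAI();

// Attach the remote MCP server as a tool and skip per-call approvals.
const response = await client.responses.create({
  model: 'gpt-5',
  tools: [
    {
      type: 'mcp',
      server_label: 'dmcp',
      server_url: 'https://dmcp-server.deno.dev/sse',
      require_approval: 'never',
    },
  ],
  input: 'Roll 2d4+1',
});

console.log(response.output_text);
```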

Video Download Implementation Examples

Source: https://platform.openai.com/docs/guides/video-generation_gallery=open&galleryItem=coloring

Complete code examples demonstrating how to poll video generation status and download completed videos in JavaScript and Python, including progress tracking and error handling.

## Video Download - JavaScript Implementation

### Description
Complete example showing video creation, status polling with progress tracking, and MP4 file download in Node.js.

### Code Example
```javascript
import fs from 'fs';
import OpenAI from 'openai';

const openai = new OpenAI();

let video = await openai.videos.create({
    model: 'sora-2',
    prompt: "A video of the words 'Thank you' in sparkling letters",
});

console.log('Video generation started: ', video);
let progress = video.progress ?? 0;

while (video.status === 'in_progress' || video.status === 'queued') {
    video = await openai.videos.retrieve(video.id);
    progress = video.progress ?? 0;

    // Display progress bar
    const barLength = 30;
    const filledLength = Math.floor((progress / 100) * barLength);
    const bar = '='.repeat(filledLength) + '-'.repeat(barLength - filledLength);
    const statusText = video.status === 'queued' ? 'Queued' : 'Processing';

    process.stdout.write(`\r${statusText}: [${bar}] ${progress.toFixed(1)}%`);

    await new Promise((resolve) => setTimeout(resolve, 2000));
}

process.stdout.write('\n');

if (video.status === 'failed') {
    console.error('Video generation failed');
    process.exit(1); // a top-level return is invalid in an ES module
}

console.log('Video generation completed: ', video);
console.log('Downloading video content...');

const content = await openai.videos.downloadContent(video.id);
const body = content.arrayBuffer();
const buffer = Buffer.from(await body);

fs.writeFileSync('video.mp4', buffer);
console.log('Wrote video.mp4');
```

Key Features

  • Automatic status polling with 2-second intervals
  • Real-time progress bar display
  • Error handling for failed jobs
  • Direct file write to disk after download

--------------------------------

### Server event: response.mcp_call.in_progress

Source: https://platform.openai.com/docs/api-reference/realtime-beta-server-events/conversation/item/deleted

Returned when an MCP tool call has started and is in progress.

## Object: response.mcp_call.in_progress

### Description
Returned when an MCP tool call has started and is in progress.

### Fields
- **event_id** (string) - The unique ID of the server event.
- **item_id** (string) - The ID of the MCP tool call item.
- **output_index** (integer) - The index of the output item in the response.
- **type** (string) - The event type, must be `response.mcp_call.in_progress`.

### Example
{
  "event_id": "event_6301",
  "type": "response.mcp_call.in_progress",
  "output_index": 0,
  "item_id": "mcp_call_001"
}
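
As a hedged illustration of consuming this event (the event type and fields follow the example above; the WebSocket URL, model parameter, and wiring are assumptions, not part of the reference):

```typescript
import WebSocket from 'ws';

// Connect to a realtime session (URL and auth details are illustrative).
const ws = new WebSocket('wss://api.openai.com/v1/realtime?model=gpt-realtime', {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});

ws.on('message', (raw) => {
  const event = JSON.parse(raw.toString());
  // Dispatch on the server event type documented above.
  if (event.type === 'response.mcp_call.in_progress') {
    console.log(`MCP call ${event.item_id} started (output ${event.output_index})`);
  }
});
```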


Download Video Content - Python Example

Source: https://platform.openai.com/docs/guides/video-generation_gallery=open&galleryItem=Upside-Down-City

Complete Python example demonstrating video creation, status polling with progress visualization, and downloading the generated MP4 file using the OpenAI Python SDK.

## Python: Complete Video Generation & Download Workflow

### Description
Full example showing how to create a video, monitor progress with status updates, and download the final MP4 file.

### Code Example

```python
from openai import OpenAI
import sys
import time

openai = OpenAI()

video = openai.videos.create(
    model="sora-2",
    prompt="A video of a cool cat on a motorcycle in the night",
)

print("Video generation started:", video)

progress = getattr(video, "progress", 0)
bar_length = 30

while video.status in ("in_progress", "queued"):
    # Refresh status
    video = openai.videos.retrieve(video.id)
    progress = getattr(video, "progress", 0)

    filled_length = int((progress / 100) * bar_length)
    bar = "=" * filled_length + "-" * (bar_length - filled_length)
    status_text = "Queued" if video.status == "queued" else "Processing"

    sys.stdout.write(f"\r{status_text}: [{bar}] {progress:.1f}%")
    sys.stdout.flush()
    time.sleep(2)

# Move to next line after progress loop
sys.stdout.write("\n")

if video.status == "failed":
    message = getattr(
        getattr(video, "error", None), "message", "Video generation failed"
    )
    print(message)
    sys.exit(1)  # a bare return is invalid at module level

print("Video generation completed:", video)
print("Downloading video content...")

content = openai.videos.download_content(video.id, variant="video")
content.write_to_file("video.mp4")

print("Wrote video.mp4")

Workflow Steps

  1. Create video with specified prompt
  2. Poll status every 2 seconds until completion
  3. Display progress bar with current percentage
  4. Check for failed status with error handling
  5. Download content using download_content() method
  6. Write binary data to file using write_to_file()

--------------------------------

### Migration: Move Chats to Conversations and Responses

Source: https://platform.openai.com/docs/assistants

Second step of API migration. This guide demonstrates how to backfill existing threads into the new conversation and response model with code examples for data transformation.

## Step 2: Move User Chats to Conversations and Responses

### Description
Migrate user threads from the Assistants API to conversations in the Responses API. New user chats should be created directly as conversations.

### Migration Strategy
- Route new user threads to conversations and responses API
- Backfill existing threads as necessary
- No automated migration tool provided; manual conversion recommended

### Backfilling Existing Threads

Retrieve all messages from an existing thread and convert them to conversation format:

```python
thread_id = "thread_EIpHrTAVe0OzoLQg3TXfvrkG"
messages = []

# Step 1: Retrieve all messages from thread
for page in openai.beta.threads.messages.list(thread_id=thread_id, order="asc").iter_pages():
    messages += page.data

# Step 2: Convert messages to conversation format
items = []
for m in messages:
    item = {"role": m.role}
    item_content = []

    for content in m.content:
        match content.type:
            case "text":
                item_content_type = "input_text" if m.role == "user" else "output_text"
                item_content += [{"type": item_content_type, "text": content.text.value}]
            case "image_url":
                item_content += [
                    {
                        "type": "input_image",
                        "image_url": content.image_url.url,
                        "detail": content.image_url.detail
                    }
                ]

    item.update({"content": item_content})
    items.append(item)

# Step 3: Create conversation with converted items
conversation = openai.conversations.create(items=items)
```

Content Type Mapping

  • User text messages: Convert to "input_text" type
  • Assistant text messages: Convert to "output_text" type
  • Images: Convert to "input_image" type with URL and detail

Post-Migration

  • New user interactions should use the conversations and responses API
  • Backfilled conversations maintain message history
  • Update application code to use new API endpoints
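
A hedged TypeScript sketch of the same backfill flow, assuming the Node SDK's auto-paginating messages list and a conversations.create({ items }) call analogous to the Python example above:

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();
const threadId = 'thread_...'; // an existing Assistants thread (placeholder)

// Step 1: Retrieve all messages from the thread, oldest first.
const items: Array<{ role: string; content: unknown[] }> = [];
for await (const m of openai.beta.threads.messages.list(threadId, { order: 'asc' })) {
  // Step 2: Convert each message part per the content type mapping above.
  const content = m.content.flatMap((part) => {
    if (part.type === 'text') {
      const type = m.role === 'user' ? 'input_text' : 'output_text';
      return [{ type, text: part.text.value }];
    }
    if (part.type === 'image_url') {
      return [
        {
          type: 'input_image',
          image_url: part.image_url.url,
          detail: part.image_url.detail,
        },
      ];
    }
    return [];
  });
  items.push({ role: m.role, content });
}

// Step 3: Create the conversation from the converted items.
const conversation = await openai.conversations.create({ items });
```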

--------------------------------

### Customize Start Screen Text and Placeholder in ChatKit

Source: https://platform.openai.com/docs/guides/chatkit-themes

Customize the composer placeholder text and start screen greeting to guide users on what to ask or input. This helps set expectations and provides context for new conversations.

```typescript
const options: Partial<ChatKitOptions> = {
  composer: {
    placeholder: "Ask anything about your data…",
  },
  startScreen: {
    greeting: "Welcome to FeedbackBot!",
  },
};
```

Add Starter Prompts to ChatKit Start Screen

Source: https://platform.openai.com/docs/guides/chatkit-themes

Define suggested prompt ideas that appear when users start a new conversation. Each prompt includes a name, the actual prompt text, and an icon to guide user interactions.

```typescript
const options: Partial<ChatKitOptions> = {
  startScreen: {
    greeting: "What can I help you build today?",
    prompts: [
      { 
        name: "Check on the status of a ticket", 
        prompt: "Can you help me check on the status of a ticket?", 
        icon: "search"
      },
      { 
        name: "Create Ticket", 
        prompt: "Can you help me create a new support ticket?", 
        icon: "write"
      },
    ],
  },
};
```
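
How these options reach the UI depends on your embed. A minimal sketch, assuming the ChatKit web component is present on the page and accepts options via a `setOptions`-style call (both are assumptions of this sketch, not confirmed by this page):

```typescript
// Grab the ChatKit element and apply the options defined above.
const chatkit = document.querySelector('openai-chatkit') as any;

chatkit?.setOptions({
  composer: { placeholder: 'Ask anything about your data…' },
  startScreen: {
    greeting: 'What can I help you build today?',
    prompts: [
      {
        name: 'Create Ticket',
        prompt: 'Can you help me create a new support ticket?',
        icon: 'write',
      },
    ],
  },
});
```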

Define Developer Message for Code Generation with Markdown and XML

Source: https://platform.openai.com/docs/guides/prompt-engineering

This example illustrates how to structure a 'developer' message for a coding assistant using Markdown headers for distinct sections (Identity, Instructions, Examples) and XML tags for example input/output. It guides the model on JavaScript variable naming conventions (snake_case, 'var' keyword) and response formatting.

# Identity

You are a coding assistant that helps enforce the use of snake case
variables in JavaScript code, and writing code that will run in
Internet Explorer version 6.

# Instructions

* When defining variables, use snake case names (e.g. my_variable)
  instead of camel case names (e.g. myVariable).
* To support old browsers, declare variables using the older
  "var" keyword.
* Do not give responses with Markdown formatting, just return
  the code as requested.

# Examples

<user_query>
How do I declare a string variable for a first name?
</user_query>

<assistant_response>
var first_name = "Anna";
</assistant_response>
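
To make the example concrete, a hedged sketch of sending this developer message through the Responses API; passing it as a 'developer' role input item is one way to wire it up, and the file name here is illustrative:

```typescript
import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI();

// Load the structured developer message shown above from a file.
const developerMessage = fs.readFileSync('developer_message.md', 'utf8');

const response = await client.responses.create({
  model: 'gpt-5',
  input: [
    { role: 'developer', content: developerMessage },
    { role: 'user', content: 'How do I declare a string variable for a last name?' },
  ],
});

console.log(response.output_text); // e.g. var last_name = "...";
```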


### List OpenAI Sora Videos (cURL)

Source: https://platform.openai.com/docs/guides/video-generation_gallery=open&galleryItem=chameleon

These cURL commands demonstrate how to retrieve a list of your generated videos from the OpenAI Sora API. The first example shows a default `GET` request. The second example illustrates how to use query parameters like `limit`, `after`, and `order` for pagination and sorting the video list.

```bash
curl "https://api.openai.com/v1/videos" \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq .
curl "https://api.openai.com/v1/videos?limit=20&after=video_123&order=asc" \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq .

Retrieve Specific OpenAI Chat Completion by ID

Source: https://platform.openai.com/docs/api-reference/chat/get

These examples demonstrate how to retrieve a specific OpenAI Chat Completion object using its unique identifier. The cURL command performs a direct HTTP GET request, while the Python SDK example uses the client.chat.completions.retrieve() method. Both methods require an OpenAI API key for authentication.

curl https://api.openai.com/v1/chat/completions/chatcmpl-abc123 \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json"

from openai import OpenAI
client = OpenAI()

completions = client.chat.completions.list()
first_id = completions.data[0].id
first_completion = client.chat.completions.retrieve(completion_id=first_id)
print(first_completion)

Code Interpreter with Containers - Quick Reference

Source: https://platform.openai.com/docs/guides/tools-code-interpreter

Quick reference guide showing Python and JavaScript examples for creating containers and executing code with the Code Interpreter API.

## Code Interpreter with Containers - Implementation Examples

### Python Implementation
```python
from openai import OpenAI
client = OpenAI()

# Step 1: Create a container
container = client.containers.create(
    name="test-container",
    memory_limit="4g"
)

# Step 2: Execute code in the container
response = client.responses.create(
    model="gpt-4.1",
    tools=[
        {
            "type": "code_interpreter",
            "container": container.id
        }
    ],
    tool_choice="required",
    input="use the python tool to calculate what is 4 * 3.82. and then find its square root and then find the square root of that result"
)

# Step 3: Display results
print(response.output_text)
```

### JavaScript Implementation

```javascript
import OpenAI from "openai";
const client = new OpenAI();

// Step 1: Create a container
const container = await client.containers.create({
    name: "test-container",
    memory_limit: "4g"
});

// Step 2: Execute code in the container
const resp = await client.responses.create({
    model: "gpt-4.1",
    tools: [
        {
            type: "code_interpreter",
            container: container.id
        }
    ],
    tool_choice: "required",
    input: "use the python tool to calculate what is 4 * 3.82. and then find its square root and then find the square root of that result"
});

// Step 3: Display results
console.log(resp.output_text);
```

Memory Limit Options

  • 1g - Default, suitable for lightweight operations
  • 4g - Recommended for most use cases
  • 16g - For resource-intensive computations
  • 64g - For large-scale data processing

Container Lifecycle

  1. Create container with specified memory limit
  2. Use container.id in responses API calls
  3. Container automatically expires after 20 minutes of inactivity
  4. Container activity (any operation) refreshes the expiration timer
  5. Expired containers cannot be reactivated; create a new one instead
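
A short sketch of this lifecycle in practice, reusing the `containers.create` and `responses.create` calls from the examples above (memory options omitted; error handling elided):

```typescript
import OpenAI from 'openai';

const client = new OpenAI();

// Create once, then reuse the same container across calls; each call
// refreshes the 20-minute inactivity timer.
const container = await client.containers.create({ name: 'session-container' });

for (const question of ['what is 4 * 3.82?', 'and its square root?']) {
  const resp = await client.responses.create({
    model: 'gpt-4.1',
    tools: [{ type: 'code_interpreter', container: container.id }],
    input: question,
  });
  console.log(resp.output_text);
}

// Once expired, a container cannot be reactivated: create a new one.
```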

Vercel AI SDK

The Vercel AI SDK is a comprehensive TypeScript toolkit for building AI-powered applications with language models. It provides a unified interface for interacting with multiple AI providers (OpenAI, Anthropic, Google, and 40+ others) and offers framework-agnostic hooks for React, Vue, Svelte, and Angular. The SDK handles streaming, tool calling, structured output generation, and agentic workflows with built-in support for multi-step reasoning and complex interactions.

The SDK consists of three main layers: the Core AI module (ai) for server-side model interactions, framework-specific UI modules (@ai-sdk/react, @ai-sdk/vue, etc.) for building chat interfaces, and provider packages (@ai-sdk/openai, @ai-sdk/anthropic, etc.) for model access. It supports both streaming and non-streaming generation, automatic tool execution with approval workflows, structured data extraction using Zod schemas, stateful agent systems that can execute multi-step tasks autonomously, Model Context Protocol (MCP) integration for connecting to external tools and services, and multimedia capabilities including image generation, text-to-speech, audio transcription, and document reranking.

generateText - Generate text with tool calls

Generate text responses from language models with automatic tool calling and multi-step execution. Returns complete response after all tool calls are executed.

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'What is the weather in San Francisco and what should I wear?',
  tools: {
    getWeather: {
      description: 'Get the weather for a location',
      parameters: z.object({
        city: z.string().describe('The city name')
      }),
      execute: async ({ city }) => {
        // API call to weather service
        return { temperature: 72, condition: 'sunny' };
      }
    }
  },
  maxRetries: 2,
  temperature: 0.7
});

console.log(result.text); // Final text after tool execution
console.log(result.toolCalls); // All tool calls made
console.log(result.usage); // Token usage statistics
console.log(result.steps); // All generation steps

streamText - Stream text with real-time tool execution

Stream text generation with real-time tool calling and event callbacks. Returns stream result with multiple consumption methods.

import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const result = streamText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  system: 'You are a helpful assistant with access to real-time data.',
  prompt: 'Search for recent news about AI and summarize the top 3 articles.',
  tools: {
    searchWeb: {
      description: 'Search the web for information',
      parameters: z.object({
        query: z.string()
      }),
      execute: async ({ query }) => {
        // Perform web search
        return { results: ['Article 1...', 'Article 2...', 'Article 3...'] };
      }
    }
  },
  onChunk: async ({ chunk }) => {
    if (chunk.type === 'text-delta') {
      process.stdout.write(chunk.text);
    }
  },
  onFinish: async ({ text, toolCalls, usage, steps }) => {
    console.log('\n\nGeneration complete');
    console.log('Total steps:', steps.length);
    console.log('Total tokens:', usage.totalTokens);
  }
});

// Multiple ways to consume the stream
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Or await the aggregated results via promise properties on the stream result
const text = await result.text;
const toolResults = await result.toolResults;

generateObject - Extract structured data

Generate type-safe structured objects from language models using Zod schemas. Automatically validates and parses model output.

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({
        name: z.string(),
        amount: z.string(),
        unit: z.string()
      })),
      steps: z.array(z.string()),
      prepTime: z.number().describe('Preparation time in minutes'),
      cookTime: z.number().describe('Cooking time in minutes')
    })
  }),
  prompt: 'Generate a vegetarian lasagna recipe for 4 people.',
  mode: 'json', // 'auto', 'json', or 'tool'
  temperature: 0.3
});

console.log(result.object.recipe.name);
console.log(result.object.recipe.ingredients);
console.log(result.usage);

streamObject - Stream structured data

Stream partial structured objects as they're generated. Enables progressive UI updates while maintaining type safety.

import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({
    characters: z.array(z.object({
      name: z.string(),
      class: z.string(),
      bio: z.string()
    }))
  }),
  prompt: 'Generate 3 fantasy RPG characters with detailed backgrounds.'
});

// Stream partial objects
for await (const partialObject of result.partialObjectStream) {
  console.clear();
  console.log('Current progress:', JSON.stringify(partialObject, null, 2));
}

// Get final validated object
const { object } = await result;
console.log('Final result:', object);

ToolLoopAgent - Autonomous multi-step agents

Create reusable agents that can execute multi-step workflows with tools. ToolLoopAgent automatically handles tool calling loops and can be used across your application.

import { ToolLoopAgent } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const researchAgent = new ToolLoopAgent({
  model: openai('gpt-4-turbo'),
  id: 'research-agent',
  instructions: 'You are a research assistant that can search the web and analyze data.',
  tools: {
    searchWeb: {
      description: 'Search the web for information',
      parameters: z.object({
        query: z.string()
      }),
      execute: async ({ query }) => {
        // Perform search
        return { results: ['...'] };
      }
    },
    analyzeData: {
      description: 'Analyze data and provide insights',
      parameters: z.object({
        data: z.array(z.string())
      }),
      execute: async ({ data }) => {
        // Perform analysis
        return { insights: '...' };
      }
    }
  },
  stopWhen: async ({ steps }) => steps.length >= 10 || steps.at(-1)?.finishReason === 'stop',
  maxOutputTokens: 4096
});

// Use the agent (non-streaming)
const result = await researchAgent.generate({
  prompt: 'Research the latest developments in quantum computing and summarize key breakthroughs.'
});

console.log(result.content);
console.log(result.steps.length, 'steps executed');

// Or stream responses
const stream = researchAgent.stream({
  prompt: 'What are the current applications of quantum computing in cryptography?'
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

useChat - React chat interface

React hook for building chat UIs with streaming responses and tool invocations. Manages message state and handles user interactions.

'use client';

import { useChat } from '@ai-sdk/react';

export default function ChatComponent() {
  const { messages, status, sendMessage, stop, addToolOutput } = useChat({
    api: '/api/chat',
    initialMessages: [
      { id: '1', role: 'system', content: 'You are a helpful assistant.' }
    ],
    onFinish: (message) => {
      console.log('Message complete:', message);
    },
    onError: (error) => {
      console.error('Chat error:', error);
    }
  });

  const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    const formData = new FormData(e.currentTarget);
    const input = formData.get('message') as string;

    sendMessage({ text: input });
    e.currentTarget.reset();
  };

  return (
    <div>
      <div className="messages">
        {messages.map(message => (
          <div key={message.id} className={`message message-${message.role}`}>
            {message.parts.map((part, i) => {
              switch (part.type) {
                case 'text':
                  return <p key={i}>{part.text}</p>;
                case 'tool-image_generation':
                  if (part.state === 'output-available') {
                    return <img key={i} src={`data:image/png;base64,${part.output.result}`} />;
                  }
                  return <p key={i}>Generating image...</p>;
                default:
                  return null;
              }
            })}
          </div>
        ))}
      </div>

      <form onSubmit={handleSubmit}>
        <input
          name="message"
          placeholder="Type a message..."
          disabled={status !== 'ready'}
        />
        <button type="submit" disabled={status !== 'ready'}>
          Send
        </button>
        {(status === 'submitted' || status === 'streaming') && (
          <button type="button" onClick={stop}>Stop</button>
        )}
      </form>
    </div>
  );
}

Chat API Route - Next.js App Router

Server-side chat endpoint that streams responses to the client. Uses agents with tool calling for complex interactions.

// app/api/chat/route.ts
import { ToolLoopAgent, createAgentUIStreamResponse } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const chatAgent = new ToolLoopAgent({
  model: openai('gpt-4-turbo'),
  instructions: 'You are a helpful assistant that can search the web and perform calculations.',
  tools: {
    search: {
      description: 'Search for information',
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => {
        // Implement search
        return { results: ['...'] };
      }
    },
    calculate: {
      description: 'Perform a calculation',
      parameters: z.object({ expression: z.string() }),
      execute: async ({ expression }) => {
        // Implement calculator (eval is unsafe for untrusted input;
        // use a proper expression parser in production)
        return { result: eval(expression) };
      }
    }
  }
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  return createAgentUIStreamResponse({
    agent: chatAgent,
    messages,
  });
}

Tool Approval Workflow - User-controlled tool execution

Implement approval flows for sensitive tool operations. User can approve or deny each tool call before execution.

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Delete the file named old-data.txt',
  tools: {
    deleteFile: {
      description: 'Delete a file from the filesystem',
      parameters: z.object({
        filename: z.string()
      }),
      requiresApproval: async ({ input }) => {
        // Require approval for deletions
        return true;
      },
      execute: async ({ filename }) => {
        // Delete file
        return { success: true };
      }
    }
  }
});

// Handle tool approval requests
for await (const chunk of result.fullStream) {
  if (chunk.type === 'tool-approval-request') {
    const userApproved = await askUser(
      `Approve deletion of ${chunk.toolCall.input.filename}?`
    );

    if (userApproved) {
      await result.addToolApprovalResponse({
        approvalId: chunk.approvalId,
        approved: true
      });
    } else {
      await result.addToolApprovalResponse({
        approvalId: chunk.approvalId,
        approved: false,
        reason: 'User denied permission'
      });
    }
  }
}

MCP Integration - Model Context Protocol

Connect to MCP servers to access external tools and services. The MCP client is available in the dedicated @ai-sdk/mcp package and supports stdio, HTTP, and SSE transports with OAuth authentication.

import { createMCPClient } from '@ai-sdk/mcp';
import { Experimental_StdioMCPTransport } from '@ai-sdk/mcp/mcp-stdio';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Create MCP client with stdio transport
const mcpClient = createMCPClient({
  transport: new Experimental_StdioMCPTransport({
    command: 'npx',
    args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
  }),
  capabilities: {
    tools: true,
    prompts: true,
  }
});

// Get tools from MCP server
const tools = await mcpClient.getTools();

// Use MCP tools with generateText
const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'List files in the current directory',
  tools: tools,
});

console.log(result.text);

// Clean up
await mcpClient.close();

embed - Generate text embeddings

Generate vector embeddings for text using various embedding models. Useful for semantic search and similarity matching.

import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await embed({
  // Optional: reduce dimensionality via the embedding model settings
  model: openai.embedding('text-embedding-3-small', { dimensions: 512 }),
  value: 'The quick brown fox jumps over the lazy dog.'
});

console.log(result.embedding); // Float array of embedding values
console.log(result.usage); // Token usage

embedMany - Batch embedding generation

Generate embeddings for multiple texts efficiently with automatic batching and retry handling.

import { embedMany, cosineSimilarity } from 'ai';
import { openai } from '@ai-sdk/openai';

const texts = [
  'Artificial intelligence is transforming technology.',
  'Machine learning models require large datasets.',
  'Natural language processing enables human-computer interaction.'
];

const result = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: texts,
  maxRetries: 2
});

console.log(result.embeddings); // Array of embedding arrays
console.log(result.usage); // Total token usage

// Calculate cosine similarity between embeddings
const similarity = cosineSimilarity(result.embeddings[0], result.embeddings[1]);
console.log('Similarity:', similarity);

generateImage - Generate images from text

Generate images using image generation models like DALL-E, Stable Diffusion, and others. Supports multiple providers and advanced configuration.

import { generateImage } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateImage({
  model: openai.image('dall-e-3'),
  prompt: 'A serene mountain landscape at sunset with a lake in the foreground',
  n: 1, // Number of images
  size: '1024x1024',
  aspectRatio: '16:9', // Alternative to size
  seed: 12345, // For reproducibility
  providerOptions: {
    openai: {
      style: 'vivid',
      quality: 'hd'
    }
  }
});

console.log(result.images); // Array of generated images
console.log(result.images[0].base64); // Base64-encoded image data
console.log(result.images[0].uint8Array); // Raw image bytes

// Save image to file
import fs from 'fs';
fs.writeFileSync('output.png', result.images[0].uint8Array);

generateSpeech - Convert text to speech

Generate speech audio from text using text-to-speech models. Supports multiple voices, languages, and audio formats.

import { generateSpeech } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateSpeech({
  model: openai.speech('tts-1'),
  text: 'Hello, welcome to the AI SDK. This is a demonstration of text-to-speech capabilities.',
  voice: 'alloy', // Voice selection
  outputFormat: 'mp3', // 'mp3' | 'wav' | 'opus' | 'aac' | 'flac'
  speed: 1.0, // Speech speed (0.25 to 4.0)
  language: 'en', // ISO 639-1 language code
  instructions: 'Speak in a friendly and enthusiastic tone'
});

console.log(result.audio); // Audio file object
console.log(result.audio.uint8Array); // Raw audio bytes
console.log(result.warnings); // Any warnings from generation

// Save audio to file
import fs from 'fs';
fs.writeFileSync('output.mp3', result.audio.uint8Array);

experimental_transcribe - Transcribe audio to text

Transcribe audio files to text using speech recognition models like Whisper. Supports various audio formats and returns detailed transcription data.

import { experimental_transcribe } from 'ai';
import { openai } from '@ai-sdk/openai';
import fs from 'fs';

// Transcribe from file
const audioData = fs.readFileSync('recording.mp3');

const result = await experimental_transcribe({
  model: openai.transcription('whisper-1'),
  audio: audioData,
  language: 'en', // Optional: specify source language
  prompt: 'This is a technical discussion about AI.' // Optional: context hint
});

console.log(result.text); // Full transcription
console.log(result.segments); // Timestamped segments
console.log(result.language); // Detected language
console.log(result.duration); // Audio duration in seconds

// Access timestamped segments
result.segments?.forEach(segment => {
  console.log(`[${segment.start}s - ${segment.end}s]: ${segment.text}`);
});

rerank - Rerank documents by relevance

Rerank a list of documents based on their relevance to a query using specialized reranking models. More accurate than simple embedding similarity for search and retrieval.

import { rerank } from 'ai';
import { cohere } from '@ai-sdk/cohere';

const documents = [
  { id: '1', text: 'Machine learning is a subset of artificial intelligence.' },
  { id: '2', text: 'Paris is the capital city of France.' },
  { id: '3', text: 'Neural networks are inspired by the human brain.' },
  { id: '4', text: 'The Eiffel Tower is located in Paris.' },
  { id: '5', text: 'Deep learning uses multiple layers of neural networks.' }
];

const result = await rerank({
  model: cohere.reranker('rerank-english-v3.0'),
  query: 'What is artificial intelligence and machine learning?',
  documents: documents.map(doc => doc.text),
  topN: 3 // Return top 3 most relevant documents
});

console.log(result.ranking); // Ranked results with relevance scores
result.ranking.forEach(ranked => {
  console.log(`Original Index ${ranked.originalIndex}: Score ${ranked.score}`);
  console.log(`Document: ${ranked.document}\n`);
});

// Access reranked documents directly
console.log(result.rerankedDocuments);

Provider Configuration - Multiple AI providers

Configure and use multiple AI providers in the same application. The SDK provides unified interfaces across all providers.

import { generateText } from 'ai';
import { openai, createOpenAI } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

// OpenAI (reads OPENAI_API_KEY from the environment)
const openaiResult = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Explain quantum computing.'
});

// Anthropic (reads ANTHROPIC_API_KEY from the environment)
const anthropicResult = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  prompt: 'Explain quantum computing.'
});

// Google Gemini (reads GOOGLE_GENERATIVE_AI_API_KEY from the environment)
const googleResult = await generateText({
  model: google('gemini-1.5-pro'),
  prompt: 'Explain quantum computing.'
});

// OpenAI-compatible providers (Groq, Together, etc.)
const groq = createOpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: 'https://api.groq.com/openai/v1'
});

const groqResult = await generateText({
  model: groq('llama-3.1-70b-versatile'),
  prompt: 'Explain quantum computing.'
});

// Or use Vercel AI Gateway for unified access
const gatewayResult = await generateText({
  model: 'openai/gpt-4-turbo', // Gateway handles routing
  prompt: 'Explain quantum computing.'
});

Multi-step Reasoning with Callbacks

Track and control multi-step generation processes with detailed callbacks for each step. Useful for debugging and monitoring agent behavior.

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Research the history of the internet and create a timeline.',
  tools: {
    search: {
      description: 'Search for information',
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => ({ results: ['...'] })
    }
  },
  stopWhen: ({ steps }) => steps.length >= 5,
  onStepFinish: async (stepResult) => {
    console.log(`\n--- Step ${stepResult.response.messages.length} ---`);
    console.log('Finish reason:', stepResult.finishReason);
    console.log('Tool calls:', stepResult.toolCalls?.length || 0);
    console.log('Tokens used:', stepResult.usage.totalTokens);

    if (stepResult.toolCalls) {
      stepResult.toolCalls.forEach(call => {
        console.log(`Tool: ${call.toolName}`, call.input);
      });
    }
  },
  onFinish: async ({ steps, totalUsage, text }) => {
    console.log('\n=== Generation Complete ===');
    console.log('Total steps:', steps.length);
    console.log('Total tokens:', totalUsage.totalTokens);
    console.log('Final output length:', text.length);
  }
});

console.log('\nFinal result:', result.text);

Structured Output with Output Helpers

Generate structured outputs with helper functions for arrays, choices, and unstructured JSON. Simplifies schema definition and provides better type safety.

import { generateText, Output, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Generate an array of objects
const arrayResult = await generateText({
  model: openai('gpt-4o-mini'),
  output: Output.array({
    element: z.object({
      name: z.string(),
      email: z.string().email(),
      role: z.enum(['admin', 'user', 'guest'])
    })
  }),
  stopWhen: stepCountIs(5),
  prompt: 'Generate 5 sample user profiles.'
});

console.log(arrayResult.output); // Array of user objects

// Generate an enum/choice
const choiceResult = await generateText({
  model: openai('gpt-4o-mini'),
  output: Output.choice({
    options: ['positive', 'negative', 'neutral']
  }),
  prompt: 'Analyze the sentiment of: "This product is amazing!"'
});

console.log(choiceResult.output); // 'positive' | 'negative' | 'neutral'

// Generate unstructured JSON (no schema required)
const jsonResult = await generateText({
  model: openai('gpt-4o-mini'),
  output: Output.json(),
  system: 'Return JSON only, no other text.',
  prompt: 'Generate a flexible JSON object with user data and metadata.'
});

console.log(jsonResult.output); // Any JSON value

// Or use Output.object() with generateText for type-safe objects
const objectResult = await generateText({
  model: openai('gpt-4o-mini'),
  output: Output.object({
    schema: z.object({
      users: z.array(z.object({
        name: z.string(),
        email: z.string().email(),
        age: z.number().min(0).max(120)
      }))
    })
  }),
  prompt: 'Generate 3 user profiles.'
});

console.log(objectResult.output.users);

The Vercel AI SDK provides comprehensive tools for building production-ready AI applications with type safety, streaming support, and multi-provider compatibility. The core generateText and streamText functions handle text generation with automatic tool calling and multi-step reasoning, enabling complex agentic workflows. For structured data extraction, generateObject and streamObject parse LLM outputs into type-safe objects using Zod schemas with validation, while generateText can also produce structured outputs using the output parameter with helpers like Output.object(), Output.array(), Output.choice(), and Output.json(). The ToolLoopAgent class encapsulates reusable AI behaviors with tools and instructions, making it easy to create specialized assistants that can execute multi-step workflows autonomously and be integrated into chat interfaces via createAgentUIStreamResponse.

Framework integration is seamless through UI hooks like useChat for React from @ai-sdk/react, with similar hooks in @ai-sdk/vue and @ai-sdk/svelte that manage parts-based message state, handle streaming, and expose loading status. The SDK supports 40+ AI providers through a unified interface, including OpenAI, Anthropic, Google, Azure, AWS Bedrock, xAI, Deepgram, AssemblyAI, ElevenLabs, and open-source models via OpenAI-compatible endpoints. Advanced features include Model Context Protocol (MCP) integration via @ai-sdk/mcp for connecting to external tool servers over stdio, HTTP, and SSE transports with OAuth support; tool approval workflows with requiresApproval for gating sensitive operations; embedding generation for semantic search with embed and embedMany plus cosineSimilarity for similarity calculations; document reranking with the rerank function for improved search relevance; and multimedia support spanning generateImage for image models, generateSpeech for text-to-speech audio, and transcribe for audio-to-text conversion with timestamped segments. The SDK also offers reasoning output tracking for advanced models, stopWhen conditions for controlling multi-step execution, onStepFinish callbacks for monitoring agent progress, custom retry logic with exponential backoff, telemetry integration with OpenTelemetry, and comprehensive error handling with typed exceptions. Every function returns detailed usage statistics, including token counts, finish reasons, and provider metadata, for observability and cost tracking.
