@decagondev
Created February 24, 2026 18:02
Integrating Langfuse into a Legacy Stripes Framework Java Application

Introduction

This guide provides a comprehensive, step-by-step walkthrough for integrating Langfuse—an open-source observability platform for LLM applications—into a legacy Java application built on the Stripes Framework. Langfuse helps with tracing AI interactions, managing prompts, and collecting evaluations (e.g., user feedback or AI scores).

The integration combines:

  • Langfuse Java SDK: Primarily for prompt management and scoring.
  • OpenTelemetry (OTel): For automated tracing of AI calls and observations.

This setup allows you to monitor AI-powered features (e.g., chatbots or knowledge retrieval) without major refactoring. Traces in Langfuse capture "observations" like function executions, LLM calls, and metadata, enabling debugging, performance analysis, and iterative improvements.

Why Use This Integration?

  • Decouple prompts from code for easy updates via the Langfuse UI.
  • Automatically trace AI interactions using OTel annotations.
  • Collect human or AI evaluations to measure response quality.
  • Minimal boilerplate: Leverage OTel's agent for global hooks.

Assumptions:

  • You're familiar with Java, Maven (for pom.xml), and Stripes ActionBeans.
  • Your app uses an LLM service (e.g., via libraries like LangChain4j or direct API calls).
  • You have Langfuse credentials (public key pk-lf-... and secret key sk-lf-...) from your Langfuse account.

If you're new to Langfuse, sign up at cloud.langfuse.com and create a project to get your keys.

Prerequisites

  • Java 8+ (compatible with Stripes and the SDKs).
  • Maven for dependency management.
  • An application server (e.g., Tomcat, Jetty) running your Stripes app.
  • Access to an LLM provider (e.g., OpenAI, Anthropic) for AI calls.
  • Optional: OpenTelemetry Java Agent for auto-instrumentation (detailed in a later section).

Step 1: Project Setup

Add the necessary dependencies to your pom.xml file. The Langfuse SDK handles prompts and scores, while OpenTelemetry provides the tracing backbone.

<dependencies>
    <!-- Langfuse Java SDK for Prompts & Scores -->
    <dependency>
        <groupId>com.langfuse</groupId>
        <artifactId>langfuse-java</artifactId>
        <version>0.1.0</version>  <!-- Check for the latest version on Maven Central -->
    </dependency>
    <!-- OpenTelemetry for Tracing AI calls -->
    <dependency>
        <groupId>io.opentelemetry</groupId>
        <artifactId>opentelemetry-api</artifactId>
        <version>1.31.0</version>  <!-- Use the latest stable version -->
    </dependency>
</dependencies>

Tips:

  • Run mvn clean install to fetch dependencies.
  • If your app uses other OTel modules (e.g., for exporters), ensure compatibility.
  • Warning: The Langfuse Java SDK is focused on management features; tracing relies on OTel integration with Langfuse as the exporter.

Step 2: Understanding the "Observation" Pattern

Langfuse organizes data into Traces (top-level sessions, e.g., a user chat) and Observations (child events like LLM calls or function executions). In a Stripes app, integrate this into your ActionBeans or a dedicated AI service class.

  • Traces: Automatically created or manually via OTel.
  • Observations: Hooked via annotations or manual spans.

This pattern "hooks" into your code to capture AI flows without disrupting legacy logic.
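The trace/observation hierarchy above maps directly onto OpenTelemetry spans: the root span becomes the Langfuse trace, and each child span becomes an observation nested under it. A minimal sketch using the OTel Tracer API (span names, the tracer name, and the stub return value are illustrative; this assumes the OTel SDK or agent from Step 5 is configured to export to Langfuse):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class ChatTraceSketch {
    private static final Tracer tracer = GlobalOpenTelemetry.getTracer("stripes-app");

    // One user turn = one Langfuse trace; the LLM call = one nested observation
    public static String runTurn(String userMessage) {
        Span trace = tracer.spanBuilder("chat-session").startSpan();  // Trace (root span)
        try (Scope ignored = trace.makeCurrent()) {
            Span llmCall = tracer.spanBuilder("llm-call").startSpan();  // Observation (child span)
            try (Scope inner = llmCall.makeCurrent()) {
                llmCall.setAttribute("input", userMessage);
                return "stub LLM answer";  // Replace with your real LLM call
            } finally {
                llmCall.end();
            }
        } finally {
            trace.end();
        }
    }
}
```

In practice you rarely need this manual nesting in a Stripes app: the `@WithSpan` annotation (Step 4) and the Java agent (Step 5) create the same structure automatically.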

Step 3: Initializing the Langfuse Client and Basic Integration

Initialize the Langfuse client once (e.g., in a base ActionBean or singleton). Use it to fetch prompts and execute traced AI calls.

Sample Code: AI-Powered ActionBean

Create or extend a Stripes ActionBean (e.g., ChatActionBean) to handle AI interactions.

import com.langfuse.client.LangfuseClient;
import com.langfuse.client.model.PromptResponse;
import net.sourceforge.stripes.action.DefaultHandler;
import net.sourceforge.stripes.action.ForwardResolution;
import net.sourceforge.stripes.action.Resolution;

public class ChatActionBean extends BaseActionBean {
    // 1. Initialize the Langfuse client once (e.g., in a static initializer).
    //    In production, read the keys from environment variables rather than hard-coding them.
    private static final LangfuseClient langfuse = LangfuseClient.builder()
        .credentials("pk-lf-YOUR_PUBLIC_KEY", "sk-lf-YOUR_SECRET_KEY")  // Replace with your keys
        .url("https://cloud.langfuse.com")  // Or your self-hosted URL, if applicable
        .build();

    private final AiService aiService = new AiService();  // AiService is your existing LLM wrapper (name is illustrative)

    @DefaultHandler
    public Resolution handleChat() {
        // 2. Fetch the versioned prompt from Langfuse (decouples the prompt from code)
        PromptResponse promptResponse = langfuse.prompts().get("support-bot");  // "support-bot" is the prompt name in Langfuse
        String promptTemplate = promptResponse.getPrompt();  // Use getCompiledPrompt() if variables are needed

        // 3. Execute the AI call (traced via OTel once the agent in Step 5 is configured)
        String aiResult = aiService.callLLM(promptTemplate, getContext());

        // 4. Render the result (e.g., pass to a JSP)
        getContext().getRequest().setAttribute("aiResult", aiResult);
        return new ForwardResolution("/chat.jsp");
    }
}

Explanations:

  • Prompt Fetching: Store prompts in Langfuse UI for version control. Update them without redeploying code.
  • AI Call: Replace aiService.callLLM with your LLM logic (e.g., using OkHttp for API calls). Tracing happens automatically if OTel is configured.
  • Error Handling: Wrap in try-catch; log errors to Langfuse via spans.
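The error-handling point above can be sketched as a small wrapper that records any failure on the active OTel span, so the exception surfaces in the Langfuse trace. This is a sketch under the assumption that OTel is active; the class and method names are illustrative:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import java.util.function.Supplier;

public final class TracedCalls {
    // Runs an AI call and records any failure on the current OTel span
    public static String callSafely(Supplier<String> llmCall) {
        Span span = Span.current();
        try {
            return llmCall.get();
        } catch (RuntimeException e) {
            span.recordException(e);                  // Stack trace shows up on the observation
            span.setStatus(StatusCode.ERROR, e.getMessage());
            return "Sorry, something went wrong.";    // Graceful fallback for the UI
        }
    }
}
```

Usage from the ActionBean might look like `String aiResult = TracedCalls.callSafely(() -> aiService.callLLM(promptTemplate, getContext()));`.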

Step 4: Using Annotations for Function Hooks

Leverage OpenTelemetry annotations to create observations without manual tracing code. This is ideal for hooking into utility methods or services.

  • @WithSpan: Automatically starts a span (observation) when the method is called.
  • Span API: Add custom attributes (metadata) visible in Langfuse.

Example: Hooking into a Knowledge Retrieval Service

import io.opentelemetry.instrumentation.annotations.WithSpan;
import io.opentelemetry.api.trace.Span;

public class KnowledgeService {

    @WithSpan("knowledge_retrieval")  // Span name appears in Langfuse as an observation
    public String findContext(String query) {
        // Add custom metadata to the current span
        Span.current().setAttribute("query_length", query.length());
        Span.current().setAttribute("user_id", getCurrentUserId());  // Example: attach user context

        // Your retrieval logic (e.g., database query or vector search)
        return "Relevant context based on query: " + query;
    }

    private String getCurrentUserId() {
        return "anonymous";  // Placeholder: resolve the real user from your session or context
    }
}

How It Works:

  • When findContext is called (e.g., from your ActionBean), OTel creates a span linked to the parent trace.
  • View traces in Langfuse dashboard: Observations show timings, inputs, outputs, and attributes.
  • Pro Tip: Use this for any method involving AI (e.g., embedding generation, RAG retrieval).

Step 5: Configuring OpenTelemetry for Auto-Tracing

For @WithSpan to work, configure the OpenTelemetry Java Agent. This enables auto-instrumentation for libraries (e.g., OkHttp for LLM calls) and exports traces to Langfuse.

  1. Download the Agent: Get opentelemetry-javaagent.jar from the opentelemetry-java-instrumentation GitHub releases page.

  2. Run Your App with the Agent:

    • For Tomcat/Jetty: Add to JVM args: -javaagent:/path/to/opentelemetry-javaagent.jar
    • Configure the exporter to send traces to Langfuse's OTLP endpoint:
      -Dotel.exporter.otlp.endpoint=https://cloud.langfuse.com/api/public/otel
      -Dotel.exporter.otlp.headers=Authorization=Basic%20<base64(pk-lf-...:sk-lf-...)>
      
      (Base64-encode your keys as pk-lf-...:sk-lf-...; the %20 URL-encodes the space, as the OTLP headers property requires. Verify the exact endpoint against the current Langfuse docs.)
  3. Test: Run your app, trigger an AI call, and check Langfuse for traces.
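The Basic auth value for the headers flag is simply `pk-lf-...:sk-lf-...` Base64-encoded. A quick stdlib helper for computing it (key values below are placeholders):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class LangfuseAuth {
    // Builds the value for the Authorization header expected by the OTLP exporter
    public static String basicAuth(String publicKey, String secretKey) {
        String raw = publicKey + ":" + secretKey;
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(basicAuth("pk-lf-example", "sk-lf-example"));
    }
}
```

You can also compute the same value on the command line with `echo -n "pk-lf-...:sk-lf-..." | base64`.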

Advanced Config: For custom traces, use OTel's Tracer API manually in code.

Step 6: Implementing Evaluations and Scoring

Langfuse supports "annotations" (scores) for traces, enabling feedback loops. Collect from users (e.g., thumbs up/down) or AI evaluators.

Example: User Feedback Action

Add a Stripes action to submit scores.

import com.langfuse.client.LangfuseClient;
import com.langfuse.client.model.ScoreRequest;
import net.sourceforge.stripes.action.DefaultHandler;
import net.sourceforge.stripes.action.Resolution;
import net.sourceforge.stripes.ajax.JavaScriptResolution;

public class FeedbackActionBean extends BaseActionBean {

    // "langfuse" is the shared LangfuseClient instance, initialized as shown in Step 3
    @DefaultHandler
    public Resolution rateResponse() {
        // Assume getLastTraceId() retrieves the trace ID stored in the session or request context
        langfuse.scores().create(ScoreRequest.builder()
            .traceId(getLastTraceId())  // Link the score to the specific trace
            .name("user-helpfulness")   // Score category
            .value(1.0)                 // e.g., a 0.0-1.0 scale (or a custom range)
            .comment("User liked this answer")  // Optional details
            .build());

        return new JavaScriptResolution("Success");  // For AJAX feedback
    }
}

Integration Ideas:

  • UI: Add thumbs icons in your JSP that POST to this action.
  • AI Evals: Use another LLM to score responses programmatically.
  • View in Langfuse: Scores appear in traces for analytics (e.g., average helpfulness).

Summary of Key Hooking Mechanisms

Feature       | Implementation Method      | Purpose
------------- | -------------------------- | -------
Tracing Hooks | @WithSpan (OpenTelemetry)  | Auto-track function execution and metadata in Langfuse traces.
Prompt Hooks  | langfuse.prompts().get()   | Fetch and use versioned prompts from Langfuse for decoupling.
Eval Hooks    | langfuse.scores().create() | Log user feedback or AI judgments for quality monitoring.
Global Hooks  | OTel Java Agent            | Auto-trace third-party libraries (e.g., LangChain4j, HTTP clients).

Best Practices and Troubleshooting

  • Start Small: Integrate into one ActionBean first, then expand.
  • Security: Never expose keys in code; use environment variables.
  • Performance: Traces add minimal overhead; sample if needed via OTel config.
  • Debugging: If traces don't appear, check OTel logs and Langfuse ingest endpoint.
  • Limitations: Langfuse Java SDK is evolving; for full tracing, rely on OTel.
  • Resources: Langfuse Docs, OTel Java Docs.
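For the sampling note above, the standard OTel agent properties can be added to the same JVM arguments as the agent itself; a sketch (the 0.1 ratio is an example value, not a recommendation):

```shell
# Sample ~10% of traces to reduce overhead; traceidratio is a standard OTel sampler
JAVA_OPTS="$JAVA_OPTS \
  -Dotel.traces.sampler=traceidratio \
  -Dotel.traces.sampler.arg=0.1"
```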