Direct Streaming

Great for: Development workflow - Live, real-time debugging and development

Simple to set up, provides immediate visual feedback. Limited to a single viewer, constrained by Wi-Fi bandwidth.

On the Laptop (Viewer)

First, spawn the Rerun viewer:

# Spawn the Rerun viewer
rerun

# Or specify a port
rerun --port 9876

On the Robot (Data Source)

import rerun as rr

# Initialize the recording
rr.init("robot_data", recording_id="my-robot-session")

# Connect to the laptop's IP address
# Replace with your laptop's actual IP (find with ifconfig/ipconfig)
rr.connect_grpc("rerun+http://192.168.1.100:9876/proxy")

# Now log data as normal
rr.set_time("frame", sequence=0)
rr.log("robot/position", rr.Points3D([[0, 0, 0]]))
rr.log("robot/camera", rr.Image(camera_image))

# Data streams directly to the laptop viewer in real-time

Connection String Format: rerun+http://<LAPTOP_IP>:9876/proxy
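
If the laptop's address changes between sessions, it can help to assemble the connection string from variables. A minimal sketch; the IP below is a placeholder you would replace with your laptop's actual address:

import rerun as rr

LAPTOP_IP = "192.168.1.100"   # placeholder: your laptop's actual IP
VIEWER_PORT = 9876

rr.init("robot_data", recording_id="my-robot-session")
rr.connect_grpc(f"rerun+http://{LAPTOP_IP}:{VIEWER_PORT}/proxy")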

Local File Logging

Great for: Bad connections - Capturing data during competitions when live network connections are forbidden

Works without any network connection, ensures a complete record. No real-time visibility; requires manual file transfer.

On the Robot

import rerun as rr

# Initialize the recording
rr.init("robot_data", recording_id="competition-run-1")

# Save to a local file (e.g., on a USB drive or robot's storage)
rr.save("/mnt/usb/robot_data.rrd")

# Log data as normal - it will be saved to the file
rr.set_time("frame", sequence=0)
rr.log("robot/position", rr.Points3D([[0, 0, 0]]))
rr.log("robot/camera", rr.Image(camera_image))

# Important: Flush data before shutdown
rr.flush()
rr.disconnect()

Then copy the .rrd file to your laptop, e.g. via USB drive or over the network.

On the Laptop (Later, After Retrieving the File)

# Open the recorded file in the Rerun viewer
rerun /path/to/robot_data.rrd
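
If you want to analyze the recording programmatically rather than just view it, recent Rerun SDK versions also expose a dataframe API. A minimal sketch, assuming such a version is installed; the timeline and entity path match the logging code above:

import rerun as rr

# Load the .rrd file for offline analysis
recording = rr.dataframe.load_recording("/path/to/robot_data.rrd")

# Build a view over the "frame" timeline and read the logged position data
view = recording.view(index="frame", contents="robot/position")
table = view.select().read_all()  # pyarrow Table
print(table)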

(Centralized) Relay Server

Great for: Teams - Allowing multiple people to debug simultaneously

Enables multi-user viewing, offloads network burden from the robot. Adds a layer of complexity to infrastructure.

On the Server (Relay)

rerun serve_grpc --grpc-port 9876 --server-memory-limit 2GB

To also persist the incoming stream to an .rrd file, add --save.

You can additionally serve a web viewer with --serve-web. This hosts a web viewer over HTTP alongside the gRPC server, unless one or more URIs are provided that the web viewer can open directly.

On the Robot

import rerun as rr

rr.init("team_viewer")

# Connect to the relay server
rr.connect_grpc("rerun+http://192.168.1.50:9876/proxy")

rr.flush(timeout_sec=1)
rr.disconnect()

On Each Laptop (Viewer)

From command line:

# Connect to the relay server
rerun rerun+http://192.168.1.50:9876/proxy

(Centralized) Compute & Logging Server

Great for: Advanced use cases - Offloading heavy processing from the robot

Drastically reduces the robot's computational load and minimizes required bandwidth. The most complex architecture; it can introduce latency into the robot's decision-making loop.

This setup is similar to running ROS on both the robot and your laptop: the robot broadcasts messages over the network, and the server receives and responds to them.

On the Server (Compute + Logging)

You'll run two separate processes on the server:

Process 1: Rerun Server

rerun --serve-grpc --port 9876 --server-memory-limit 4GB

Process 2: Python Compute Server

import rerun as rr
import numpy as np
from flask import Flask, request, jsonify
import threading

# Initialize Rerun
rr.init("robot_compute_server")

# Connect to the local Rerun server
rr.connect_grpc("rerun+http://localhost:9876/proxy")


def receive_sensor_data():
    ...

def run_object_detection(image):
    ...
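
For reference, the two stubs above might be filled in along these lines. This is a minimal sketch, not the original code: the /sensor_data endpoint name, the JSON payload layout, and the placeholder detector are assumptions, and it reuses the imports and the rr.connect_grpc call from the block above.

app = Flask(__name__)

def run_object_detection(image):
    # Placeholder: swap in your real model inference here.
    # Assumed return format: a list of {"label": ..., "bbox": [x, y, w, h]} dicts.
    return [{"label": "example", "bbox": [0, 0, 10, 10]}]

@app.route("/sensor_data", methods=["POST"])
def receive_sensor_data():
    payload = request.get_json()
    frame = int(payload["timestamp"])
    image = np.asarray(payload["camera_image"], dtype=np.uint8)
    lidar = np.asarray(payload["lidar_points"], dtype=np.float32)

    # Heavy processing happens on the server, not on the robot
    detections = run_object_detection(image)

    # Log the raw data to the local Rerun server
    rr.set_time("frame", sequence=frame)
    rr.log("robot/camera", rr.Image(image))
    rr.log("robot/lidar", rr.Points3D(lidar))

    # Return detections so the robot can use them for control
    return jsonify({"detections": detections})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)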

On the Robot (Raw Data Sender)

import requests
import numpy as np
import json

# Server configuration
SERVER_IP = "192.168.1.50"
SERVER_PORT = 5000


# Optionally, the robot can also log to the Rerun server directly
def send_sensor_data(camera_image, lidar_points, timestamp):
    ...

# Main robot loop
frame_count = 0
while robot_is_running:
    # Get raw sensor data
    camera_image = get_camera_image()
    lidar_points = get_lidar_data()

    # Send to the server for processing; use a short timeout so this never blocks for long
    detections = send_sensor_data(camera_image, lidar_points, frame_count)

    # Continue with robot control
    # The robot doesn't need to wait for the visualization
    control_robot(detections)

    frame_count += 1
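
A sketch of what send_sensor_data might look like, assuming the hypothetical /sensor_data JSON endpoint from the server sketch above; it reuses the imports and SERVER_IP/SERVER_PORT constants from this block, and the short timeout is a deliberate choice so a slow or unreachable server cannot stall the control loop.

def send_sensor_data(camera_image, lidar_points, timestamp):
    # Package the raw sensors as JSON (assumed payload layout)
    payload = {
        "timestamp": timestamp,
        "camera_image": np.asarray(camera_image).tolist(),
        "lidar_points": np.asarray(lidar_points).tolist(),
    }
    try:
        resp = requests.post(
            f"http://{SERVER_IP}:{SERVER_PORT}/sensor_data",
            json=payload,
            timeout=0.5,  # keep the control loop responsive
        )
        resp.raise_for_status()
        return resp.json().get("detections", [])
    except requests.RequestException:
        # Server unreachable or slow: continue without detections
        return []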

On Each Laptop (Viewer)

# Connect to the compute server's Rerun stream
rerun rerun+http://192.168.1.50:9876/proxy

On-Robot Viewer

Great for: Quick and dirty - Getting things started without additional infrastructure

Self-contained on the robot, no separate infrastructure needed. Puts extra computational load on the robot; can easily saturate the robot's network card if multiple people connect.

On the Robot (All-in-One)

Run the gRPC server and the web viewer directly on the robot.

import rerun as rr

# Initialize
rr.init("robot_data")

# Start gRPC server
server_uri = rr.serve_grpc(
    grpc_port=9876,
    server_memory_limit="512MB"  # Be conservative on robot
)

# Start web viewer server
rr.serve_web_viewer(
    web_port=9090,
    open_browser=False,  # Don't try to open browser on robot
    connect_to=server_uri
)

print(f"Web viewer available at: http://<ROBOT_IP>:9090")
print(f"Native viewer can connect to: {server_uri}")

# Now log your data
rr.set_time("frame", sequence=0)
rr.log("robot/position", rr.Points3D([[0, 0, 0]]))

# Keep running
import time
while True:
    # Your robot code
    time.sleep(0.1)

On Each Laptop (Viewer)

Web Browser Access:

http://192.168.1.42:9090

Just open this URL in any modern web browser. Replace 192.168.1.42 with your robot's actual IP address.

Native Viewer Access (Better Performance):

# Connect native Rerun viewer to robot
rerun rerun+http://192.168.1.42:9876/proxy