Achieving synchronized audio and video playback is a complex but crucial task in multimedia programming. With a framework like PipeWire, which is designed to handle all types of media streams, synchronization is a core concept. The key to this is understanding the role of Presentation Timestamps (PTS) and a shared clock.
Here’s a breakdown of the concepts and a step-by-step guide on how to approach A/V sync when creating a C++ player with PipeWire.
Imagine you have two separate players: one for video frames and one for audio samples. To keep them in sync, you can't just play them as fast as possible. Instead, you need a shared "wall clock" that both players can look at.
- The Clock: PipeWire provides a global clock for the entire media graph. This clock is typically driven by an audio device (like your sound card) because audio playback is very sensitive to timing errors. If audio samples aren't delivered at a precise, steady rate, you get pops, clicks, and distorted sound (an "underrun" or "overrun"). Video is more forgiving; dropping or displaying a frame a few milliseconds late is often unnoticeable.
- Presentation Timestamps (PTS): Every single audio buffer and video frame that you decode from a media file (like an MP4) has a timestamp attached to it. This PTS value says, "According to the timeline of the media file, this piece of data should be presented (heard or seen) at exactly this moment."
The synchronization logic is then straightforward:
- The application gives PipeWire an audio buffer with a PTS.
- The application gives PipeWire a video frame with a PTS.
- PipeWire's internal clock advances.
- When PipeWire's clock time matches the PTS of a buffer or frame, it releases that data to the hardware (the sound card or the display server/GPU).
Let's expand on the previous audio-only example. A full A/V player would require a demuxing and decoding library (like FFmpeg), but we can outline the logic for handling the PipeWire side.
You would need to create two separate PipeWire streams:
- One `pw_stream` for audio playback.
- One `pw_stream` for video playback.
Here are the essential steps:
Before touching PipeWire, you need to read the media file. A library like FFmpeg is standard for this.
- Open the Media File: Use FFmpeg to open the video file. This will give you access to its various streams (audio, video, subtitles).
- Find Streams and Codecs: Identify the audio and video streams and initialize the appropriate decoders.
- Get Time Base: Crucially, get the `time_base` for each stream. This is a rational number (like 1/90000) that tells you the unit of the PTS values in the stream. You will need it to convert the stream's PTS into nanoseconds, which is what PipeWire's clock uses.
You will create two streams, much like the audio example, but with different properties.
Audio Stream Creation:
```cpp
// (Inside your main function)
pw_stream *audio_stream = pw_stream_new_simple(
    loop,
    "my-player-audio",
    pw_properties_new(
        PW_KEY_MEDIA_TYPE, "Audio",
        PW_KEY_MEDIA_CATEGORY, "Playback",
        // ... other properties
        nullptr),
    &audio_stream_events, // A struct with your audio callbacks
    &app_data);
```

Video Stream Creation:
The key difference is the `PW_KEY_MEDIA_TYPE` property.
```cpp
pw_stream *video_stream = pw_stream_new_simple(
    loop,
    "my-player-video",
    pw_properties_new(
        PW_KEY_MEDIA_TYPE, "Video", // This is the important part
        PW_KEY_MEDIA_CATEGORY, "Playback",
        // ... other properties
        nullptr),
    &video_stream_events, // A separate struct for video callbacks
    &app_data);
```

When connecting each stream, you must provide the format decoded from the media file.
- For Audio: This would be `SPA_AUDIO_FORMAT_S16`, `SPA_AUDIO_FORMAT_F32P` (planar float), etc., along with the sample rate and channel count.
- For Video: This would be the pixel format, like `SPA_VIDEO_FORMAT_RGB` or `SPA_VIDEO_FORMAT_YV12`, along with the video's width and height.
This is where synchronization happens. You'll have two `on_process` functions: one for audio and one for video.
- Read and Decode a Packet: In your main application loop (outside the callbacks), continuously read packets from the media file using FFmpeg. A packet can be either audio or video.

- Store Decoded Data: When you decode a packet, you get raw audio samples or a raw video frame, each with its PTS. Store these in thread-safe queues.

- Inside `on_audio_process`:
  - Dequeue a buffer from the audio stream: `pw_stream_dequeue_buffer(audio_stream)`.
  - Pop decoded audio data from your audio queue.
  - Set the PTS on the PipeWire buffer. This is the most critical step: convert the frame's PTS from its `time_base` to nanoseconds.

  ```cpp
  struct pw_buffer *pw_buf = pw_stream_dequeue_buffer(audio_stream);
  struct spa_buffer *spa_buf = pw_buf->buffer;

  // FFmpegFrame *frame = your_audio_queue.pop();
  // int64_t pts_ns = av_rescale_q(frame->pts, ffmpeg_stream->time_base, {1, 1000000000});

  spa_buf->datas[0].chunk->offset = 0;
  spa_buf->datas[0].chunk->size = /* size of audio data */;
  // Copy your audio samples into spa_buf->datas[0].data

  // Associate the timestamp with this buffer
  pw_buf->time = pts_ns;

  pw_stream_queue_buffer(audio_stream, pw_buf);
  ```
- Inside `on_video_process`:
  - Do the exact same thing for video: dequeue a video buffer, get the decoded video frame from your video queue, convert its PTS to nanoseconds, set `pw_buf->time`, copy the pixel data, and queue the buffer.
Once you are feeding both streams with correctly timestamped buffers, PipeWire handles the rest.
- PipeWire's scheduler looks at the PTS of the buffers you've queued.
- It monitors its internal master clock (driven by the audio sink).
- It will only release a video buffer to be rendered when the clock time is greater than or equal to that buffer's `time` (PTS).
- If video rendering is falling behind, PipeWire's rate-matching mechanism might cause it to automatically drop video frames to catch up to the audio clock. If audio is behind, you might hear a glitch as it tries to catch up.
- Setup: Use a library like FFmpeg to open a file and prepare decoders.
- PipeWire Init: Create two streams, one `Audio` and one `Video`.
- Main Loop:
  - Continuously read A/V packets from the file.
  - Decode them into raw frames/samples, keeping their PTS.
  - Push the decoded data into separate, thread-safe audio and video queues.
- PipeWire Audio Callback (`on_process`):
  - When PipeWire needs audio, pull from your audio queue.
  - Copy the audio samples into the dequeued `pw_buffer`.
  - Set the `pw_buffer->time` field to the frame's PTS (converted to nanoseconds).
  - Queue the buffer.
- PipeWire Video Callback (`on_process`):
  - When PipeWire needs a video frame, pull from your video queue.
  - Copy the pixel data into the dequeued `pw_buffer`.
  - Set the `pw_buffer->time` field to the frame's PTS (converted to nanoseconds).
  - Queue the buffer.
By providing accurate timestamps, you are no longer just pushing data blindly; you are telling PipeWire when each piece of data should be presented, allowing its internal clock and scheduling mechanisms to ensure perfect A/V sync.
Of course. Here is a complete, compilable, and fully-implemented C++ example that demonstrates the principles discussed.
This program creates a proper multithreaded application:
Synchronization between the Decoder Thread and the Main Thread is handled correctly using a `std::mutex` and a `std::condition_variable` to prevent race conditions and wasted CPU cycles.

(`full_player_example.cpp`)

How to Compile and Run
Save the code: Save the code above into a file named `full_player_example.cpp`.

Install Dependencies: Make sure you have the PipeWire development libraries installed.

```sh
sudo apt-get install libpipewire-0.3-dev   # Debian/Ubuntu
sudo dnf install pipewire-devel            # Fedora
```

Compile: Open a terminal and run the following command. The `pkg-config` tool finds the correct compiler flags, and `-pthread` is necessary for `std::thread`.

```sh
g++ full_player_example.cpp -o full_player $(pkg-config --cflags --libs libpipewire-0.3) -pthread
```

Run: Execute the compiled program.

```sh
./full_player
```
You will see output messages from the different threads, and you should hear a continuous 440 Hz (A4) tone from your speakers. Press Ctrl+C to stop the program, and you will see the cleanup messages as it shuts down gracefully.