[libav-api] Remuxing audio/video data from a UDP stream.

wl2776 wl2776 at gmail.com
Mon Nov 7 12:46:14 CET 2011

I'm developing a player, which is able to play some live streams and also is
able to record them to a local disk.

When incoming stream contains only video, recording is fine.
However, when a second stream, carrying audio, is added, I sometimes run into trouble.

The problem is that when I try to write a packet with
av_interleaved_write_frame(), it refuses the packet and complains about
non-monotonically increasing timestamps.

My player doesn't use the libav demuxers; it uses demuxers I wrote myself.
When recording starts, the player sets up one or two extra queues (depending
on whether audio is present) into which the parsed data goes.
It then sets up one or two custom I/O contexts and uses av_read_frame() to
read the raw parsed video or audio data from those queues.
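For reference, the read callback such a custom I/O context is built around has the shape avio_alloc_context() expects. Below is a minimal sketch, with a trivial in-memory buffer standing in for the real parsed-data queue (the real one would presumably be thread-safe and block until data arrives); ByteQueue and queue_read are hypothetical names, not part of libav.

```c
#include <stdint.h>
#include <string.h>

/* Trivial stand-in for the player's parsed-data queue. */
typedef struct {
    const uint8_t *data;
    size_t size, pos;
} ByteQueue;

/* Read callback with the signature avio_alloc_context() expects:
 * copy up to `buf_size` bytes into `buf`, return the byte count,
 * or an end-of-stream/error code once the queue is drained
 * (in libav that would be AVERROR_EOF). */
static int queue_read(void *opaque, uint8_t *buf, int buf_size)
{
    ByteQueue *q = opaque;
    size_t left = q->size - q->pos;
    size_t n = left < (size_t)buf_size ? left : (size_t)buf_size;
    if (n == 0)
        return -1;  /* stand-in for AVERROR_EOF */
    memcpy(buf, q->data + q->pos, n);
    q->pos += n;
    return (int)n;
}
```

With libav proper, this callback (and the queue as the opaque pointer) goes into avio_alloc_context(), and the resulting AVIOContext is attached to the AVFormatContext before opening the input.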

Therefore, two AVPackets coming from different queues have completely
different timestamps and time bases.

Currently I set the time_base of the output streams to 1/frame_rate and
monotonically increase PTS/DTS by 1 (there are no B-frames).

How should I interleave reading from different queues? 
How should I calculate timestamps in case of two streams (queues)?
