[libav-api] Remuxing audio/video data from a UDP stream.
wl2776 at gmail.com
Tue Nov 8 10:39:07 CET 2011
OK, I've got something working, but sound and video stay in sync only for the first few seconds; after that the sound lags badly.
Could anyone please read my explanation and point out my mistakes?
I have an RTSP video stream; its SDP is the following:
ffplay reports the following for that stream:
My player plays the streams in sync.
Then I start recording.
When setting up the AVFormatContext, AVStream and AVCodecContext, I set the time_base of the video stream to 1/25, and during recording I increase the PTS/DTS by 1 in each packet that goes to av_interleaved_write_frame().
I also set the time_base of the audio stream to the same as the input, 1/8000, and increase its pts/dts by packet.duration.
av_dump_format() for my output AVFormatContext prints the following:
So, the sequence of pts/dts values in the outgoing video packets is 0, 1, 2, 3, 4, 5, etc.
The same for the outgoing audio packets is 0, 1024, 2048, 3072, etc.
When av_read_frame() reads audio packets, it always returns packets of 1024 bytes.
I calculate the audio packet duration in seconds as duration * time_base.
This gives 1024 / 8000 = 0.128 s.
That is enough time for 0.128 * 25 = 3.2 video frames, given a video frame rate of 25 fps.
Therefore I read 4 video frames, write them to the output with av_interleaved_write_frame(), then read the next audio packet of 1024 bytes, recalculate the audio duration and the number of video frames, and write it with av_interleaved_write_frame() as well.
The sound in the resulting file is in sync only for the first ~20 seconds; after that it lags.
While recording, I observe video frames slowly accumulating in the input queue, while the audio input queue is almost always empty.
Thanks for reading this far.
So, where am I going wrong?
Sent from the libav-api mailing list archive at Nabble.com.