[libav-api] How to output color (.ppm) images using api-example.c

Michael B. Darling mdarling at calpoly.edu
Wed Sep 18 00:42:51 CEST 2013


Thanks for the quick reply Luca,

> The code doesn't seem attached

I've copied and pasted the relevant code (in its original form) below from api-example.c for your convenience.

> you might read about the img2 muxer and use AV_CODEC_ID_PPM as encoder.

Are you implying that after decoding the .mpg video with AV_CODEC_ID_MPEG1VIDEO, it needs to be re-encoded using AV_CODEC_ID_PPM?  My end goal is to adapt this api-example code to decode MJPEG video (captured from a USB webcam) in real time and write the frames into cv::Mat objects that OpenCV can understand. In other words, if I know how avcodec_decode_video2() arranges the RGB data in the AVFrame->data buffers, I should be able to create the cv::Mat object myself (a rough sketch of what I mean is below).
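
For what it's worth, here is a rough sketch of what I had in mind (untested, and the names frame_to_bgr24, bgr_buf and bgr_linesize are mine). I'm assuming the decoder hands back a planar format, so data[0..2] are separate planes with their own linesizes, and that libswscale can repack that into interleaved 24-bit BGR; on older libav versions the AV_PIX_FMT_* constants may still be spelled PIX_FMT_*:

#include <libswscale/swscale.h>

/* Repack one decoded AVFrame (whatever pix_fmt the decoder chose) into a
   caller-supplied packed BGR24 buffer. Returns 0 on success, -1 on error. */
static int frame_to_bgr24(AVCodecContext *c, AVFrame *picture,
                          uint8_t *bgr_buf, int bgr_linesize)
{
    struct SwsContext *sws = sws_getContext(c->width, c->height, c->pix_fmt,
                                            c->width, c->height, AV_PIX_FMT_BGR24,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    uint8_t *dst_data[4]     = { bgr_buf, NULL, NULL, NULL };
    int      dst_linesize[4] = { bgr_linesize, 0, 0, 0 };

    if (!sws)
        return -1;

    /* convert/interleave the source planes into one row-major BGR buffer */
    sws_scale(sws, (const uint8_t * const *)picture->data, picture->linesize,
              0, c->height, dst_data, dst_linesize);
    sws_freeContext(sws);
    return 0;
}

The caller would allocate bgr_buf itself (c->height * bgr_linesize bytes, with bgr_linesize >= c->width * 3), and on the C++ side I expect something like cv::Mat(c->height, c->width, CV_8UC3, bgr_buf, bgr_linesize) could then wrap it without copying, if I've understood the cv::Mat constructor correctly.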

>> I believe this is because the call to len = avcodec_decode_video2(c, 
>> picture, &got_picture, &avpkt) is only storing the gray image data.
> Hardly possible, check all the pointers in the AVFrame data =)

Do you mean that AVFrame->data[i] points to the i-th channel? (It appears that data is an array of pointers of size AV_NUM_DATA_POINTERS = 8, each pointing to a different channel of image data.)

i.e. something like:

     AVFrame->data[0] = RED
     AVFrame->data[1] = GREEN
     AVFrame->data[2] = BLUE

I tried replacing the call to pgm_save(picture->data[i], picture->linesize[i], c->width, c->height, buf) in the api-example with i=1,2,3 instead of i=0, but I end up with a garbled last frame rather than a gray representation of the R, G, or B channel. If AVFrame->data somehow contains RGB data, how do I access it?
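
In case it helps, this is the small check I was planning to drop into the decode loop right after got_picture becomes nonzero, to print what the decoder actually hands back (untested; I'm assuming c->pix_fmt is filled in by the decoder and that av_get_pix_fmt_name() from libavutil/pixdesc.h is the right call):

#include <libavutil/pixdesc.h>

/* Print the pixel format and plane strides of the last decoded frame.
   For a planar YUV format, data[0]/data[1]/data[2] would hold the Y/U/V
   planes rather than separate R/G/B channels. */
static void dump_frame_layout(AVCodecContext *c, AVFrame *picture)
{
    printf("decoder pix_fmt: %s  linesizes: %d %d %d\n",
           av_get_pix_fmt_name(c->pix_fmt),
           picture->linesize[0], picture->linesize[1], picture->linesize[2]);
}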

Sorry for the confusion; my thesis has taken me on quite a detour outside my wheelhouse. For future reference, is there a better place for me to find help besides the handful of examples, the libav mailing list, and the source code itself? (Is there any documentation where I can read about things like the "img2 muxer", AV_CODEC_IDs, etc.?)


Thanks a ton! =)
-Mike

#include <stdio.h>
#include <string.h>

#include <libavcodec/avcodec.h>

#define INBUF_SIZE 4096   /* as at the top of api-example.c */

/* write one gray plane as a binary (P5) PGM file */
static void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize,
                     char *filename)
{
    FILE *f;
    int i;

    f = fopen(filename, "wb");
    fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);
    for (i = 0; i < ysize; i++)
        fwrite(buf + i * wrap, 1, xsize, f);
    fclose(f);
}
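
For reference, this is the kind of P6 writer I was aiming for with ppm_save() (a sketch I rewrote for this mail, not the exact code that produced the tiled output, and not part of api-example.c; it assumes buf already holds packed, interleaved 8-bit RGB with wrap bytes per row):

/* NOT part of api-example.c: write a packed RGB24 buffer as a binary (P6) PPM */
static void ppm_save(unsigned char *buf, int wrap, int xsize, int ysize,
                     char *filename)
{
    FILE *f;
    int i;

    f = fopen(filename, "wb");
    fprintf(f, "P6\n%d %d\n%d\n", xsize, ysize, 255);
    for (i = 0; i < ysize; i++)
        fwrite(buf + i * wrap, 1, xsize * 3, f); /* 3 bytes per pixel */
    fclose(f);
}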


static void video_decode_example(const char *outfilename, const char *filename)
{
    AVCodec *codec;
    AVCodecContext *c= NULL;
    int frame, got_picture, len;
    FILE *f;
    AVFrame *picture;
    uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
    char buf[1024];
    AVPacket avpkt;

    av_init_packet(&avpkt);

    /* set end of buffer to 0 (this ensures that no overreading happens for damaged mpeg streams) */
    memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);

    printf("Video decoding\n");

    /* find the mpeg1 video decoder */
    codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO);
    if (!codec) {
        fprintf(stderr, "codec not found\n");
        exit(1);
    }

    c = avcodec_alloc_context3(codec);
    picture = avcodec_alloc_frame();

    if (codec->capabilities & CODEC_CAP_TRUNCATED)
        c->flags |= CODEC_FLAG_TRUNCATED; /* we do not send complete frames */

    /* For some codecs, such as msmpeg4 and mpeg4, width and height
       MUST be initialized here because this information is not
       available in the bitstream. */

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "could not open codec\n");
        exit(1);
    }

    /* for mpeg1video, the decoder fills in c->width and c->height from the bitstream */

    f = fopen(filename, "rb");
    if (!f) {
        fprintf(stderr, "could not open %s\n", filename);
        exit(1);
    }

    frame = 0;
    for(;;) {
        avpkt.size = fread(inbuf, 1, INBUF_SIZE, f);
        if (avpkt.size == 0)
            break;

        /* NOTE1: some codecs are stream based (mpegvideo, mpegaudio)
           and this is the only method to use them because you cannot
           know the compressed data size before analysing it.

           BUT some other codecs (msmpeg4, mpeg4) are inherently frame
           based, so you must call them with all the data for one
           frame exactly. You must also initialize 'width' and
           'height' before opening the decoder. */

        /* NOTE2: some codecs allow the raw parameters (frame size,
           sample rate) to be changed at any frame. We handle this, so
           you should also take care of it */

        /* here, we use a stream based decoder (mpeg1video), so we
           feed the decoder and see if it can decode a frame */
        avpkt.data = inbuf;
        while (avpkt.size > 0) {
            len = avcodec_decode_video2(c, picture, &got_picture, &avpkt);
            if (len < 0) {
                fprintf(stderr, "Error while decoding frame %d\n", frame);
                exit(1);
            }
            if (got_picture) {
                printf("saving frame %3d\n", frame);
                fflush(stdout);

                /* the picture is allocated by the decoder. no need to
                   free it */
                snprintf(buf, sizeof(buf), outfilename, frame);
                pgm_save(picture->data[0], picture->linesize[0],
                         c->width, c->height, buf);
                frame++;
            }
            avpkt.size -= len;
            avpkt.data += len;
        }
    }

    /* some codecs, such as MPEG, transmit the I and P frame with a
       latency of one frame. You must do the following to have a
       chance to get the last frame of the video */
    avpkt.data = NULL;
    avpkt.size = 0;
    len = avcodec_decode_video2(c, picture, &got_picture, &avpkt);
    if (got_picture) {
        printf("saving last frame %3d\n", frame);
        fflush(stdout);

        /* the picture is allocated by the decoder. no need to
           free it */
        snprintf(buf, sizeof(buf), outfilename, frame);
        pgm_save(picture->data[0], picture->linesize[0],
                 c->width, c->height, buf);
        frame++;
    }

    fclose(f);

    avcodec_close(c);
    av_free(c);
    avcodec_free_frame(&picture);
    printf("\n");
}

int main(int argc, char **argv)
{
    const char *filename;

    /* register all the codecs */
    avcodec_register_all();

    if (argc <= 1) {
        //audio_encode_example("/tmp/test.mp2");
        //audio_decode_example("/tmp/test.sw", "/tmp/test.mp2");

        video_encode_example("/tmp/test.mpg");
        filename = "/tmp/test.mpg";
    } else {
        filename = argv[1];
    }

    //    audio_decode_example("/tmp/test.sw", filename);
    video_decode_example("/tmp/test%d.pgm", filename);

    return 0;
}
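
Also, regarding the AV_CODEC_ID_PPM suggestion: is something like the sketch below roughly what you had in mind? (Untested; I'm assuming the PPM encoder wants AV_PIX_FMT_RGB24 input and that avcodec_encode_video2() is the right call in my libav version, so please correct me if I've misread it.)

/* Untested sketch: encode one RGB24 AVFrame with the PPM encoder and dump
   the resulting packet straight into a .ppm file. */
static int save_frame_as_ppm(AVFrame *rgb_frame, int width, int height,
                             const char *filename)
{
    AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_PPM);
    AVCodecContext *ec;
    AVPacket pkt;
    int got_packet = 0, ret;
    FILE *f;

    if (!enc)
        return -1;

    ec            = avcodec_alloc_context3(enc);
    ec->width     = width;
    ec->height    = height;
    ec->pix_fmt   = AV_PIX_FMT_RGB24;   /* assuming this is what the PPM encoder expects */
    ec->time_base = (AVRational){1, 25};

    if (avcodec_open2(ec, enc, NULL) < 0)
        return -1;

    av_init_packet(&pkt);
    pkt.data = NULL; /* let the encoder allocate the packet buffer */
    pkt.size = 0;

    ret = avcodec_encode_video2(ec, &pkt, rgb_frame, &got_packet);
    if (ret >= 0 && got_packet) {
        f = fopen(filename, "wb");
        fwrite(pkt.data, 1, pkt.size, f);
        fclose(f);
        av_free_packet(&pkt);
    }

    avcodec_close(ec);
    av_free(ec);
    return ret;
}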



----- Original Message -----
From: "Luca Barbato" <lu_zero at gentoo.org>
To: libav-api at libav.org
Sent: Tuesday, September 17, 2013 3:21:17 AM
Subject: Re: [libav-api] How to output color (.ppm) images using api-example.c

On 17/09/13 04:27, Michael B. Darling wrote:

> I have read about the format of .pgm and .ppm files from 
> (http://paulbourke.net/dataformats/ppm/), but have not been able to 
> generate .ppm images.  I have tried creating a new function 
> ppm_save() (modified code attached) and replacing all calls to 
> pgm_save() with my new function, but I essentially get three 
> subsequent frames tiled in the same image file.

The code doesn't seem attached, you might read about the img2 muxer and
use AV_CODEC_ID_PPM as encoder.

> I believe this is because the call to len = avcodec_decode_video2(c, 
> picture, &got_picture, &avpkt) is only storing the gray image data.

Hardly possible, check all the pointers in the AVFrame data =)

lu

