Using custom I/O callbacks with ffmpeg


Video playback in Conspiracies 2 is based on ffmpeg, a very fast and easy-to-use library. I got better results than with DirectShow, with less effort, and my video playback code will compile everywhere!

Following this tutorial got me started immediately; after an afternoon of coding I had a decent video player running. That tutorial shows how to play videos from a file in the filesystem, but what I needed was slightly different: all the assets of the game (including videos) live in a “virtual file system” inside archives, so I have to provide my own I/O functions for reading them. ffmpeg supports this, but it is a little tricky to figure out, so I thought I’d write this guide for future reference. Thanks to the kind people on the libav-user mailing list who helped me sort this out!

Implementing the I/O routines.

There are 3 routines you need to implement:

int ReadFunc(void *opaque, uint8_t *buf, int buf_size);

This function must read up to “buf_size” bytes from your file handle (passed in the “opaque” parameter) and store the data in “buf”. The return value is the number of bytes actually read from your file handle. If the function fails, you must return a value less than zero.
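As a concrete sketch, here is what such a read callback could look like when the opaque handle is simply a C stdio FILE* — in the game’s case it would be an archive/VFS handle instead; the FILE* here is just for illustration:

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch of a read callback where the opaque handle is a plain FILE*.
 * A real player would use its own archive/VFS handle here instead. */
static int ReadFunc(void *opaque, uint8_t *buf, int buf_size)
{
    FILE *fp = (FILE *)opaque;
    size_t n = fread(buf, 1, (size_t)buf_size, fp);
    if (n == 0 && ferror(fp))
        return -1;      /* the read failed */
    return (int)n;      /* number of bytes actually read */
}
```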

int WriteFunc(void *opaque, uint8_t *buf, int buf_size);

This one is optional and behaves like ReadFunc, but for writing.

int64_t SeekFunc(void *opaque, int64_t offset, int whence);

This function works like the fseek() C stdio function. “whence” can be one of the standard C values (SEEK_SET, SEEK_CUR, SEEK_END) or one extra value: AVSEEK_SIZE.

When “whence” has this value, your seek function must not seek; instead it must return the size of your file handle in bytes. Supporting AVSEEK_SIZE is optional, and if you don’t implement it you must return a value less than zero.

Otherwise you must perform the seek and return the resulting position of your stream in bytes. If the seek failed, return a value less than zero.
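A matching sketch of the seek callback over a plain FILE*, including the AVSEEK_SIZE case; AVSEEK_SIZE is redefined below only so the snippet stands alone without ffmpeg’s headers:

```c
#include <stdio.h>
#include <stdint.h>

#define AVSEEK_SIZE 0x10000  /* redefined here so the sketch is self-contained */

/* Sketch of a seek callback where the opaque handle is a plain FILE*. */
static int64_t SeekFunc(void *opaque, int64_t offset, int whence)
{
    FILE *fp = (FILE *)opaque;

    if (whence == AVSEEK_SIZE) {
        /* do not seek: report the total size of the stream in bytes */
        long cur = ftell(fp);
        if (fseek(fp, 0, SEEK_END) != 0)
            return -1;
        long size = ftell(fp);
        fseek(fp, cur, SEEK_SET);   /* restore the read position */
        return (int64_t)size;
    }

    if (fseek(fp, (long)offset, whence) != 0)
        return -1;                  /* the seek failed */
    return (int64_t)ftell(fp);      /* position after seeking */
}
```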

Opening your custom stream.

So when you need to open a custom stream for reading you must do the following:

  • Allocate a buffer for I/O operations with your custom stream. The buffer’s size must be (n + FF_INPUT_BUFFER_PADDING_SIZE), where n is the actual useful buffer size.
  • Allocate a new ByteIOContext object and initialize it by calling init_put_byte():
    int init_put_byte(ByteIOContext *s,
                      unsigned char *buffer,
                      int buffer_size,
                      int write_flag,
                      void *opaque,
                      int (*read_packet)(void *opaque, uint8_t *buf, int buf_size),
                      int (*write_packet)(void *opaque, uint8_t *buf, int buf_size),
                      int64_t (*seek)(void *opaque, int64_t offset, int whence));

    s is a pointer to your ByteIOContext object.
    buffer is a pointer to your allocated buffer.
    buffer_size is “n” (the useful portion of your allocated buffer).
    write_flag must be zero if your stream does not support writing.
    opaque is a pointer to your custom file stream; it is passed on to your custom routines.
    read_packet, write_packet and seek are pointers to your custom routines. write_packet is optional (it can be NULL).

  • Use av_open_input_stream() instead of av_open_input_file() to open your stream:
    int av_open_input_stream(AVFormatContext **ic_ptr,
                             ByteIOContext *pb,
                             const char *filename,
                             AVInputFormat *fmt,
                             AVFormatParameters *ap);

    ic_ptr, filename, fmt and ap are the same parameters you use with av_open_input_file().
    pb is your initialized ByteIOContext object.

Closing your custom stream.

  • When you are done with the stream, call av_close_input_stream() instead of av_close_input_file().
  • Deallocate your ByteIOContext object.
  • Deallocate your buffer.

That’s it! Everything else is the same as with av_open_input_file().
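To connect this back to the archive/virtual-file-system use case from the introduction: the opaque handle can be any structure you like. Below is a self-contained sketch with a made-up in-memory MemStream handle (e.g. a file pulled out of a game archive) and the read/seek callbacks over it. These are illustrative names, not ffmpeg API; this pair of callbacks plus a pointer to the MemStream is what you would hand to init_put_byte():

```c
#include <stdio.h>   /* for SEEK_SET / SEEK_CUR / SEEK_END */
#include <stdint.h>
#include <string.h>

#define AVSEEK_SIZE 0x10000  /* redefined so the sketch compiles standalone */

/* Hypothetical in-memory "virtual file" handle. A pointer to this
 * struct is what gets passed around as the opaque parameter. */
typedef struct {
    const uint8_t *data;
    int64_t size;
    int64_t pos;
} MemStream;

static int MemRead(void *opaque, uint8_t *buf, int buf_size)
{
    MemStream *ms = (MemStream *)opaque;
    int64_t left = ms->size - ms->pos;
    int n = (int64_t)buf_size < left ? buf_size : (int)left;
    if (n <= 0)
        return 0;                       /* end of stream */
    memcpy(buf, ms->data + ms->pos, (size_t)n);
    ms->pos += n;
    return n;                           /* bytes actually read */
}

static int64_t MemSeek(void *opaque, int64_t offset, int whence)
{
    MemStream *ms = (MemStream *)opaque;
    switch (whence) {
    case SEEK_SET: ms->pos = offset; break;
    case SEEK_CUR: ms->pos += offset; break;
    case SEEK_END: ms->pos = ms->size + offset; break;
    case AVSEEK_SIZE: return ms->size;  /* report size, don't seek */
    default: return -1;
    }
    if (ms->pos < 0 || ms->pos > ms->size)
        return -1;                      /* seek out of range */
    return ms->pos;                     /* position after seeking */
}
```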

Note: If you found this article helpful, please check out this game of mine. It’s free and I could use the install counts. Thank you!

Edit: Actually there is more to it. av_open_input_file() automatically probes the input format; with av_open_input_stream() you must do this yourself and pass it a valid AVInputFormat pointer. Probing the input format is easy: you fill a buffer with some bytes from the beginning of your stream and pass it to av_probe_input_format(), which hopefully recognizes the input format and returns the needed pointer. Here is my probing code:

AVProbeData probe_data;
probe_data.filename = filename;
probe_data.buf_size = 4096; // av_open_input_file retries with progressively larger buffers, but this should be enough

// allocate memory for the first bytes of the stream
probe_data.buf = (unsigned char *)malloc(probe_data.buf_size);

// read the first 4096 bytes of the stream into probe_data.buf
// …

// probe
AVInputFormat *ret = av_probe_input_format(&probe_data, 1);

// cleanup
free(probe_data.buf);
probe_data.buf = NULL;
return ret;

52 thoughts on “Using custom I/O callbacks with ffmpeg”

  1. Damn, just when I was thinking of writing a Linux program that would let me pick videos from a distance with the MacBook remote control, like “Front Row” on Mac OS X.

    I’ll try using ffmpeg to show me screenshots from each video before it spawns mplayer 🙂

  2. Hehe 🙂 ffmpeg rules! It might be easier, though, to use the ffmpeg command line tool; I’m sure it has an option for that. I think mplayer also has a “save screenshot at frame xxx”.

  3. In the end I just used ffmpeg to write a very rudimentary video player that plays stereoscopic videos, like the ones on YouTube that have the left and right images side by side. Like this one: http://www.youtube.com/watch?v=BP7MVYwn4_w

    I take each frame and combine the two images into one that you view with colored 3D glasses 🙂

    ffmpeg really is impressively simple. Although I haven’t dealt with audio so far.

  4. How are those encoded? Are they two different video files or two streams in the same container?

    Did you do proper synchronization, with pts and dts? That part is a bit hard.

    I have the impression (without having looked into it) that both audio and encoding are just as simple with ffmpeg 🙂

    For audio, I think you simply take an audio “frame” containing the piece of sound that corresponds to the duration of one video frame and feed it to your audio callback.

  5. It is a single video stream with the two images side by side in every frame (see the YouTube link above). For more on how to combine the two images into an “anaglyph” (that’s the name of the image you view in 3D with colored glasses) see my recent blog post: http://codelab.wordpress.com/2009/10/30/opengl-stereoscopic-anaglyphs/

    I didn’t sync with audio, because I’m not playing audio yet. I do the frame playback very simply with nanosleep for the length of the frame interval (which I get from the video’s fps) minus the time spent reading/decoding the frame.

    Obviously this very simple scheme ends up stalling when you can’t read frames from the disk fast enough. More properly I should buffer the frames, but I got tired of working on it for now.

    Here is my code: svn://nuclear.dnsalias.com/pub/stereoplay

    As for audio, that’s also what I understood from reading: it gives you in each “audio frame” the sound that should play for each video frame, but it said something about how, depending on the audio stream you have, each audio packet may contain N audio “frames”.


  7. The custom I/O for ffmpeg post is very useful.

    Just a doubt: what kind of file handle are you talking about? Can this read from an H.264 movie file on the local hard disk? (I know that av_open_input_file() can read the file directly, but I need this custom reading to test something.)

    So do I have to open the movie file like a simple file in the custom read function and feed it into the buffer?

  8. Thanks for the comment 🙂

    I haven’t tested H.264 files yet, but I suspect they work the same way. In any case, if you have written code that works with av_open_input_file(), it should work with the custom I/O routines.

    The file handle I’m talking about is anything you want: a pointer to a buffer or a pointer to a custom structure. It depends on how you want to implement your I/O routines.

    For example, if you intend to load the whole video file into memory (ouch!), your handle could be a pointer to that buffer, and your read handler would just return bytes from that buffer.

  9. Thanks for your reply. I can successfully decode H.264 video using av_open_input_file(), but I badly want to get this custom I/O thing working.

    I wrote the following code:

    unsigned char buffer[2000];
    unsigned int bufferSize = 1;

    /* Initialize the input format to H264 */
    AVInputFormat *pAVInputFormat = av_find_input_format("h264");

    if (!pAVInputFormat) {
        printf("Couldn't initialize AVInputFormat");
        return -1;
    }

    pAVInputFormat->flags |= AVFMT_NOFILE;

    /* Define how the data will be read from the connection (or file) */
    ByteIOContext *ByteIOCtx;

    if (!(ByteIOCtx = av_alloc_put_byte(pDataBuffer, lSize, 0, pDataBuffer, read_data, NULL, NULL))) {
        printf("Couldn't initialize ByteIOContext");
        return -1;
    }

    /* Open stream for reading */
    if (av_open_input_stream(&MyAVFormatContext, ByteIOCtx, "decoder", pAVInputFormat, NULL) < 0) {
        printf("Couldn't open input stream");
        return -1;
    }

    In the custom read_data function:

    int read_data(void *nal_queue, uint8_t *buf, int size)
    {
        /* here I copy the H.264 network packets into "buf"
           (code is lengthy, hence not shown) */
        /* (I actually copy the NAL packets from the network,
           which contain the video data) */
        return size; /* returns the copied data size */
    }

    Problem: ffmpeg calls my custom read function many times, but my main loop hangs inside av_read_frame(). Ideally I would expect that as soon as it gets enough data to assemble a frame, it would return from av_read_frame() and decode the frame.

    What may be going wrong?

  10. The video stream might contain B-frames, which need future video packets to be available. You should buffer the data from the network; if the buffer is long enough, the problem should be solved. You could move the network code into a separate thread that reads data as quickly as possible and stores it in the buffer, and have the read function just return the stored bytes.

    Hope this helps

  11. That is exactly what I am planning to do: a different thread reading and queuing data. I thought I would simulate it first by just copying a few NAL packets into this buffer. But you may be right that we need to feed in enough data to be sure it gets at least the first key frame.

    I hope the rest of the code needed for just the custom I/O feature looks fine to you?

    Hopefully it starts decoding once I implement the thread for reading and buffering.

  12. The code looks OK. You might want to substitute your opaque handle, which is now a pointer to your buffer (and probably not used inside your read routine because it’s global anyway), with a nice struct, e.g.:

    struct MyOpaque
    {
    char *buffer;
    int current_pos;
    NetworkPacket packet_buffer[1024];
    // etc …
    };

    This way your read handler will have access to all the data it needs through the opaque handle and you won’t have to store the data as global variables.

  13. This is a great thread as I am also trying to do buffered input in my streaming application. I can successfully stream the data in for initialization. I pre-fill the buffer to allow av_probe_input_format to determine the stream format. This function does not use the read_packet callback which is why I need to pre-fill. The subsequent calls to av_open_input_stream and av_find_stream_info do invoke the read_packet callback to get data as needed.

    Problem is, in my main processing thread I call av_read_frame assuming it would invoke the read_packet callback function as well. Seems that this is not the case and I am unable to read in any content from the stream. Is av_read_frame supposed to invoke the callback? If not, what should I be doing to get this working?

    • If initialization went well, av_read_frame must call the read routine. I don’t know if it is guaranteed to call it every time, though; for some frames it might already have the data in its buffer, so it might not call the read routine again. Of course this is speculation; you can check the source code for details.

      Now, I don’t think that the “buffer” argument to init_put_byte is a “buffer” as in “buffering a network stream”. I suppose it’s a read buffer only, i.e. you give ffmpeg some space to store the bytes it reads, and that’s all. It is probably not a buffer you should mess with (since ffmpeg reads data into it by itself), so I would suggest another approach for buffered streams: give ffmpeg some space to play with through your ByteIOContext, and implement buffering in a different thread. Have a separate buffer (10x the buffer you gave to ffmpeg, or more, depending on your needs) and fill it with data as quickly as possible in the other thread. Then, whenever your read routine is called, just copy data from that buffer into the ByteIOContext’s buffer instead of actually performing a read.

      Hope this helps
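      The buffering scheme suggested above (one thread fills a large buffer from the network, the read routine only drains it) can be sketched with a small ring buffer. This is a toy, single-threaded version with made-up names; a real implementation would protect ring_push/ring_pop with a mutex and have the network thread call ring_push:

```c
#include <stdint.h>

#define RING_CAP 4096

/* Toy ring buffer: the network thread would push into it, and the
 * ffmpeg read callback drains it. Locking is omitted for brevity. */
typedef struct {
    uint8_t data[RING_CAP];
    int head, tail, count;
} Ring;

static int ring_push(Ring *r, const uint8_t *src, int n)
{
    int i;
    for (i = 0; i < n && r->count < RING_CAP; i++) {
        r->data[r->tail] = src[i];
        r->tail = (r->tail + 1) % RING_CAP;
        r->count++;
    }
    return i;   /* bytes actually stored */
}

static int ring_pop(Ring *r, uint8_t *dst, int n)
{
    int i;
    for (i = 0; i < n && r->count > 0; i++) {
        dst[i] = r->data[r->head];
        r->head = (r->head + 1) % RING_CAP;
        r->count--;
    }
    return i;   /* bytes actually returned */
}

/* The read callback just copies buffered bytes into ffmpeg's buffer. */
static int BufferedRead(void *opaque, uint8_t *buf, int buf_size)
{
    return ring_pop((Ring *)opaque, buf, buf_size);
}
```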

  14. First off, thanks for the quick reply and the initial thread.

    Yes, the buffer for the ByteIOContext is separate from the buffer for the stream input. I am using a std::stringbuf for the stream input doing a sputn for the arriving data. The read_packet callback does an sgetn to read from the input stream into the buffer allocated for the ByteIOContext.

    With some debug I added I see that the first few calls to av_read_frame do return some data. However, the last call to the read_packet callback (which did an sgetn) was in response to av_find_stream_info. I assume it consumed the data from the ByteIOContext buffer.

    Here’s the sequence I am seeing (this is a trace representation using pseudo-code) . . .

    pMyBuffer = new char [9400]
    offset = 0
    pMyStringBuf->sputn(pInputStreamChunk, 315)
    //Copy to pMyBuffer to enable av_probe_input_format since it does not use the callback.
    memcpy (pMyBuffer+offset, 315)
    offset+=315
    pMyStringBuf->sputn(pInputStreamChunk, 5152)
    memcpy (pMyBuffer+offset, 5152)
    offset+=5152
    pMyStringBuf->sputn(pInputStreamChunk, 2576)
    memcpy (pMyBuffer+offset, 2576)
    offset+=315
    pMyStringBuf->sputn(pInputStreamChunk, 5152)
    //Fill the remaining available in pMyBuffer
    memcpy (pMyBuffer+offset, 9400-offset)
    //pMyBuffer is full
    init_put_byte(&MyByteIOContext,pMyBuffer,9400,0,this,read_packet,NULL,NULL)
    MyAVProbeData.buf = pMyBuffer
    MyAVProbeData.buf_size = 9400
    av_probe_input_format(&MyAVProbeData,1)
    //No callback to read_packet, av_probe_input_format seems to use the pMyBuffer contents directly
    av_open_input_stream
    //this does invoke the read_packet callback
    sgetn(pMyBuffer,9400) //puts 9400 bytes into pMYBuffer
    av_find_stream_info
    //this does invoke the read_packet callback
    av_find_stream_info
    sgetn(pMyBuffer,9400) //puts only the available 3795 bytes into pMYBuffer
    sgetn(pMyBuffer,9400) // nothing available

    dump_format // shows the correct h264 / aac information from the flv stream

    //input and output are opened fine
    //new pthread started to read frames, decode, encode, write frames

    //more media chunks arrive
    pMyStringBuf->sputn(pInputStreamChunk, 16384)
    pMyStringBuf->sputn(pInputStreamChunk, 16384)
    pMyStringBuf->sputn(pInputStreamChunk, 16384)

    //We start reading frames
    av_read_frame //AVPacket.size is 9, return code 0
    //there was no read_packet callback, where did it get the data from?
    av_read_frame //AVPacket.size is 10197, return code 0
    //there was no read_packet callback, where did it get the data from?
    av_read_frame //AVPacket.size is 417, return code 0
    //there was no read_packet callback, where did it get the data from?
    av_read_frame //AVPacket.size is 629, return code 0
    //there was no read_packet callback, where did it get the data from?
    av_read_frame //AVPacket.size is 690, return code 0
    //there was no read_packet callback, where did it get the data from?
    av_read_frame //AVPacket.size is 197, return code 0
    //there was no read_packet callback, where did it get the data from?
    av_read_frame //AVPacket.size is 0, return code -32
    //the -32 return code tells the thread there was a read error and terminates.

  15. Yup, should be +=2576 for that case. Sorry, I am translating the debug output and combining pseudo-code. Been looking at this for a while and my head is spinning!

    Essentially, the ‘Open’ functionality is putting the arriving data into the input stringbuf using sputn. In parallel, it is copying the arriving data into the ByteIOContext buffer. Once the ByteIOContext buffer is full, the buffer is passed to av_probe_input_format to determine the input format. Next, av_open_input_stream creates the AVFormatContext and uses the ByteIOContext to invoke the read_packet callback, populating the ByteIOContext buffer using sgetn. In this case it is using the same initial 9400 bytes already provided to av_probe_input_format.

    I am beginning to wonder if the issue is somehow related to the thread I created to do the work (read, decode, encode, write). I am passing the context to it correctly and most things seem to work fine, only the read_packet callback is not invoked when av_read_frame is used.

  16. Strange indeed. You could try to get rid of the threads at first. For example fill your stringbuff with the contents of the entire flv and then run your decoding/encoding code in the main thread. When that works you can move the code to a different thread.

    Of course it goes without saying that you lock a mutex or something before sputn/sgetn?

  17. I removed the second thread by having the PutContent method (writes chunk of stream into streambuf) also call the TranscodeFrame method. So now, we write the chunk to the stringbuf, call av_read_frame to get a packet, decode, encode and write the frame. It still indicates that the av_read_frame does not trigger the read_packet callback. However, it does indicate av_read_frame is filling the packet with some data for a short time. I am still unclear where it is getting this data from.

  18. Yes, when it had data to populate into the packet it is 0, else it is -32.

    I think I may have found something in my initialization code. I am now finally seeing the callback being invoked as expected.

    Now, when the read_packet callback is invoked, does ffmpeg use the same buffer I allocated at initialization, or does it allocate a new one? The reason I ask is that I am seeing some requests which exceed the allocated buffer size of 9400. Do I need to add limit checking and only invoke sgetn up to that limit?

    • I suspect that read_packet() can get any valid pointer as its “buffer” argument, not only the one you allocated during initialization.

      Also, from my experience, you don’t need to check whether the amount of data ffmpeg requests is too much. Since it requests that many bytes, it must have its reasons 🙂 I only say this because my implementation doesn’t do such a check and everything works perfectly. So I might be wrong.

      Still, these I/O handlers seem to be meant as generic, like fread/fwrite for example. So it is easy to imagine that internally ffmpeg can allocate its own buffers and read into them using your read routine.

    • Hi BobK,
      I am also facing the same problem you mentioned: the custom read function gets invoked many times when av_open_input_stream is called, and afterwards, when av_read_frame is called, it does not invoke the function for that many calls (it had already read the data earlier). It invokes it properly after that.
      Did you find changes in your initialization or somewhere else?
      Thank You.

      • Been a while, I’ll do my best to recall my solution. I delay initialization until there’s a minimum number of bytes in the buffer. When there’s enough in the buffer, I copy (I do not remove from the buffer) to a separate buffer. This buffer is then passed to init_put_byte and then call av_probe_input_format. After this completes I make the call to av_open_input_stream.

        Hope this helps.
        Bob

  19. Yeah, I tend to agree. I started printing the pointers to where the read_packet callback puts the data and they seem to be distinct buffers.

    With some other changes I think I am finally understanding the main issue. What I found is that if the input to sputn ‘falls behind’ the read_packet callbacks to sgetn, sgetn returns 0 and I never see any further calls to the read_packet callback. Initially, I was only waiting for a single buffer before continuing with the av_open_input_stream initialization; I saw that only a few callbacks were made, the last one returned 0, and I never saw any more in response to the subsequent av_read_frame calls. I then increased the initial amount of data to wait for before proceeding with the av_open_input_stream initialization, and now I see more read_packet callbacks, including those associated with av_read_frame. Problem is, the callbacks stop at some point well before the stream completes. Certainly I could increase the initial amount I buffer, but that seems like more of a hack. There must be a better way to pace things through, or to keep a read_packet callback that returned 0 from stopping any further read_packet retries when more data arrives.

  20. Thank you for writing a very good tutorial!

    I am facing one problem:
    I have used av_open_input_stream (after probing for the format) with an initialized ByteIOContext (with the read function).

    The call to av_find_stream_info makes the custom read function get called a certain number of times (N), around 200. After that, I repeatedly call av_read_frame as packets arrive, and it crashes on exactly the (N+1)th (201st) call (though it does not call my custom I/O read function).
    av_read_frame crashing after exactly the number of calls made by av_find_stream_info should mean there is a relation, no?

    • Hi, I don’t really know. This must be internal. I’d take a look at ffmpeg’s code near the two functions you mention.

      Also, does your read function do anything that could mess up ffmpeg’s state? I would check that first before going to the code.

      • Thank you very much for the reply.

        The custom read function memcpy’s the most recently received packet into the buffer given in the arguments (by ffmpeg).
        I have some more info on the issue: av_read_frame does not call the custom read function for the first N times; it starts calling it after that (maybe ffmpeg internally buffers that much data). When av_read_frame calls the custom I/O function for the first time, it asks for MORE data. As we know, we get TS data in packets, so we can only give it the most recently received packet’s worth (size) of data.
        In an earlier post it was discussed that ffmpeg may need more data (and may request it from the custom read function) to create a frame. How do you map that onto an incoming flow of TS packets?

  21. Hey, a very good tutorial!! But can we use this with .mp4 files? I am struggling to make it work for mp4 files; I am able to use it for .ts files. I am seeing this error: [ffmpeg:mov,mp4,m4a,3gp,3g2,mj2] moov atom not found

    • Thanks for the comment. Yes, you can definitely use it with mp4 files. I just added an mp4 video to the game engine and it works fine, which is logical, since the method described here has nothing to do with the format/container/etc. of the video; it is just a way for ffmpeg to get bytes from somewhere other than the filesystem.

      I suggest you first try to play the mp4 file using av_open_input_file() and, when that succeeds, convert it to the stream version.

  22. Hi! I would appreciate your advice on the following. I’m developing a video converter based on FFmpeg, and I need to implement an accurate seeking API. First, I developed an indexer of the video stream that just saves the presentation timestamp (PTS) of every packet, and my encoder then uses this index to seek in the video file. Before these operations I remux the file into an mp4 container, for example; remuxing is needed for videos which have no correct index inside, or no index at all. I need to implement seeking by bytes, of course with the previously built index. I have tried many ways to implement this, but without any success. Maybe you know how to implement accurate seeking by bytes in FFmpeg? Best regards.

  23. Thanks a lot for your post, it works perfectly.
    Just a word on probe_data.buf; the documentation says: “Buffer must have AVPROBE_PADDING_SIZE of extra allocated bytes filled with zero.”

  24. Hello!
    Your post is one of the few that even comes close to describing how to use an internal buffer with ffmpeg, so hopefully you still have some of this expertise fresh in your mind.

    I’m attempting to decode video that I have in an internal buffer (actually a circular buffer guarded by a mutex, as other things are also using this data). My internal buffer is a one-to-one copy of what I’m receiving over UDP from VLC streaming an mpeg2 file.

    I have created a function: ReadData which I pass to avio_alloc_context, and it gets called properly when I do that. Then, I continue along just like how I did on my test app(which actually opened “UDP://127.0.0.1:4567” and worked successfully). However, when I get to av_read_frame() it becomes apparent that my AVFormatContext is not getting fully populated by avformat_open_input(). I’m not sure which piece specifically I’m missing, but when I call av_read_frame() the program exits.

    My code(keep in mind that this is a work in progress and a lot of this will get cleaned up when I get it working):

    pCodecCtx = avcodec_alloc_context();
    pAVFormatCtx = avformat_alloc_context();
    AVProbeData probeData;
    probeData.filename = "";
    int bufSize = 1024*8;
    unsigned char *buffer = (unsigned char*)av_malloc(bufSize*sizeof(unsigned char));
    bool probed = false;
    InternalReadMethod(buffer, bufSize);
    probeData.buf = buffer;
    probeData.buf_size = bufSize;
    pAVInputFormat = av_probe_input_format(&probeData, 1);
    pAVInputFormat->flags |= AVFMT_NOFILE;
    pAVInputFormat->read_header = NULL;
    pAVFormatCtx->iformat = pAVInputFormat;

    pAVIOCtx = avio_alloc_context(buffer, bufSize, NULL, this, ReadData, NULL, NULL);

    pAVIOCtx->is_streamed = 1;
    pAVFormatCtx->pb = pAVIOCtx;

    inputOpen = avformat_open_input(&pAVFormatCtx, “”, pAVInputFormat, NULL);

    if(inputOpen == -1)
    {
    printf("couldn't open input buffer");
    return -1;
    }

    int videoStream = -1;
    for(int i = 0; i < pAVFormatCtx->nb_streams; i++)
    {
    if(pAVFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
    {
    videoStream = i;
    break;
    }
    }
    if(videoStream > -1)
    {
    //Never happens. pAVFormatCtx->nb_streams is always zero
    pCodecCtx = pAVFormatCtx->streams[videoStream]->codec;
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    }
    else
    {
    //Assume MPEG2 for now
    pCodec = avcodec_find_decoder(CODEC_ID_MPEG2VIDEO);
    }

    AVPacket packet;

    if(avcodec_open(pCodecCtx, pCodec) >= 0)
    {

    }

    And then my read function:

    int ImageInterface::ReadData(void *opaque, uint8_t *buf, int bufSize)
    {
    if(((ImageInterface*)opaque)->InternalReadMethod(buf, bufSize) >= 0)
    {
    return bufSize;
    }
    else
    {
    return -1;
    }
    }

    int ImageInterface::InternalReadMethod(uint8_t *buf, int bufSize)
    {
    return(myCircBuff->peekData(buf, bufSize));//returns 1 on success, -1 on fail
    }

    • Hello, thanks for the comment. Unfortunately nothing is that “fresh” anymore 🙂 That project has been finished for a long time now, so I can’t remember exactly how I got it to work.

      Since a lot of people are interested in this post, I’m going to upload the module (.cpp/.h) that did this, and hopefully you can figure it out, provided that ffmpeg hasn’t changed that much since the version I used.

  25. Hi, thank you for sharing your code, but unfortunately the link you provided is no longer valid. Could you please upload it somewhere else?

  26. My completely working code (Linux Debian Squeeze, video: MPEG-PS):

    #include <stdio.h>

    #include <stdlib.h>
    #include <string.h>
    #include <gtk/gtk.h>

    #include “libavcodec/avcodec.h”
    #include “libavcodec/opt.h”
    #include “libavformat/avformat.h”
    #include “libavutil/mathematics.h”
    #include “libswscale/swscale.h”

    #ifdef HAVE_AV_CONFIG_H
    #undef HAVE_AV_CONFIG_H
    #endif

    #define BUFFER_SIZE FF_MIN_BUFFER_SIZE*2

    int ReadFunc(void *f, uint8_t *buffer, int bufSize)
    {
    if ((buffer==NULL) || (f==NULL))
    {
    printf("Buffer or file = 0x0");
    return -1;
    }
    int i = fread(buffer, 1, bufSize, f);
    if (i <= 0)
    return -1;
    return i;
    }

    /* ... (a chunk of the original comment was lost here: the beginning of
    main(), opening the file, probing the format and init_put_byte) ... */

    inputFormat->flags |= AVFMT_NOFILE;

    formatCtx = avformat_alloc_context();
    formatCtx->iformat = inputFormat;

    if (av_open_input_stream(&formatCtx, byteCtx, “”, inputFormat, NULL) < 0)
    {
    perror("input_stream fail:");
    exit(1);
    }

    /*if (av_find_stream_info(formatCtx) < 0 || formatCtx->nb_streams == 0)
    {
    printf("No video stream found! Exit");
    exit(2);
    }
    c = &formatCtx->streams[videoStream]->codec;
    formatCtx->streams[videoStream]->codec->codec_type == CODEC_TYPE_VIDEO;
    */

    codec = avcodec_find_decoder(CODEC_ID_MPEG2VIDEO);
    if (!codec)
    {
    fprintf(stderr, "codec not found\n");
    exit(1);
    }

    c = avcodec_alloc_context3(codec);

    if(codec->capabilities & CODEC_CAP_TRUNCATED) c->flags|=CODEC_FLAG_TRUNCATED;

    if (avcodec_open(c, codec) < 0)
    exit(1);

    // if (c->frame_rate > 1000 && c->frame_rate_base == 1)
    // c->frame_rate_base = 1000;

    picture=avcodec_alloc_frame();
    picture_RGB=avcodec_alloc_frame();
    buffer = malloc (avpicture_get_size(PIX_FMT_RGB24, 720, 576));
    avpicture_fill((AVPicture *)picture_RGB, buffer, PIX_FMT_RGB24, 720, 576);

    i=0;
    packet = avcodec_alloc_frame();

    while(av_read_frame(formatCtx, packet) >= 0)
    {
    if (packet->stream_index == 0)
    {
    oldSize = packet->size;
    oldData = packet->data;
    while(packet->size > 0)
    {
    bytesDecoded = avcodec_decode_video2(c, picture, &frameFinished, packet);

    if (bytesDecoded >= 0)
    {
    lastTime: packet->size -= bytesDecoded;
    packet->data += bytesDecoded;

    if (frameFinished)
    {
    sws_context = sws_getContext(c->width, c->height, c->pix_fmt, c->width, c->height, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);

    sws_scale(sws_context, picture->data, picture->linesize, 0, c->height, picture_RGB->data, picture_RGB->linesize);

    pixbuf = gdk_pixbuf_new_from_data(picture_RGB->data[0], GDK_COLORSPACE_RGB, 0, 8, c->width, c->height, picture_RGB->linesize[0], NULL, NULL);
    gtk_image_set_from_pixbuf(image, pixbuf);
    g_main_iteration(NULL);
    }
    }
    else
    {
    perror("Error while decoding frame\n");
    }
    }
    packet->size = oldSize;
    packet->data = oldData;
    }
    av_free_packet(packet);
    }

    goto lastTime;

    free(buffer);
    av_free(picture_RGB);
    av_free(picture);

    avcodec_close(c);
    av_close_input_stream(formatCtx);

    return 0;
    }

  27. // this is working code with a recent ffmpeg
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <libavformat/avformat.h>

    struct
    {
    AVFormatContext *ic;
    AVIOContext *pb;
    AVInputFormat *ifmt;
    int ifd;

    }things;

    int read_io(void *opaque,uint8_t *buf,int buf_size);
    int64_t seek_io(void *opaque,int64_t offset,int whence);

    int main(int argc,char **argv)
    {
    uint8_t * iobuf = av_malloc(1024 * 4096);

    //initialize things
    bzero(&things, sizeof(things));
    //register all available file formats and codecs
    av_register_all();

    //we have to open the file ourselves
    things.ifd = open("input.ts", O_RDONLY);
    //register the read and seek functions; the write function is given as NULL,
    //since I didn't want to modify my input file
    things.pb = avio_alloc_context(iobuf, 1024 * 4096, 0, &things.ifd, read_io, NULL, seek_io);
    things.ifmt = av_find_input_format("mpegts");

    // we cannot set the flag below, because then ffmpeg would ignore my AVIOContext
    // things.ifmt->flags |= AVFMT_NOFILE;

    things.ic = avformat_alloc_context();
    // we don't have to set this explicitly; avformat_open_input does it (third param)
    // things.ic->iformat = things.ifmt;

    things.ic->pb = things.pb;

    // the second param cannot be NULL: ffmpeg doesn't check for NULL and just
    // starts copying the data into ic->filename, so this is a workaround

    if (avformat_open_input(&things.ic, "", things.ifmt, NULL) < 0)
    printf("opening input stream fail\n");
    //retrieve the stream information, since open input only reads the header
    if (avformat_find_stream_info(things.ic, NULL))
    {
    printf("Stream info missing\n");
    }

    //handy function (whose documentation is not provided) that prints the stream information
    //4th param: is_output = 1, is_input = 0
    av_dump_format(things.ic, 0, "stream info", 0);

    // I have not done the cleanup part; I hope the reader will do it himself
    return 0;
    }

    int read_io(void *opaque,uint8_t *buf,int buf_size)
    {
    return read(*(int *)opaque,buf,buf_size);

    }
    int64_t seek_io(void *opaque,int64_t offset,int whence)
    {
    return lseek(*(int *)opaque,offset,whence);

    }

  28. Hi, I understand this is 5 years old now, but I would just like to express how useful I found this post. I’ve delved into the depths of the internet looking for an answer, and this post and all the comments below it are by far the most useful I’ve found in guiding me toward an answer to this I/O-from-memory problem.

    Thank you for your posts!
