I'm currently setting up my output context for creating .avi like this:
avformat_alloc_output_context2(&outContext, NULL, NULL, "out.avi");
if (!outContext)
die("Could not allocate output context");
However, the resulting video quality is very unpleasant. As such, I'd like to be able to fetch the codecs installed on the system and use one of them in avformat_alloc_output_context2.
So I guess my two questions are:
How do I create a list (array) containing the codecs installed on the system?
How do I use one of them in the output container?
If possible, I'd also like to be able to modify output quality (0%-100%) and open the codec configuration window.
First, build a map from codec names (or whatever key you prefer) to AVCodecID, like this:
std::map<string, AVCodecID> _codecList;
_codecList["h264"] = AV_CODEC_ID_H264;
_codecList["mpeg4"] = AV_CODEC_ID_MPEG4;
....
Note that FFmpeg does not provide information about which codec is allowed in which container, so you have to validate that yourself. The following comparison is a useful reference: https://en.wikipedia.org/wiki/Comparison_of_video_container_formats
The next thing to do is find the encoder, either by name or by AVCodecID:
avcodec_find_encoder_by_name("libx264");
avcodec_find_encoder(AV_CODEC_ID_H264);
Both return an AVCodec*, which you can pass to avformat_new_stream(), like this:
AVCodecID codec_id = (_codecList.find("h264") != _codecList.end()) ?
                     _codecList["h264"] : AV_CODEC_ID_NONE;
if(codec_id == AV_CODEC_ID_NONE)
{
return -1;
}
AVCodec* encoder = avcodec_find_encoder(codec_id);
// or you can just get it from avcodec_find_encoder_by_name("libx264");
AVStream* newStream = avformat_new_stream(avFormatContext, encoder);
There are many things that determine video quality, and x264 in particular has a lot of options. The usual choice is between a CRF value and a target bitrate (you can't use both at once). You configure them through the AVCodecContext:
AVCodecContext* codec_ctx = newStream->codec;
codec_ctx->bit_rate = 1000000; // 1 Mbit/s
// codec_ctx->qmin = 18;
// codec_ctx->qmax = 31;
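For CRF-based rate control with libx264, the option lives in the encoder's private context rather than in a public AVCodecContext field. A minimal sketch (the value 23 is just libx264's default, not something from this answer; set it before calling avcodec_open2 below):

#include <libavutil/opt.h>

// Quality-based rate control: lower CRF means higher quality
// (usable range is roughly 0-51; do not also set bit_rate).
av_opt_set(codec_ctx->priv_data, "crf", "23", 0);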
Once you're done, open the codec with avcodec_open2 (note that it takes the codec context, not the format context):
avcodec_open2(codec_ctx, encoder, NULL);
And don't forget to close it when you release everything:
avcodec_close(codec_ctx);
There is much more to do when creating your own output stream; if you already have some experience with FFmpeg, I think this answer will be enough. But if you don't have much experience with FFmpeg, you can find my full example here: https://github.com/sorrowhill/FFmpegTutorial
I am new to the DirectShow API.
I want to decode a media file and get uncompressed RGB video frames using DirectShow.
I noted that all such operations should be completed through a GraphBuilder. Also, every processing block is called a filter, and there are many different filters for different media types; for example, for decoding H264 we should use the "Microsoft MPEG-2 Video Decoder", for AVI files the "AVI Splitter" filter, etc.
I would like to know whether there is a general way (a universal decoder) that can handle all those file types.
I would really appreciate it if someone could point out an example that goes from opening a local file to decoding it into uncompressed RGB frames. All the examples I found deal with window handles: they just configure the graph and call pGraph->Run(). I have also looked through the Windows SDK samples, but couldn't find a useful one.
Thanks very much in advance.
A universal DirectShow decoder is, in general, against the concept of the DirectShow API. The whole idea is that individual filters are responsible for individual tasks (especially decoding a certain encoding or demultiplexing a certain container format). The registry of filters and Intelligent Connect let one build the filters into a chain that performs the requested processing, in particular decoding from a compressed format to 24-bit RGB for video.
From this standpoint you don't need a universal decoder, and it is not expected that such a decoder exists. However, such a decoder (or something close to it) does exist: ffdshow and its derivatives. Presently, you might want to look at LAVFilters, for example. They wrap FFmpeg, which itself can handle many formats, and connect it to the DirectShow API so that, as a filter, it can handle many formats/encodings.
There is no general rule on whether to use such a codec pack; in most cases you take various factors into consideration and decide what to do. If your application handles various scenarios, a good starting point into graph building is Overview of Graph Building.
My goal is to accomplish the task using DirectShow in order to have no external dependencies. Do you know of a particular example that decompresses frames for some file type?
Your request is too broad, yet at the same time typical and, to some extent, fairly simple. If you spend some time playing with the GraphEdit SDK tool, or rather with GraphStudioNext, a more powerful version of the former, you will be able to build filter graphs interactively, render media files of different types, and see which filters participate in the rendering. You can accomplish the very same thing programmatically, since the interactive actions basically all have matching API calls.
You will see that specific formats are handled by different filters, and that the Intelligent Connect mentioned above builds chains of filters in combination in order to satisfy the request and get the pipeline together.
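For instance, the interactive "Render Media File" action in GraphEdit corresponds to IGraphBuilder::RenderFile. A minimal sketch (assuming COM is already initialized on the calling thread):

#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

HRESULT RenderOnce(LPCWSTR path)
{
    IGraphBuilder *pGraph = NULL;
    HRESULT hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IGraphBuilder, (void**)&pGraph);
    if (FAILED(hr))
        return hr;
    // Intelligent Connect picks and chains the filters for this file type.
    hr = pGraph->RenderFile(path, NULL);
    pGraph->Release();
    return hr;
}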
The default use case is playback, and if you want to get video rendered to 24/32-bit RGB, your course of action is pretty similar: you build a graph, which just needs to terminate with something other than the default renderer. The more flexible and sophisticated approach, typical for advanced development, is to supply a custom video renderer filter and accept the decompressed RGB frames on it.
A simple and very popular version of the solution is to use the Sample Grabber filter: initialize it to accept RGB, set a callback on it so that your SampleCB method is called every time an RGB frame is decompressed, and use the Sample Grabber in the graph; a minimal callback sketch follows the links below. (You will find a great many attempts to accomplish this if you search open source code and/or the web for the keywords ISampleGrabber, ISampleGrabberCB, SampleCB or BufferCB, MEDIASUBTYPE_RGB24.)
Using the Sample Grabber
DirectShow: Examples for Using SampleGrabber for Grabbing a Frame and Building a VU Meter
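Here is what the callback part might look like. This is a sketch only: it assumes the deprecated qedit.h header (removed from newer Windows SDKs) is available, and it omits graph construction entirely.

#include <dshow.h>
#include <qedit.h>  // ISampleGrabber/ISampleGrabberCB; deprecated header

class FrameGrabberCB : public ISampleGrabberCB
{
    LONG m_refCount;
public:
    FrameGrabberCB() : m_refCount(1) {}

    // Called synchronously for every decompressed frame
    // when SetCallback(pCB, 0) was used.
    STDMETHODIMP SampleCB(double sampleTime, IMediaSample *pSample)
    {
        BYTE *pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            long cb = pSample->GetActualDataLength();
            // pData holds one RGB24 frame (bottom-up DIB rows).
            // Copy it out quickly; this call blocks the streaming thread.
        }
        return S_OK;
    }
    STDMETHODIMP BufferCB(double, BYTE*, long) { return E_NOTIMPL; }

    // Minimal COM plumbing.
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown)
        {
            *ppv = static_cast<ISampleGrabberCB*>(this);
            AddRef();
            return S_OK;
        }
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_refCount); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG rc = InterlockedDecrement(&m_refCount);
        if (rc == 0) delete this;
        return rc;
    }
};

// Elsewhere, after adding the Sample Grabber filter to the graph:
//   AM_MEDIA_TYPE mt = {0};
//   mt.majortype = MEDIATYPE_Video;
//   mt.subtype   = MEDIASUBTYPE_RGB24;
//   pGrabber->SetMediaType(&mt);
//   pGrabber->SetCallback(&myCallback, 0);  // 0 selects SampleCB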
Another more or less popular approach is to set up a playback pipeline, play the file, and read the frames back from the video presenter. This is what another answer to this question suggests; it is relatively easy to do, and it does the job if you have no performance requirement and no requirement to extract every single frame. That is, it is a good way to get a random RGB frame from the feed, but not every frame. See related:
Different approaches on getting captured video frames in DirectShow
You are looking for the vmr9 sample in the DirectShow SDK.
In your Windows SDK's install, look for this example:
Microsoft SDKs\Windows\v7.0\Samples\multimedia\directshow\vmr9\windowless\windowless.sln
Then search for the CaptureImage function; the IVMRWindowlessControl9::GetCurrentImage call inside it is exactly what you want.
This method captures a video frame in bitmap format (RGB).
Here is a copy of the CaptureImage code:
BOOL CaptureImage(LPCTSTR szFile)
{
HRESULT hr;
if(pWC && !g_bAudioOnly)
{
BYTE* lpCurrImage = NULL;
// Read the current video frame into a byte buffer. The information
// will be returned in a packed Windows DIB and will be allocated
// by the VMR.
if(SUCCEEDED(hr = pWC->GetCurrentImage(&lpCurrImage)))
{
BITMAPFILEHEADER hdr;
DWORD dwSize, dwWritten;
LPBITMAPINFOHEADER pdib = (LPBITMAPINFOHEADER) lpCurrImage;
// Create a new file to store the bitmap data
HANDLE hFile = CreateFile(szFile, GENERIC_WRITE, FILE_SHARE_READ, NULL,
CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
if (hFile == INVALID_HANDLE_VALUE)
return FALSE;
// Initialize the bitmap header
dwSize = DibSize(pdib);
hdr.bfType = BFT_BITMAP;
hdr.bfSize = dwSize + sizeof(BITMAPFILEHEADER);
hdr.bfReserved1 = 0;
hdr.bfReserved2 = 0;
hdr.bfOffBits = (DWORD)sizeof(BITMAPFILEHEADER) + pdib->biSize +
DibPaletteSize(pdib);
// Write the bitmap header and bitmap bits to the file
WriteFile(hFile, (LPCVOID) &hdr, sizeof(BITMAPFILEHEADER), &dwWritten, 0);
WriteFile(hFile, (LPCVOID) pdib, dwSize, &dwWritten, 0);
// Close the file
CloseHandle(hFile);
// The app must free the image data returned from GetCurrentImage()
CoTaskMemFree(lpCurrImage);
// Give user feedback that the write has completed
TCHAR szDir[MAX_PATH];
GetCurrentDirectory(MAX_PATH, szDir);
// Strip off the trailing slash, if it exists
int nLength = (int) _tcslen(szDir);
if (szDir[nLength-1] == TEXT('\\'))
szDir[nLength-1] = TEXT('\0');
Msg(TEXT("Captured current image to %s\\%s."), szDir, szFile);
return TRUE;
}
else
{
Msg(TEXT("Failed to capture image! hr=0x%x"), hr);
return FALSE;
}
}
return FALSE;
}
I have an STL queue of pointers to decoded frames on the heap
(e.g. decode_queue<AVFrame *>),
and I want to take those frames from the queue and encode them in WAV format. I tried using the examples that come with FFmpeg, but they break when I change the encoder to pcm_s32le/pcm_s16le. For example, in the decoding_encoding.c example I just changed some parameters, and all of a sudden I am getting a floating point exception.
Is there something I can do? I am really lost.
UPDATE
In decoding_encoding.c I changed the line:
codec = avcodec_find_encoder(AV_CODEC_ID_MP2);
to
codec = avcodec_find_encoder(AV_CODEC_ID_PCM_S16LE);
This change ended up making c->frame_size == 0, so the buffer is never created:
buffer_size = av_samples_get_buffer_size(NULL, c->channels, c->frame_size,
c->sample_fmt, 0);
In that case instead of the line:
frame->nb_samples = c->frame_size;
I changed it to:
frame->nb_samples = 10000;
and
buffer_size = av_samples_get_buffer_size(NULL, c->channels, 10000,
c->sample_fmt, 0);
just to see what happens (basically substituting 10000 for c->frame_size). It compiles and creates an output file, but players cannot find a stream in it, or it is filled with garbage. At this point I am still stuck and not sure what to do to get the output WAV file.
Also, is a RIFF header automatically added, or do I have to add it manually as well?
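For reference, a more defensive version of the substitution described above, as a sketch reusing the variable names from decoding_encoding.c, would be:

/* PCM encoders are not frame-size constrained, so c->frame_size is 0
 * and the caller must pick a block size itself. */
int nb_samples = c->frame_size ? c->frame_size : 4096;

frame->nb_samples     = nb_samples;
frame->format         = c->sample_fmt;
frame->channel_layout = c->channel_layout;

buffer_size = av_samples_get_buffer_size(NULL, c->channels, nb_samples,
                                         c->sample_fmt, 0);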
I am trying to convert videos for playback on Android, using the H264 and AAC codecs and an mp4 container. The video plays normally in non-system players, but the system player shows the error "Can't play this video".
I found out that the problem is the moov atom, which is written at the end of the file.
When I use the "-movflags +faststart" ffmpeg flag to convert the video, it plays fine, but when I try to do the same programmatically, it has no effect. I use the following code:
av_dict_set( &dict, "movflags", "faststart", 0 );
ret = avformat_write_header( ofmt_ctx, &dict );
These calls succeed (avformat_write_header returns without error), but the problem is not solved: I still can't play the converted videos on Android devices.
I assume this answer is very late but, for anyone who might still be facing the same issue: this can be caused by AV_CODEC_FLAG_GLOBAL_HEADER not being set on the audio/video AVCodecContext. A lot of guides show it being set on the AVFormatContext, but it needs to be set on the AVCodecContext before opening it with avcodec_open2:
if (format_context->oformat->flags & AVFMT_GLOBALHEADER) {
video_codec_context->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
avcodec_open2(video_codec_context, video_codec, nullptr);
Maybe the video is not compatible with your Android phone? Try converting with the H264 baseline profile.
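With libx264 that would be the encoder's private "profile" option; a minimal sketch (the variable name is assumed, not from the question):

#include <libavutil/opt.h>

// Baseline is the profile that older Android hardware decoders
// are most likely to support.
av_opt_set(video_codec_context->priv_data, "profile", "baseline", 0);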
TL;DR
Set the url field of the AVFormatContext before calling avformat_write_header.
Why
I hit the same issue today, and I found this log line when calling av_write_trailer:
Unable to re-open output file for the second pass (faststart)
In the movenc.c implementation, we can see it needs s->url to re-open the file:
avio_flush(s->pb);
ret = s->io_open(s, &read_pb, s->url, AVIO_FLAG_READ, NULL);
if (ret < 0) {
av_log(s, AV_LOG_ERROR, "Unable to re-open %s output file for "
"the second pass (faststart)\n", s->url);
goto end;
}
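A minimal sketch of the fix (assuming an FFmpeg version where AVFormatContext has the url field):

/* Either let avformat_alloc_output_context2 fill in url from the
 * filename argument ... */
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, "out.mp4");

/* ... or set it yourself before avformat_write_header. The field
 * must be allocated with the av_malloc family, hence av_strdup. */
ofmt_ctx->url = av_strdup("out.mp4");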
Is there any way to capture frames from as many camera types as DirectShow does on the Windows platform, using libav? I need to capture camera output without using DirectShow filters, and I want my application to work with many types of camera devices.
I have searched the Internet for this capability of libav and found that it can be done via the special input format "vfwcap". Something like this (I'm not sure about the code's correctness; I wrote it myself):
AVFormatParameters formatParams = {0};
AVInputFormat* pInfmt = NULL;
AVFormatContext* pInFormatCtx = NULL;
av_register_all();
//formatParams.device = NULL; //this was probably deprecated and then removed
formatParams.channel = 0;
formatParams.standard = "ntsc"; //deprecated too but still available
formatParams.width = 640;
formatParams.height = 480;
formatParams.time_base.num = 1000;
formatParams.time_base.den = 30000; //so we want 30000/1000 = 30 frames per second
formatParams.prealloced_context = 0;
pInfmt = av_find_input_format("vfwcap");
if( !pInfmt )
{
    fprintf(stderr, "Unknown input format\n");
    return -1;
}
// Open the capture device ("0" selects the first device;
// the parameters can probably be NULL for autodetection)
if (av_open_input_file(&pInFormatCtx, "0", pInfmt, 0, &formatParams) < 0)
    return -1; // Couldn't open device
/* Same as video4linux code*/
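For what it's worth, newer libav/FFmpeg versions removed AVFormatParameters; the AVDictionary-based equivalent would look roughly like this (a sketch, untested):

AVFormatContext* ctx = NULL;
AVDictionary* opts = NULL;

avdevice_register_all();  // registers vfwcap/dshow from libavdevice

av_dict_set(&opts, "video_size", "640x480", 0);
av_dict_set(&opts, "framerate", "30", 0);

// "0" is the first VfW capture device
if (avformat_open_input(&ctx, "0", av_find_input_format("vfwcap"), &opts) < 0)
    return -1;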
So another question is: how many devices does libav support? All I have found about capturing camera output with libav on Windows is advice to use DirectShow instead, because libav supports too few devices. Maybe the situation has changed by now, and it supports enough devices to be used in production applications?
If this isn't possible... well, I hope my question won't be useless, and that this piece of code, composed from different sources, will help someone interested in the topic, because there is really very little information about it on the whole Internet.
FFmpeg cannot capture video on Windows. I once had to implement this myself, using DirectShow capturing.
I am trying to seek to a given frame in a video using the FFmpeg libraries. I know there is the av_seek_frame() function, but it is recommended to use avformat_seek_file() instead; something similar is mentioned here.
I know that avformat_seek_file() can't always take you to the exact frame you want, but that is OK for me; I just want to jump to the nearest keyframe. So I open the video, find the video stream, and call it like this:
avformat_seek_file( formatContext, streamId, 0, frameNumber, frameNumber, AVSEEK_FLAG_FRAME )
It always returns 0, so I take that as a successful finish. However, it doesn't work as it should. I check the byte position (as described here) before and after calling avformat_seek_file(). It does change, but it always changes in the same way: the byte position after the call is the same no matter which target frameNumber I pass. Obviously I'm doing something wrong, but I don't know what exactly. I don't know if it matters, but I'm using raw .h264 files. I tried different flags, different files, using timestamps instead of frame numbers, flushing buffers before and after, and so on, but nothing works for me. I would be very grateful if someone could show me what is wrong.
I had the same issue; see the code below (it works for me):
...
checkPosition(input_files[file_index].ctx, tm); /* tm: target position in milliseconds */
...
void checkPosition(AVFormatContext *is, int64_t tm) {
    int stream_index = av_find_default_stream_index(is);
    // Convert the target position (milliseconds) into the stream's time_base
    tm = av_rescale(tm, is->streams[stream_index]->time_base.den, is->streams[stream_index]->time_base.num);
    tm /= 1000;
    // Seek to the nearest seek point at or before tm
    if (avformat_seek_file(is, stream_index, INT64_MIN, tm, INT64_MAX, 0) < 0) {
        av_log(NULL, AV_LOG_ERROR, "ERROR avformat_seek_file: %lld\n", (long long)tm);
    } else {
        av_log(NULL, AV_LOG_ERROR, "SUCCEEDED avformat_seek_file: %lld newPos:%lld\n", (long long)tm, (long long)is->pb->pos);
        avcodec_flush_buffers(is->streams[stream_index]->codec);
    }
}
Your problem may be related to the fact that your input is raw .h264. Try using e.g. MP4Box to mux it into an .mp4 file, then load the mp4 file with ffmpeg and try seeking to a keyframe again, e.g.:
mp4box -new -add my_file.h264 my_file.mp4