FFmpeg: How to control console output while reading from RTSP? - c++

So I created a simple console app: an FFmpeg RTSP video stream reader (using only the general FFmpeg C API). But while FFmpeg reads from RTSP it prints lots of info I did not ask for... at least not all of it. So how can I filter what FFmpeg is outputting? I mean, of everything it tells the user-developer, there is only one important line, something like: missing picture in access unit. So how do I put some filter mechanism in place so that FFmpeg does not output everything it wants, and so that I, the developer, can catch the moment the message I want appears? (In my project I write in C++ under Visual Studio, using the Boost libs.)

Use av_log_set_callback to set your own function as the callback:

static void avlog_cb(void *, int level, const char *szFmt, va_list varg)
{
    // do nothing...
}

av_log_set_callback(avlog_cb);

Or you may also use

av_log_set_level(AV_LOG_ERROR);

to print error messages only.
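If you want both at once, silencing the noise while still catching that one line, the callback can format each message and inspect it. A minimal sketch, assuming the substring of interest is "missing picture in access unit" and that the fprintf call is just a placeholder for whatever your application should do:

#include <cstdio>
#include <cstring>
#include <cstdarg>
extern "C" {
#include <libavutil/log.h>
}

static void avlog_filter_cb(void *ptr, int level, const char *szFmt, va_list varg)
{
    char line[1024];
    std::vsnprintf(line, sizeof(line), szFmt, varg);               // expand the message text
    if (std::strstr(line, "missing picture in access unit")) {
        std::fprintf(stderr, "[my app] caught: %s", line);         // placeholder reaction
    }
    // all other messages are dropped instead of reaching the console
}

// during initialization:
// av_log_set_callback(avlog_filter_cb);

Note that FFmpeg may deliver a single console line across more than one callback invocation, so a production version might need to buffer partial lines before matching.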

Related

How to extract audio from video using ffmpeg in C++?

I'm developing a C++ app that uses FFmpeg to play audio/video. Now I want to enhance the application to allow users to extract audio from video. How can FFmpeg be used for this? I searched a lot about this but was not able to find a tutorial on it.
You need to:

Open the input context [ avformat_open_input ]

Get the stream information [ avformat_find_stream_info ]

Get the audio stream:

if (inputFormatContext->streams[index]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
    inputAudioStream = inputFormatContext->streams[index];
}

Read each packet:

AVPacket packet;
int ret = av_read_frame(inputFormatContext, &packet);
if (ret == 0) {
    if (packet.stream_index == inputAudioStream->index) {
        // packet.data will have encoded audio data.
    }
}
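Putting those steps together, here is a minimal sketch using the same API family; the file name is a placeholder, and writing the audio out as a playable file would additionally require remuxing the packets into an output container (libavformat's muxing API), which is omitted here:

extern "C" {
#include <libavformat/avformat.h>
}

int main()
{
    // av_register_all();                       // only needed on older FFmpeg versions
    AVFormatContext *inputFormatContext = nullptr;

    if (avformat_open_input(&inputFormatContext, "input.mp4", nullptr, nullptr) < 0)
        return -1;                               // could not open the file
    if (avformat_find_stream_info(inputFormatContext, nullptr) < 0)
        return -1;                               // could not read stream information

    // let FFmpeg pick the best audio stream instead of scanning manually
    int audioIndex = av_find_best_stream(inputFormatContext, AVMEDIA_TYPE_AUDIO,
                                         -1, -1, nullptr, 0);
    if (audioIndex < 0)
        return -1;                               // no audio stream in this file

    AVPacket packet;
    while (av_read_frame(inputFormatContext, &packet) >= 0) {
        if (packet.stream_index == audioIndex) {
            // packet.data / packet.size hold one encoded audio packet here
        }
        av_packet_unref(&packet);                // release the packet's buffer
    }

    avformat_close_input(&inputFormatContext);
    return 0;
}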
This seems like a simple scripting task... why do you want to use the heavy artillery (C/C++) to swat a fly?
I use AppleScript to build and run an ffmpeg command line via a Bash shell. The only reason I involve AppleScript is so I can invoke it as a droplet (i.e. drag and drop the file(s) onto the app and have it run without interaction).
I get that you're probably on Windows, meaning no AppleScript and no Bash. But surely something lighter than C can build and run an ffmpeg command line. It's really as simple as:
ffmpeg -i infile.mp4 -b 160k outfile.mp3

Read H.265 and VP9 frames?

I'm trying to compare 3 videos that are encoded with H.264, H.265, and VP9.
All of them are made from the same YUV video.
I want to use OpenCV's function to read each frame of the video and do some comparison:
VideoCapture vCap1, vCap2, vCap3;
vCap1.open("h264.mp4");
vCap2.open("h265.mp4");
vCap3.open("vp9.webm");
Mat frame1, frame2, frame3;
while (vCap1.read(frame1) && vCap2.read(frame2) && vCap3.read(frame3))
{
    //do something
}
vCap1 opened successfully, but vCap2 and vCap3 won't open.
Did I miss something I need to include to make this work?
Or does OpenCV simply not support the other two formats?
After using Google :-) I found this:
http://answers.opencv.org/question/10741/videocapture-format-supported-by-opencv/
In particular, make sure you have the needed codecs installed on your system. You can also visit
http://www.fourcc.org/codecs.php
for codecs.
The documentation from OpenCV is indeed not very helpful. :-)
What I would try if you are running under Linux:
strace -xfo dump
and take a look at the system calls. Maybe you can find some hints of missing codec files, the configuration files being used, or other failed system calls. If so, you have a starting point.
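Before tracing system calls, it is also worth checking in code whether each capture actually opened; a small diagnostic sketch, assuming OpenCV 3-style constant names and using your file names:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    const char *files[] = { "h264.mp4", "h265.mp4", "vp9.webm" };
    for (const char *f : files)
    {
        cv::VideoCapture cap(f);
        if (!cap.isOpened())
            std::cout << f << ": failed to open (codec/backend support missing?)" << std::endl;
        else
            std::cout << f << ": opened, reports "
                      << cap.get(cv::CAP_PROP_FRAME_COUNT) << " frames" << std::endl;
    }
    return 0;
}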

Qt Phonon playback fails when a QFile is used for the media source, works fine when a string is passed

Below is the code I am using to play a video
QFile* file = new QFile("C:\\Video\\test.avi");
media->setCurrentSource(Phonon::MediaSource(file));
media->play();
Using this code the playback fails - what I see is the play bar at the bottom, but the video never starts.
If I change the code to the following, everything works as expected:
media->setCurrentSource(Phonon::MediaSource("C:\\Video\\test.avi"));
media->play();
Are there additional initialization steps required when using a QIODevice? Ultimately my code will be using a custom QIODevice, which is not working either.
This is an old post, but I wanted to clear up any confusion in case it helps someone in the future.
Qt does allow you to pass a QIODevice to Phonon::MediaSource(). We successfully deployed our solution by creating our own subclass of QIODevice.
The reason it was not working for me was that Qt was having an issue with the codec I was using. When you use a QIODevice you don't get the same format support as you would if you pass a string.
One other thing to note: while this solution works great on Windows, on a Mac the entire file is loaded into memory before it plays when using a QIODevice. In my case this was a deal breaker. Having an encrypted file is useless if the first thing you do is decrypt the entire file and load it into memory.
From the Phonon::MediaSource documentation:
Warning: On Windows, we only support QIODevices containing the avi,
mp3, or mpg formats. Use the constructor that takes a file name to
open files (the Qt backend does not use a QFile internally).
I think that the last line should answer your question. Instead of a QFile, you can use a QString, or call QFile::fileName like this:
QFile* file = new QFile("C:\\Video\\test.avi");
media->setCurrentSource(Phonon::MediaSource(file->fileName()));
media->play();
If you take a careful look at the Phonon module documentation, you will see that MediaSource cannot be constructed with a QFile*.
By the way, I don't see any Phonon paths in your code. At the very least you should create an audio sink and connect it with the media object:
Phonon::AudioOutput *audioOutput = new Phonon::AudioOutput(Phonon::MusicCategory); // or the category you need
Phonon::createPath(mediaObject, audioOutput);
mediaObject->play();
Works fine with QFile

FTPClient in MFC: GetFile (download) issue

I am using the CFtpConnection class to create my FTP client library in MFC.
I am using GetFile to download a file from the server.
My requirement: if I am downloading a 100 MB video from the server and 50-60 MB of it has been downloaded, and in the meantime I play that file, it should play up to the point that has been downloaded so far.
Is there a way I can do that? Are there any additional parameters I need to pass, or something like that?
My FTP library download method is as follows:
CFtpConnection* m_pConnect;

bool CFTPClient::Download(LPCTSTR pstrRemoteFile, LPCTSTR pstrLocalFile, DWORD dwFlags)
{
    m_pConnect->GetFile(pstrRemoteFile, pstrLocalFile, dwFlags);
    return true;
}
And when calling it in my application I do this:
CFTPClient m_objftpclient;
m_objftpclient.Download("MVI_2884_1.avi", "D:\\MVI_2884_1.avi", FTP_TRANSFER_TYPE_BINARY);
You can't do that easily, or even at all. The GetFile method of CFtpConnection is blocking, which means it only returns when the file has been downloaded. So even if you run it in a thread, the only way you can monitor the download is to check the size of the file on disk.
If you're about to implement video streaming, you should go down a level and work at the socket level. If you really want to use CFtpConnection, you should use the OpenFile method, which returns a CInternetFile that can be read in chunks, allowing you to monitor the download and share the buffer into which the file is downloaded for playback.
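A rough sketch of that chunked approach, reusing the member names from the question; error handling, the MFC exceptions that Read/Write can throw, and the hand-off to the player are all left out:

bool CFTPClient::DownloadInChunks(LPCTSTR pstrRemoteFile, LPCTSTR pstrLocalFile)
{
    // OpenFile returns a CInternetFile we can read piece by piece
    CInternetFile* pRemote = m_pConnect->OpenFile(pstrRemoteFile, GENERIC_READ,
                                                  FTP_TRANSFER_TYPE_BINARY);
    if (pRemote == NULL)
        return false;

    CFile localFile(pstrLocalFile, CFile::modeCreate | CFile::modeWrite);
    BYTE buffer[64 * 1024];
    DWORD totalBytes = 0;
    UINT bytesRead = 0;

    while ((bytesRead = pRemote->Read(buffer, sizeof(buffer))) > 0)
    {
        localFile.Write(buffer, bytesRead);   // the part written so far can already be played
        totalBytes += bytesRead;              // progress is available to the caller here
    }

    pRemote->Close();
    delete pRemote;
    return true;
}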

C++: What's the easiest library to open a video file?

I would like to open a small video file and map every frame in memory (to apply some custom filter). I don't want to handle the video codec; I would rather let the library handle that for me.
I've tried to use DirectShow with the SampleGrabber filter (using this sample: http://msdn.microsoft.com/en-us/library/ms787867(VS.85).aspx), but I only managed to grab some frames (not every frame!). I'm quite new to video software programming; maybe I'm not using the best library, or I'm doing it wrong.
I've pasted part of my code (mainly a modified copy/paste from the MSDN example); unfortunately it doesn't grab the first 25 frames as expected...
[...]
hr = pGrabber->SetOneShot(TRUE);
hr = pGrabber->SetBufferSamples(TRUE);

pControl->Run(); // Run the graph.
pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

// Find the required buffer size.
long cbBuffer = 0;
hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);

for( int i = 0 ; i < 25 ; ++i )
{
    pControl->Run(); // Run the graph.
    pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

    char *pBuffer = new char[cbBuffer];
    hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer);

    AM_MEDIA_TYPE mt;
    hr = pGrabber->GetConnectedMediaType(&mt);
    VIDEOINFOHEADER *pVih;
    pVih = (VIDEOINFOHEADER*)mt.pbFormat;
    [...]
}
[...]
Is there anybody with video software experience who can advise me about the code, or about another, simpler library?
Thanks
Edit:
The MSDN links seem not to work (see the bug).
Currently these are the most popular video frameworks available on Win32 platforms:
Video for Windows: old Windows framework dating back to the age of Win95, but still widely used because it is very simple to use. Unfortunately it supports only AVI files for which the proper VFW codec has been installed.
DirectShow: the standard WinXP framework; it can basically load all the formats you can play with Windows Media Player. Rather difficult to use.
Ffmpeg: more precisely libavcodec and libavformat, which come with the Ffmpeg open-source multimedia utility. It is extremely powerful and can read a lot of formats (almost everything you can play with VLC) even if you don't have the codec installed on the system. It's quite complicated to use, but you can always get inspired by the code of ffplay, which ships with it, or by other implementations in open-source software. Anyway, I think it's still much easier to use than DirectShow (and much faster). It needs to be compiled with MinGW on Windows, but all the steps are explained very well here (at the moment the link is down, hopefully not dead).
QuickTime: the Apple framework is not the best solution for the Windows platform, since it needs the QuickTime app to be installed along with the proper QuickTime codec for every format; it does not support many formats, but it's quite common in the professional field (so some codecs are actually QuickTime-only). It shouldn't be too difficult to implement.
Gstreamer: the latest open-source framework. I don't know much about it; I guess it wraps some of the other systems (but I'm not sure).
All of these frameworks have been implemented as backends in OpenCV's highgui, except for DirectShow. The default framework for Win32 OpenCV is VFW (and it is thus only able to open some AVI files); if you want to use the others you must download the CVS version instead of the official release and still do some hacking on the code, and it's not too complete anyway; for example, the FFmpeg backend doesn't allow seeking in the stream.
If you want to use QuickTime with OpenCV this can help you.
I have used OpenCV to load video files and process them. It's also handy for many types of video processing including those useful for computer vision.
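For reference, the frame-reading loop itself is only a few lines; a minimal sketch (the file name is a placeholder):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("input.avi");   // let OpenCV pick the backend and codec
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        // frame holds one decoded BGR image here: apply the custom filter to it
    }
    return 0;
}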
Using the "Callback" model of SampleGrabber may give you better results. See the example in Samples\C++\DirectShow\Editing\GrabBitmaps.
There's also a lot of info in Samples\C++\DirectShow\Filters\Grabber2\grabber_text.txt and readme.txt.
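To give an idea of what that callback model looks like: the sketch below assumes the deprecated qedit.h ISampleGrabberCB interface and the ISampleGrabber* pGrabber from the question, and the trivial IUnknown implementation assumes the callback object outlives the graph:

class SampleGrabberCallback : public ISampleGrabberCB
{
public:
    // called for every frame when SetCallback(..., 1) is used
    STDMETHODIMP BufferCB(double SampleTime, BYTE *pBuffer, long BufferLen)
    {
        // pBuffer is only valid during this call: copy or process it here
        return S_OK;
    }
    STDMETHODIMP SampleCB(double SampleTime, IMediaSample *pSample) { return S_OK; }

    // minimal IUnknown: no real reference counting, object lives on the stack/statically
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown) {
            *ppv = this;
            return S_OK;
        }
        return E_NOINTERFACE;
    }
};

// wiring it up:
// SampleGrabberCallback callback;
// pGrabber->SetOneShot(FALSE);            // keep the graph running
// pGrabber->SetBufferSamples(FALSE);      // no internal buffering needed
// pGrabber->SetCallback(&callback, 1);    // 1 = use BufferCB, 0 = use SampleCB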
I know it is very tempting in C++ to get a proper breakdown of the video files and just do it yourself. But although the information is out there, it is such a long-winded process building classes to handle each file format, and making them easily alterable to take future structure changes into account, that frankly it is just not worth the effort.
Instead I recommend ffmpeg. It got a mention above that says it is difficult; it isn't difficult. There are a lot more options than most people will ever need, which makes it look more difficult than it is. For the majority of operations you can just let ffmpeg work it out for itself.
For example a file conversion
ffmpeg -i inputFile.mp4 outputFile.avi
Decide right from the start that you will have ffmpeg operations run in a thread, or more precisely a thread library. But have your own thread class wrap it so that you can have your own EventArgs and methods of checking that the thread is finished. Something like:
class ThreadLibManager
{
    List<MyThread> listOfActiveThreads = new List<MyThread>();
    public void AddThread(MyThread thread) { listOfActiveThreads.Add(thread); }
}
Your thread class is something like:
class MyThread
{
    public Thread threadForThisInstance { get; set; }
    public MyFFMpegTools mpegTools { get; set; }
}
MyFFMpegTools performs many different video operations, so you want your own event args to tell your parent code precisely what type of operation has just raised an event.
class MyFfmpegArgs : EventArgs
{
    public int thisThreadID { get; set; } // Set as a new MyThread is added to the List<>
    public MyFfmpegType operationType { get; set; }
    // output paths etc. that the parent handler will need to find output files
}
enum MyFfmpegType
{
    FF_CONVERTFILE = 0, FF_CREATETHUMBNAIL, FF_EXTRACTFRAMES ...
}
Here is a small snippet of my ffmpeg tool class; this part collects information about a video.
I put FFmpeg in a particular location, and at the start of the software running it makes sure that it is there. For this version I have moved it to the Desktop; I am fairly sure I have written the path correctly for you (I really hate MS's special folders system, so I ignore it as much as I can).
Anyway, it is an example of using windowless ffmpeg.
public string GetVideoInfo(FileInfo fi)
{
    outputBuilder.Clear();   // outputBuilder: a List<string> field collecting ffmpeg's console output
    string strCommand = string.Concat(" -i \"", fi.FullName, "\"");
    string ffPath =
        System.Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "\\ffmpeg.exe";
    string oStr = "";
    try
    {
        Process build = new Process();
        //build.StartInfo.WorkingDirectory = @"dir";
        build.StartInfo.Arguments = strCommand;
        build.StartInfo.FileName = ffPath;
        build.StartInfo.UseShellExecute = false;
        build.StartInfo.RedirectStandardOutput = true;
        build.StartInfo.RedirectStandardError = true;
        build.StartInfo.CreateNoWindow = true;
        build.ErrorDataReceived += build_ErrorDataReceived;
        build.OutputDataReceived += build_ErrorDataReceived;
        build.EnableRaisingEvents = true;
        build.Start();
        build.BeginOutputReadLine();
        build.BeginErrorReadLine();
        build.WaitForExit();

        string findThis = "start";
        int offset = 0;
        foreach (string str in outputBuilder)
        {
            if (str.Contains("Duration"))
            {
                offset = str.IndexOf(findThis);
                oStr = str.Substring(0, offset);
            }
        }
    }
    catch
    {
        oStr = "Error collecting file information";
    }
    return oStr;
}

private void build_ErrorDataReceived(object sender, DataReceivedEventArgs e)
{
    string strMessage = e.Data;
    if (outputBuilder != null && strMessage != null)
    {
        outputBuilder.Add(string.Concat(strMessage, "\n"));
    }
}
Try using the OpenCV library. It definitely has the capabilities you require.
This guide has a section about accessing frames from a video file.
If it's for AVI files, I'd read the data from the AVI file myself and extract the frames, then use the Video Compression Manager to decompress them.
The AVI file format is very simple; see: http://msdn.microsoft.com/en-us/library/dd318187(VS.85).aspx (and use Google).
Once you have the file open, you just extract each frame and pass it to ICDecompress() to decompress it.
It seems like a lot of work, but it's the most reliable way.
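If hand-parsing the RIFF chunks sounds like too much, note that the Win32 AVIFile helper API (vfw.h) will do the container reading and the ICDecompress step for you; a sketch of that alternative, with a placeholder file name:

#include <windows.h>
#include <vfw.h>
#pragma comment(lib, "vfw32.lib")

int main()
{
    AVIFileInit();                                 // initialize the AVIFile library

    PAVISTREAM pStream = NULL;
    if (AVIStreamOpenFromFile(&pStream, TEXT("input.avi"),
                              streamtypeVIDEO, 0, OF_READ, NULL) != 0)
        return -1;                                 // no video stream / file not found

    // the GetFrame object handles the ICDecompress step internally
    PGETFRAME pFrame = AVIStreamGetFrameOpen(pStream, NULL);
    if (pFrame != NULL)
    {
        LONG first = AVIStreamStart(pStream);
        LONG count = AVIStreamLength(pStream);
        for (LONG i = first; i < first + count; ++i)
        {
            // returns a packed DIB (BITMAPINFOHEADER followed by the pixel data)
            LPBITMAPINFOHEADER lpbi = (LPBITMAPINFOHEADER)AVIStreamGetFrame(pFrame, i);
            if (lpbi == NULL)
                break;
            // apply the custom filter to the pixels here
        }
        AVIStreamGetFrameClose(pFrame);
    }

    AVIStreamRelease(pStream);
    AVIFileExit();
    return 0;
}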
If that's too much work, or if you want more than AVI files, then use ffmpeg.
OpenCV is the best solution if, in your case, video only needs to lead to a sequence of pictures. If you want to do real video processing, so that video equals "visual audio", you need to keep track of the frameworks listed by "martjno". Newer Windows solutions, also for Win7, include three additional possibilities:
Windows Media Foundation: successor of DirectShow; cleaned-up interface
Windows Media Encoder 9: it does not only include the program, it also ships libraries for coding
Windows Expression 4: successor of 2.
The last two are commercial-only solutions, but the first one is free. To code against WMF, you need to install the Windows SDK.
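For a taste of what the Media Foundation route looks like, here is a minimal source-reader sketch; the path is a placeholder and error handling is trimmed to keep it short:

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);                              // start Media Foundation

    IMFSourceReader *pReader = NULL;
    if (SUCCEEDED(MFCreateSourceReaderFromURL(L"C:\\Video\\test.mp4", NULL, &pReader)))
    {
        for (;;)
        {
            DWORD streamIndex = 0, flags = 0;
            LONGLONG timestamp = 0;
            IMFSample *pSample = NULL;

            // pull the next sample from the first video stream
            HRESULT hr = pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                                             &streamIndex, &flags, &timestamp, &pSample);
            if (FAILED(hr) || (flags & MF_SOURCE_READERF_ENDOFSTREAM))
                break;
            if (pSample)
            {
                // inspect or copy the sample's media buffers here
                pSample->Release();
            }
        }
        pReader->Release();
    }

    MFShutdown();
    CoUninitialize();
    return 0;
}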
I would recommend FFmpeg or GStreamer. Try to stay away from OpenCV unless you plan to use functionality other than just streaming video. The library is a beefy build and a pain to install from source when configuring the FFmpeg/GStreamer options.