C/C++ program for the ffmpeg command line

I want to find the corresponding c/c++ program for the ffmpeg command line:
ffmpeg -i sample.mp4 -an -vcodec libx264 -crf 23 outfile.h264
which converts a sample.mp4 file to outfile.h264, and
ffplay outfile.h264
What I am doing is combining the ffmpeg program with my UDP socket program to do real-time video transmission. Because it is real-time, I want to take the ffmpeg code and cut in at the frame encoding step: instead of writing the encoded frames to an output file, I want to send them frame by frame, and read them frame by frame on the server side.
My questions are:
1. What C/C++ program is actually running when I use the above command line?
2. Where can I find the C/C++ program?

ffmpeg is running. As with any other program, there's an actual file on the PATH called ffmpeg. Use which ffmpeg (or where ffmpeg on Windows) to find it. On the systems at my university it is in /usr/pkg/bin/ffmpeg, but /usr/bin/ffmpeg is probably more typical.
The program itself is the ffmpeg file. If you mean the source code for the program, that is most likely not installed on your computer - you'll need to download it from somewhere (such as from the official FFmpeg website).
Note that FFmpeg is both a program and a set of libraries (libavformat, libavcodec, libavutil, etc). Most of the work is done in the libraries; the ffmpeg program itself just glues them together depending on the command line options.
As such, a more useful approach might be to learn how to use libavformat and libavcodec (and whichever other FFmpeg libraries) to do what you want. The documentation at http://ffmpeg.org/documentation.html doesn't seem to go into much detail; you might want to search for some tutorials.
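If you want to go that route, here is a rough sketch of what the command above boils down to with the libraries: open sample.mp4 with libavformat, decode the video stream, re-encode it with libx264 at CRF 23, and hand each encoded packet to your own code instead of writing outfile.h264. Assumptions: FFmpeg's send/receive API (version 3.1 or later), and a placeholder send_over_udp() standing in for your socket code; error handling, pixel-format conversion and encoder flushing are omitted.

/* Sketch only: decode sample.mp4 and re-encode its video to H.264,
   handing each compressed packet to the caller instead of a file. */
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>

static void send_over_udp(const uint8_t *data, int size)
{
    /* placeholder: your UDP socket code goes here */
    (void)data; (void)size;
}

int main(void)
{
    AVFormatContext *fmt = NULL;
    avformat_open_input(&fmt, "sample.mp4", NULL, NULL);
    avformat_find_stream_info(fmt, NULL);

    int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    AVStream *st = fmt->streams[vidx];

    /* decoder for whatever codec the .mp4 actually contains */
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *dctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(dctx, st->codecpar);
    avcodec_open2(dctx, dec, NULL);

    /* libx264 encoder, roughly "-an -vcodec libx264 -crf 23" */
    const AVCodec *enc = avcodec_find_encoder_by_name("libx264");
    AVCodecContext *ectx = avcodec_alloc_context3(enc);
    ectx->width     = dctx->width;
    ectx->height    = dctx->height;
    ectx->pix_fmt   = AV_PIX_FMT_YUV420P;  /* use sws_scale if the input differs */
    ectx->time_base = av_inv_q(av_guess_frame_rate(fmt, st, NULL));
    av_opt_set(ectx->priv_data, "crf", "23", 0);
    avcodec_open2(ectx, enc, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVPacket *out = av_packet_alloc();
    AVFrame  *frm = av_frame_alloc();

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vidx) {
            avcodec_send_packet(dctx, pkt);
            while (avcodec_receive_frame(dctx, frm) == 0) {
                /* carry the decoded timestamp into the encoder's time base */
                frm->pts = av_rescale_q(frm->best_effort_timestamp,
                                        st->time_base, ectx->time_base);
                avcodec_send_frame(ectx, frm);
                while (avcodec_receive_packet(ectx, out) == 0) {
                    /* this is the data ffmpeg would have written to outfile.h264 */
                    send_over_udp(out->data, out->size);
                    av_packet_unref(out);
                }
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frm);
    av_packet_free(&pkt);
    av_packet_free(&out);
    avcodec_free_context(&dctx);
    avcodec_free_context(&ectx);
    avformat_close_input(&fmt);
    return 0;
}

Build with something like gcc demo.c -lavformat -lavcodec -lavutil (library names vary by distribution); on FFmpeg older than 4.0 you also need to call av_register_all() before opening the input.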

Related

MP3 decoder using IPP

I need an MP3 decoder (no need for encoding), and since I already own Intel IPP, I wonder if I can use it instead of searching for other libraries such as mpg123 or ffmpeg. This page contains the IPP sample archive:
https://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-intel-ipp-v61-library
There is something called UMC, which is supposed to be a generalized codec system, but the documentation is virtually nonexistent, it is spread across many subprojects, and the downloads are different for Windows and Mac.
Is UMC the right way to go to get an MP3 decoder? Does it work on both Windows & Mac?

QNX Microphone sampling and speaker playback

I am using the QNX Neutrino RTOS, and I am new to QNX. I have set up my first project with some IPC messaging between two threads.
What I want to do is have one thread act as a microphone "driver" that samples input from the microphone and stores/sends it as PCM packets to another thread, which plays it out of the speaker.
So, are there any audio support libraries? What is the best way to record microphone input and play it back through the speaker?
Yes, QNX comes with an audio library.
The audio library is documented starting at this location (6.5 SP1 version):
http://www.qnx.com/developers/docs/6.5.0_sp1/index.jsp?topic=%2Fcom.qnx.doc.neutrino_audio%2Fabout.html&cp=13_1
Your QNX system includes a utility (command) called "wave" for playing back a .wav file, and "waverec" for recording audio from the microphone and saving it to a .wav file.
You can run "use wave" and "use waverec" to get information about the supported command-line options.
The documentation includes the complete source of the wave and waverec utilities:
wave.c:
http://www.qnx.com/developers/docs/6.5.0_sp1/index.jsp?topic=%2Fcom.qnx.doc.neutrino_audio%2Fwavec.html
waverec.c:
http://www.qnx.com/developers/docs/6.5.0_sp1/index.jsp?topic=%2Fcom.qnx.doc.neutrino_audio%2Fwaverec.html
The recommended way to get started with audio recording and playback is to first make sure the wave and waverec binaries shipped with the system work. After that, build the supplied source and confirm it still works, then understand it and embed it in your application, possibly after stripping it down (the samples are generic, and you may want to hard-code certain features that they configure dynamically).
You need to link against the libasound.so library in order to build the samples.
A minimal command-line example (tested) to build wave.c for ARMv7 and x86:
ntoarmv7-gcc wave.c -o wave -l asound
ntox86-gcc wave.c -o wave -l asound
If you are building via the IDE, you need to add the library in the project's linker settings.
You are welcome to post here any questions you may have while working with the samples.

Using Standard .so files in GStreamer

I'm a beginner in GStreamer programming.
I created a pipeline for playing media in a file.
I would like to use libogg.so for decoding the ogg format files rather than libgstogg.so.
How can I use this decoder in my pipeline?
Everybody is a beginner in GStreamer programming, it seems :)
You will have to write a program, perhaps a C program, and link it against libogg.so rather than libgstogg.so.
You can achieve the above easily using CMake.
To confirm which library the program actually picked up, you can:
try renaming libgstogg, or
run strace to see which file the binary loaded, or
build your program as a shared library and run ldd on it.
You seem to be missing some basic concepts here:
GStreamer is a framework for building media pipelines from elements.
Elements are provided via a plugin system; the plugin files are shared objects (.so files).
You cannot use just any shared object as a plugin (GStreamer will not recognize e.g. libc.so as a plugin, simply because libc.so does not provide a GStreamer plugin).
So in your particular case, you want to replace libgstogg.so with libogg.so.
Now, libgstogg.so is a shared object that provides the ogg family of GStreamer plugins.
On the other side, libogg.so is an implementation of an ogg muxer/demuxer that has nothing to do with GStreamer.
If you want to use libogg.so, you will have to create your own GStreamer plugin that uses libogg.so. Check the documentation on how to do that.
Before you do so, check what libgstogg.so actually does. You will discover that it provides a GStreamer plugin that uses libogg.so to do the actual muxing/demuxing. It is really just the bridge that you are looking for:
$ ldd /usr/lib/x86_64-linux-gnu/gstreamer-0.10/libgstogg.so | grep ogg
libogg.so.0 => /usr/lib/x86_64-linux-gnu/libogg.so.0 (0x00007fc3f9ae7000)
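To make that concrete, here is a minimal sketch of a C program that plays an ogg file (GStreamer 1.0 API assumed; the 0.10 calls are very similar, and "sample.ogg" is just a placeholder path). Note that it links only against the GStreamer core library: the oggdemux element comes from libgstogg.so, which the plugin system loads at runtime and which in turn calls into libogg.so.

/* Sketch: play an Ogg/Vorbis file; plugins are loaded at runtime. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* oggdemux and vorbisdec are plugin-provided elements, not
       libraries we link against */
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=sample.ogg ! oggdemux ! vorbisdec ! "
        "audioconvert ! autoaudiosink", NULL);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* block until an error or end-of-stream message arrives */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

Build with gcc play.c -o play $(pkg-config --cflags --libs gstreamer-1.0). If you really do need libogg's API directly (for example, to do your own demuxing), that is when you would write your own element/plugin as described above.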

Real-time video capture C++ library

Can you advise me of any library that can help me capture an RTSP stream from IP cameras? I have already used ffmpeg and OpenCV for this task, but ffmpeg has problems working with AXIS IP cameras, and OpenCV can't give me the compressed data before decompressing it (and I have to keep the data compressed for archiving). I develop on Windows with Qt; if there are ready-made binary libraries, that would be great, because lots of libraries are so complicated to build. Thank you for your help!
Try your hand with the Red5 server. It should give you what you want. Cheers.
Read more at: http://red5wiki.com/wiki/SteamStream

How to convert flv video to mp3 using ffmpeg programmatically?

I need to create a C++ application for video conversion, and I can't use ffmpeg.exe.
I need to do it programmatically, but I don't know how to do it, and I didn't find any examples on the Internet.
Maybe somebody knows something about my task? Thank you.
ffmpeg is an open source project. The transcoding engine used by ffmpeg is in the libraries libavcodec (for the codecs) and libavformat (for the containers). You can write your conversion as calls into these libraries, without the ffmpeg command line application.
Here is a tutorial on using these libraries.
Good luck.
Here's another good ffmpeg tutorial. Looking at the actual source code for ffmpeg would also help.
An updated version of the tutorial source is here.
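To give a rough idea of what those library calls look like, here is an outline (not production code) of an FLV-to-MP3 conversion. Assumptions: the libmp3lame encoder is compiled into your FFmpeg build, the send/receive API (FFmpeg 3.1+) is used, and the file names and 192 kbit/s bitrate are placeholders. Error handling, sample-format conversion with libswresample, and timestamp rescaling are left out, and the decode/encode loop is only sketched in comments because it mirrors the video example earlier on this page.

/* Outline: transcode the audio of an FLV file to MP3 with libavformat/libavcodec. */
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    /* demux the FLV input and open a decoder for its audio stream */
    AVFormatContext *in = NULL;
    avformat_open_input(&in, "input.flv", NULL, NULL);
    avformat_find_stream_info(in, NULL);
    int aidx = av_find_best_stream(in, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);

    const AVCodec *dec = avcodec_find_decoder(in->streams[aidx]->codecpar->codec_id);
    AVCodecContext *dctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(dctx, in->streams[aidx]->codecpar);
    avcodec_open2(dctx, dec, NULL);

    /* set up the MP3 encoder and an MP3 output container */
    const AVCodec *enc = avcodec_find_encoder_by_name("libmp3lame");
    AVCodecContext *ectx = avcodec_alloc_context3(enc);
    ectx->sample_rate    = dctx->sample_rate;
    ectx->sample_fmt     = enc->sample_fmts[0];  /* typically s16p/fltp: resample if needed */
    ectx->channel_layout = AV_CH_LAYOUT_STEREO;  /* on FFmpeg >= 5.1 set ectx->ch_layout */
    ectx->bit_rate       = 192000;               /* example bitrate */
    avcodec_open2(ectx, enc, NULL);

    AVFormatContext *out = NULL;
    avformat_alloc_output_context2(&out, NULL, NULL, "output.mp3");
    AVStream *ost = avformat_new_stream(out, NULL);
    avcodec_parameters_from_context(ost->codecpar, ectx);
    avio_open(&out->pb, "output.mp3", AVIO_FLAG_WRITE);
    avformat_write_header(out, NULL);

    /* main loop (details elided, see the video sketch earlier):
       av_read_frame(in, pkt)                    -> packets of the audio stream
       avcodec_send_packet / avcodec_receive_frame -> decoded PCM frames
       (swr_convert + an audio FIFO here if formats or frame sizes differ)
       avcodec_send_frame / avcodec_receive_packet -> MP3 packets
       av_packet_rescale_ts, then av_interleaved_write_frame(out, pkt)      */

    av_write_trailer(out);
    avio_closep(&out->pb);
    avformat_free_context(out);
    avcodec_free_context(&dctx);
    avcodec_free_context(&ectx);
    avformat_close_input(&in);
    return 0;
}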