Video compression - DirectShow - C++

I'm generating a video (.AVI) that lasts about 1 minute and is roughly 150 MB at 320x240. The size is really big, and I can't upload it efficiently.
After the recording application has finished, how can I compress the video without displaying a window?
I recently installed FFMPEG and with this command:
ffmpeg -i input.avi -s 320x240 -vcodec msmpeg4v2 output.avi
I can get the video down to 3 MB, which is fantastic!
But how can I do this from inside my application?
Ideally it would happen while the application is recording, without installing FFmpeg, rather than afterwards.
I'm now reading http://msdn.microsoft.com/en-us/library/windows/desktop/dd374572(v=vs.85).aspx
Is this the right page to read?

Related

ffmpeg reduce memory consumption

I'm developing an application that captures several IP cameras over RTSP and sends the streams to a server.
Everything goes well while the cameras' resolution is low, but when it increases, my program's memory consumption suddenly goes up.
I've realized that FFmpeg stores a sequence of frames that we can seek back through.
1. Is there any way to reduce the length of that buffer?
2. Is it possible to reduce the frame size when FFmpeg reads frames from the input (the IP camera)? 400x400 is enough for my app, but currently it's 2048.
3. Or is there any other way to reduce memory usage?
1) To reduce memory:
i) Reduce the frame rate: use -framerate
ii) Increase compression by selecting a longer GOP: use -g
2) To scale your input: use the -s switch
ffmpeg -framerate 25 -g 14 -i input.mp4 -vcodec libx264 -s 400x400 http://localhost:1234/out.ffm
Edited:
For integrating ffmpeg to your c++ project, these are some of the solutions:
Use system(ffmpeg command line); // Easy
Use CreateProcess and pipes to hide the console window and show progress in your GUI. // Medium
Use FFmpeg's distributed header files and libraries to integrate it directly into your project. // Steep learning curve

How can I make all frames in my video become i-frames?

I use FFmpeg to encode my video in C++. I need to be able to decode an H.264 frame without relying on any other frames, so every frame in my video must be an I-frame. But I don't know which parameters to set to achieve this. What should I do to make every video frame an I-frame?
ffmpeg -i yourfile -c:v libx264 -x264opts keyint=1 out.mp4
-x264opts keyint=1 sets the keyframe interval to 1 (I believe you can also use -g 1). You'll probably also want to set other rate-control parameters, e.g. -crf 10 (for quality) and -preset veryslow (for speed); see this page.

Create video from rendered OpenGL frames

I am searching for a way to create a video from a series of frames I have rendered with OpenGL and transferred to RAM as an int array using the glGetTexImage function. Is it possible to do this entirely in RAM (~10 s of video), or do I have to save each frame to disk and encode the video afterwards?
I found this sample http://cekirdek.pardus.org.tr/~ismail/ffmpeg-docs/api-example_8c-source.html in another SO question, but is this still the best way to do it?
Today I got a hint to use this example http://git.videolan.org/?p=ffmpeg.git;a=blob;f=doc/examples/decoding_encoding.c;h=cb63294b142f007cf78436c5d81c08a49f3124be;hb=HEAD to see how H.264 encoding can be done. The problem when you want to encode frames rendered by OpenGL is that they are in RGB(A), while most codecs require YUV. For the conversion you can use swscale (part of FFmpeg); an example of how to use it can be found here http://web.me.com/dhoerl/Home/Tech_Blog/Entries/2009/1/22_Revised_avcodec_sample.c.html
As ananthonline stated, encoding the frames directly is very CPU-intensive, but you can also write your frames with FFmpeg in the rawvideo format, which supports the rgb24 pixel format, and convert them offline with FFmpeg's command-line tools.
If you want a cross-platform way of doing it, you'll probably have to use ffmpeg/libavcodec, but on Windows you can write an AVI quite simply using the resources here.

How do I use libavfilter to deinterlace frames in my video player software

I'm using libavformat/libavcodec/libswscale/libavutil/libavfilter (ffmpeg related libraries) to make a video player.
I've run into issues with interlaced videos: the fields are paired incorrectly. It always draws the previous bottom field with the current top field, which produces results I don't want. I've tried tweaking the variables around this, but it just won't work. (I haven't found a player that plays my videos correctly; no, you can't have them, I'm sorry.)
I managed to find a way around this, by re-encoding the video with the following command:
ffmpeg -i video.mp4 -filter:v yadif -vcodec mpeg4 out.avi
Now what I'd need is directions on how to do this in C++ code, inside my video player.
I haven't found any tutorials on the matter, and the ffmpeg.c source code is just too alien to me.
A link to a tutorial would be fine; I just haven't found one.
Edit:
Also this example was worth checking out:
https://github.com/krieger-od/imgs2video/blob/master/imgs2video.c
It's by a gentleman named Andrey Utkin
See doc/examples/filtering.c in the FFmpeg source.

ffmpeg image to video frame length

Hi, I am using FFmpeg to convert images into a video, but the images go by very fast and the movie ends very quickly. How can I make each image stay on screen longer?
I am using this command:
ffmpeg -r 10 -b 1800 -i %03d.jpg -vframes 100 abc.avi
Lower the -r value; you currently have it set to 10 frames per second.