Hi, I am using ffmpeg to convert images into a video, but the images go by very fast and the movie ends very quickly. How can I add a delay between the images?
I am using this command:
ffmpeg -r 10 -b 1800 -i %03d.jpg -vframes 100 abc.avi
Lower the -r value; you currently have it set to 10 frames per second.
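For example, if you want each image to stay on screen for one second, set the input rate to 1 fps (the filename pattern and bitrate below are carried over from your command):
ffmpeg -r 1 -b 1800 -i %03d.jpg -vframes 100 abc.avi
With -r 1 placed before -i, every image is read at one frame per second, so 100 images make a movie of about 100 seconds.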
I'm developing an application that captures several IP cameras over RTSP and sends the streams to a server.
Everything works well when the camera resolution is low, but when it is increased, the memory consumption of my program rises sharply.
I've realized that ffmpeg stores a sequence of frames which we can seek back to.
1. Is there any way to reduce the length of that buffer?
2. Is it possible to reduce the frame size when ffmpeg reads the frames from the input (the IP camera)? 400x400 is enough for my app, but currently it's 2048.
3. Or is there any other way to help me reduce memory usage?
1) To reduce memory:
i) Reduce the frame rate: use -framerate
ii) Increase compression by selecting a longer GOP: use -g
2) To scale your input: use the -s switch
ffmpeg -framerate 25 -g 14 -i input.mp4 -vcodec libx264 -s 400x400 http://localhost:1234/out.ffm
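For an RTSP camera like the one in the question, the same idea might look like this sketch (the URL and output name are placeholders; -rtsp_transport tcp is an optional extra that can reduce packet loss compared to UDP):
ffmpeg -rtsp_transport tcp -i rtsp://camera-ip/stream -g 14 -s 400x400 -vcodec libx264 output.mp4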
Edit:
For integrating ffmpeg into your C++ project, these are some of the options:
Use system("<ffmpeg command line>"); // Easy
Use CreateProcess and pipes to hide the console window and show progress in your GUI. // Medium
Use the include files and libraries distributed with ffmpeg to integrate it into your project. // Steep learning curve
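A minimal sketch of the second option (Windows-only; it assumes ffmpeg is on the PATH, the command line is only an example, and error handling is omitted):

#include <windows.h>

int main() {
    // Command to run; CreateProcess may modify this buffer, so it must be writable
    char cmd[] = "ffmpeg -i input.mp4 -s 400x400 output.mp4";

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    // CREATE_NO_WINDOW hides the console window of the child process
    if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, CREATE_NO_WINDOW,
                       NULL, NULL, &si, &pi)) {
        WaitForSingleObject(pi.hProcess, INFINITE); // wait for ffmpeg to finish
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }
    return 0;
}

To show progress in your GUI you would additionally redirect the child's stderr through a pipe, since that is where ffmpeg writes its progress output.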
Context
I used a C++ program to write raw bytes to a file (image.raw) in RGB32 format:
R G B A R G B A R G B A ...
and I want to be able to view it in some way. I have the dimensions of the image.
My tools are limited to the command line (e.g. ffmpeg). I have visited the ffmpeg website for instructions, but it deals more with converting videos to images.
Questions
Is it possible to turn this file into a viewable file type (e.g. .jpeg, .png) using ffmpeg? If so, how would I do it?
If it's not possible, is there another command I can use?
If that's still not viable, is there any way I can manipulate the RGB32 bytes inside a C++ program to make them more suitable, without the use of external libraries? I also don't want to encode .jpeg myself.
Use the rawvideo demuxer:
ffmpeg -f rawvideo -pixel_format rgba -video_size 320x240 -i input.raw output.png
Since there is no header specifying the assumed video parameters, you must specify them, as shown above, in order to decode the data correctly.
See ffmpeg -pix_fmts for a list of supported input pixel formats which may help you choose the appropriate -pixel_format.
Get a single frame from raw RGBA data:
ffmpeg -y -f rawvideo -pix_fmt rgba -ss 00:01 -r 1 -s 320x240 -i input.raw -frames:v 1 output.png
-y overwrite output
-r input framerate (placed before -i)
-ss skip to this time
-frames:v number of frames to output
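For reference, here is a minimal C++ sketch of the other side of the workflow: it writes a 320x240 RGBA file, with no external libraries, that the commands above can convert (the gradient is just an arbitrary test pattern; the name input.raw matches the examples):

#include <cstdio>

int main() {
    const int width = 320, height = 240;
    FILE* f = std::fopen("input.raw", "wb");
    if (!f) return 1;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // One pixel: R G B A, one byte each
            unsigned char pixel[4] = {
                (unsigned char)(x * 255 / width),   // R ramps left to right
                (unsigned char)(y * 255 / height),  // G ramps top to bottom
                128,                                // constant B
                255                                 // fully opaque
            };
            std::fwrite(pixel, 1, 4, f);
        }
    }
    std::fclose(f);
    return 0;
}

Running the first command above on the resulting file should produce a visible gradient.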
Now I use ffmpeg to encode my video in C++. I need to be able to decode an H.264 frame without any other frames, so I need to make every frame in my video an I-frame. But I don't know which parameters to set to do this. How can I make all of my video's frames I-frames?
ffmpeg -i yourfile -c:v libx264 -x264opts keyint=1 out.mp4
-x264opts keyint=1 sets the keyframe interval to 1 (I believe you can also use -g 1). You probably also want to set other rate-control parameters, e.g. -crf 10 (for quality) and -preset veryslow (for speed); see this page.
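Putting that together, a complete command might look like this (the -crf and -preset values are just the examples above; tune them to your needs):
ffmpeg -i yourfile -c:v libx264 -x264opts keyint=1 -crf 10 -preset veryslow out.mp4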
I had the command for exporting a video stream to an MPEG file working correctly with the following code:
ffmpeg -r 24 -pix_fmt rgba -s 1280x720 -f rawvideo -y -i - -vf vflip -vcodec mpeg1video -qscale 4 -bufsize 500KB -maxrate 5000KB OUTPUT_FILE
Now I want to add options so that audio can be used as well, since there's none right now.
I've edited the previous command into the following one:
ffmpeg -r 24 -pix_fmt rgba -s 1280x720 -f rawvideo -y -i - -f s16le -ac 1 -ar 44100 -i - -acodec pcm_s16le -ac 1 -b:a 320k -ar 44100 -vf vflip -vcodec mpeg1video -qscale 4 -bufsize 500KB -maxrate 5000KB OUTPUT_FILE
So, as you can see, I added a new input with the settings for the audio I'm going to feed in (I'm going to test this with the values of a sine wave).
I'm writing the data to the file like this:
// Write a frame to the ffmpeg stream
fwrite(frame, sizeof(unsigned char*) * frameWidth * frameHeight, 1, ffmpeg);
// Write multiple sound samples per written frame
for (int t = 0; t < 44100/24; ++t)
    fwrite(&audio, sizeof(short int), 1, ffmpeg);
The first line is the one that writes only the video (where the frame object is a render-to-texture of the video I'm inputting).
After that I'm trying to add the audio. I'm using a for loop so I can write multiple samples per video frame (because otherwise you would only have 24 audio samples per second).
This does render, but with a couple of issues:
The rendered video shows green flashes.
The video slides across the screen. For example, if it slides 200 pixels to the right, those pixels get rendered on the other side. A bit of the frame that should be at the bottom is also rendered at the top (so the frame slides down as well, but that offset is constant; it doesn't move over time).
I can't figure out where my mistake is. I've tried multiple codecs and different orders for the options, but it stays the same or gets worse.
Thanks in advance
I'm generating a video (.AVI) that lasts about 1 minute and is about 150 MB in size at 320x240. The size is really big, and I can't upload it efficiently.
How could I compress the video, without displaying a window, after the recording application has finished?
I recently installed FFmpeg, and with this command:
ffmpeg -i input.avi -s 320x240 -vcodec msmpeg4v2 output.avi
I can take the video down to 3 MB! I must say: fantastic!
But... how could I do this from inside my application?
It would really be better to do this while the application is recording, without installing ffmpeg, rather than afterwards.
I'm now reading http://msdn.microsoft.com/en-us/library/windows/desktop/dd374572(v=vs.85).aspx
Is this the right page to read?