I'm developing an application that captures streams from IP cameras over RTSP and sends them to a server.
Everything works well when the cameras' resolution is low, but when it is increased, my program's memory consumption suddenly goes up.
I've realized that ffmpeg stores a sequence of frames that we can seek within.
1. Is there any way to reduce the length of that buffer?
2. Is it possible to reduce the frame size when ffmpeg reads frames from the input (the IP camera)? 400x400 is enough for my app, but currently it's 2048.
3. Or is there any other way to help me reduce memory usage?
1) To reduce memory:
i) Reduce the frame rate: use -r on the output (-framerate only sets the input rate)
ii) Increase compression by selecting a longer GOP: use -g
2) To scale your input: use the -s switch
ffmpeg -i input.mp4 -r 25 -vcodec libx264 -g 14 -s 400x400 http://localhost:1234/out.ffm
Edited:
For integrating ffmpeg into your C++ project, these are some of the solutions:
Using system("ffmpeg <args>"); // Easy
Use CreateProcess and pipes to hide the console window and show progress in your GUI. // Medium
Use the ffmpeg development headers and libraries (libavcodec and friends) to integrate it into your project. // Steep learning curve
I am using a Basler camera and want to write the images I grab at 1000x1000 px to disk as fast as possible.
Keeping a movie container open and saving as .avi doesn't really help (it slows down 2x), and saving single images is even slower.
I am currently using OpenCV's cvVideoCreator and imwrite functions.
My computer should be fast enough (Intel Xeon 3.6 GHz, 64GB RAM, SSD).
Any ideas?
There are multiple factors (steps) you may want to consider in order to reach your goal (saving color images (1K x 1K x 3) at 50 FPS, i.e., 150 MB/s):
Image encoding: most well-known image formats such as PNG, TIF, and JPG take time to encode the image (e.g., 5.7 ms for PNG and 11.5 ms for TIF in OpenCV with a 3.6 GHz 4-core CPU) even before the encoded data is saved to disk.
File opening: independent of file size, this takes time (e.g., 1.5 ms on a 3.5" WD Black).
File writing: dependent on file size, this takes time (e.g., 2.0 ms on a 3.5" WD Black).
File closing: dependent on file size, this can take a lot of time (e.g., 14.0 ms on a 3.5" WD Black).
This means you have to finish all of these steps within 20 ms per image to reach your goal, and as the timings above suggest, you may not be able to achieve that with OpenCV imwrite, because the library does all of the steps sequentially in a single thread.
I think you have a couple of options:
imwrite in BMP format to an SSD, as BMP encoding time is virtually zero (e.g., less than 0.3 ms);
do some of the steps above (e.g., encoding or file closing) in a separate thread.
File closing in particular is a good candidate to run in a separate thread, because it can be done asynchronously with the other steps. I was able to reach 400 MB/s of saving bandwidth with the second option, the BMP file format, and a better hard disk.
Hope this helps you and others with similar goals.
The specs you state in your question relate to your ability to process and buffer the data, but not to the speed at which you can dump it to disk.
You're trying to write (some numbers assumed; just replace them with your own)
1000 (width) * 1000 (height) * 4 (bytes/pixel) * 25 (frames/s) bytes per second,
or about 100 MB/s.
This is right around the limit of a traditional HDD, and if the disk is at all fragmented or full it's unlikely to keep up.
As a result you must find a way to either speed up your writes (switch to an SSD, for example), reduce the data being written (compression, or a reduction in colour depth / quality / frame rate), or buffer what you want to write while a background thread saves it to disk.
The question you must ask is how long you plan to record for. If it's not long enough to fill up your RAM, then all the options are available to you. If, however, you plan to record for extended periods of time, you will have to pick one of the other two.
I am trying to use OpenCV to read and display a 4K video file. The same program, a very simple one shown in Appendix A, works fine when displaying 1080p videos, but there is noticeable lag when upgrading to the 4K video.
Obviously there are now 4x as many pixels in every operation.
Now, I am generally running on a PC with modest specifications: integrated graphics, 4 GB RAM, an i3 CPU and an HDD (not SSD). I have tested this on a PC with 8 GB RAM, an i5 and an SSD, and although 3.x GB of RAM is used, it seems to be mainly a CPU-bound program, maxing all my cores out at 100% even on the better PC.
My questions are: (to make this post specific)
Is this something that would be helped by using the GPU operations?
Is this a problem that would be solved by upgrading to a PC with a better CPU? Practically this application can only be run on an i7 as I don't imagine we are going to be buying a server CPU...
Is it the drawing to the screen operation or simply reading it from the disk that is causing the slow down?
If anyone has any past experience on using 4K with OpenCV that would also be useful information.
Appendix A
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

// m_selected_video and _continue come from the surrounding application.
int main()
{
    VideoCapture cap(m_selected_video);
    if (!cap.isOpened()) // check if we succeeded
    {
        std::cout << "Video ERROR";
        return -1;
    }
    while (_continue)
    {
        Mat window1;
        cap >> window1; // grab the next frame from the file
        imshow("Window1", window1);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
The answer to this question is an interesting one, but I think it boils down to the codecs or encoding of the videos.
The first video I was using was this one (although that might not be the exact download I used), which doesn't seem to play very well in VLC or OpenCV but plays fine in Windows Media Player. I think this is because it is encoded in MPEG AAC Audio.
I then downloaded an Elysium 4K trailer, which is H.264 encoded and seems to work fine in both VLC & OpenCV. So hooray, 4K isn't a problem overall in OpenCV!
So I thought it might be file sizes. I paid for and downloaded a 7 GB, 6-minute 4K video. This plays fine in both OpenCV & VLC with no lag, even when drawing it to the screen three times. This is a .mov file and I don't currently have access to the codec (I'll update this bit when I do).
So, TL;DR: it's not file sizes or container types that cause issues, but there does seem to be an issue with certain codecs. This is only a small exploration and there might be other issues at play.
Addendum: thanks to cornstalks in the comments, who pointed out that WMP might have built-in GPU support and recommended doing any testing in VLC, which was very helpful.
I am using ffmpeg to encode my video in C++. I need to be able to decode an H.264 frame without any other frames, so I need to make every frame in my video an I-frame. But I don't know which parameters to set to achieve this. What should I do to make all my video frames I-frames?
ffmpeg -i yourfile -c:v libx264 -x264opts keyint=1 out.mp4
-x264opts keyint=1 sets the keyframe interval to 1 (I believe you can also use -g 1). You probably also want to set rate-control parameters, e.g. -crf 10 (for quality) and -preset veryslow (better compression at the cost of encoding speed); see this page.
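If you are driving libx264 through the libavcodec API from C++ rather than the command line, the equivalent all-intra settings go on the codec context before you open the encoder (fragment only, not a complete program; ctx stands for your already-allocated AVCodecContext for the libx264 encoder):

```cpp
// Fragment: configure ctx (an AVCodecContext*) before avcodec_open2().
ctx->gop_size = 1;      // keyframe interval 1 => every frame is an I-frame
ctx->max_b_frames = 0;  // no B-frames, so no frame references another
```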
I am a student currently working on my final project. Our project focuses on research into a new type of network coding. My task now is to do a real-time video transmission to test the network coding. I have learned some ffmpeg and OpenCV and have finished a C++ program which divides the video into frames and sends it frame by frame. However, done this way, the transmitted data (the frames) is much larger than the original video file. My professor advised me to find the keyframes and the inter-frame differences of the video (MJPEG format), so that I transmit only the keyframes and inter-frame diffs instead of all the frames with their large amount of redundancy, and thereby reduce the transmitted data. I have no idea how to do this in C++ with ffmpeg or OpenCV. Can anyone give any advice?
For my old program, please refer here: C++ Video streaming and transmission
I would recommend against using ffmpeg/libav* at all here. I would recommend using libx264 directly. By using x264 you can have greater control over NALU slice sizes, as well as lower encoder latency by utilizing callbacks.
Two questions which may already help you:
How are you interfacing with ffmpeg from C++? "ffmpeg" generally refers to the command-line tool; from C++ you generally use the individual libraries that are part of ffmpeg. You should use libavcodec to encode your frames and possibly libavformat to packetize them into a container format.
Which codec do you use?
I'm generating a video (.avi) that lasts about 1 minute and is about 150 MB at 320x240. That is really big, and I can't upload it efficiently.
After the recording application has finished, how can I compress the video without displaying a window?
I recently installed FFMPEG and with this command:
ffmpeg -i input.avi -s 320x240 -vcodec msmpeg4v2 output.avi
I can take the video down to 3 MB! I must say, fantastic!
But... how can I do this from inside my application?
It would really be better to do this while the application is recording, without installing ffmpeg, rather than afterwards.
I'm now reading http://msdn.microsoft.com/en-us/library/windows/desktop/dd374572(v=vs.85).aspx
Is this the right page to read?