OpenCV and 4K Videos - C++

I am trying to use OpenCV to read and display a 4K video file. The same program, a very simple one shown in Appendix A, works fine when displaying 1080p videos, but there is noticeable lag when upgrading to the 4K video.
Obviously there are now 4x more pixels in every operation.
I am generally running this on a PC with modest specifications: integrated graphics, 4 GB of RAM, an i3 CPU and an HDD (not an SSD). I have also tested it on a PC with 8 GB of RAM, an i5 and an SSD; although 3.X GB of RAM is used, it seems to be a mainly CPU-intensive program and maxes out all my cores at 100%, even on the better PC.
My questions are (to make this post specific):
Is this something that would be helped by using the GPU operations?
Is this a problem that would be solved by upgrading to a PC with a better CPU? Practically, this application can only be run on at most an i7, as I don't imagine we are going to be buying a server CPU...
Is it the drawing-to-screen operation or simply reading the file from disk that is causing the slowdown?
If anyone has any past experience of using 4K with OpenCV, that would also be useful information.
Appendix A
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main()
{
    VideoCapture cap(m_selected_video);
    if (!cap.isOpened()) // check if we succeeded
    {
        std::cout << "Video ERROR";
        return -1;
    }
    while (_continue)
    {
        Mat window1;
        cap >> window1; // get a new frame from the video
        if (window1.empty()) break; // stop at end of stream
        imshow("Window1", window1);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
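One way to answer the third question is to time the two stages of the loop above separately. A minimal chrono-based sketch; the OpenCV calls are only indicated in comments, since they depend on an actual video file:

```cpp
#include <chrono>

// Time a single call to any callable and return the elapsed milliseconds.
template <class F>
double timeMs(F&& stage)
{
    auto t0 = std::chrono::steady_clock::now();
    stage();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

// Inside the playback loop you would then do something like:
//   double readMs = timeMs([&] { cap >> window1; });  // disk read + decode
//   double drawMs = timeMs([&] { imshow("Window1", window1); });
// If readMs dominates, the bottleneck is decoding/IO; if drawMs does,
// it is the drawing-to-screen path.
```

Printing the two averages over a few hundred frames should make it obvious which side to attack first.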

The answer to this question is an interesting one, but I think it boils down to the codecs or encoding of the videos.
The first video I was using was this one (although that might not be the exact download I used), which doesn't seem to play very well in VLC or OpenCV but plays fine in Windows Media Player. I think this is because it is encoded as MPEG AAC Audio.
I then downloaded an Elysium 4K trailer, which is H.264-encoded and seems to work fine in both VLC and OpenCV. So hooray, 4K isn't a problem in OpenCV overall!
So I thought it might be file sizes. I paid for and downloaded a 7 GB, 6-minute 4K video. This plays fine in both OpenCV and VLC with no lag, even when drawing it to the screen three times. This is a .mov file and I don't currently have access to the codec (I'll update this bit when I do).
So, TL;DR: it's not file sizes or container types that cause issues, but there does seem to be a problem with certain codecs. This is only a small exploration and there might be other issues.
Addendum: thanks to Cornstalks in the comments, who pointed out that WMP might have built-in GPU support and suggested doing any testing in VLC, which was very helpful.
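To check which codec OpenCV itself sees for a given file, the FourCC can be read from the capture and unpacked. A small sketch; the helper name is my own, and the cap.get() call is shown only in a comment since it needs an opened file:

```cpp
#include <string>

// Unpack the little-endian packed FourCC integer that
// VideoCapture::get(CAP_PROP_FOURCC) returns into a 4-character string.
std::string fourccToString(int fourcc)
{
    std::string s(4, ' ');
    for (int i = 0; i < 4; ++i)
        s[i] = static_cast<char>((fourcc >> (8 * i)) & 0xFF);
    return s;
}

// With an opened VideoCapture you would call:
//   int fourcc = static_cast<int>(cap.get(cv::CAP_PROP_FOURCC));
//   std::cout << "codec: " << fourccToString(fourcc) << "\n";
```

That makes it easy to confirm whether the well-behaved files really are H.264 and what the problematic ones actually use.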

Related

phytec phyBOARD iMX-6 performed poorly when running qt5 opengles application from flash instead of sd card (fps halved)

I'm developing a graphics application (a racing game) on a phytec phyBOARD iMX-6, with Qt 5.9 and OpenGL ES 2. I create the OpenGL context through Qt modules. My problem is that my game gets 40 fps when running from the SD card, but only 20 fps when running from flash. Why is the OpenGL ES frame rate so low on flash? The operating systems on the flash and the SD card are identical.
My first thought was that the performance decreased due to the low read/write speed of the flash. But my game only reads data from disk during the boot phase; in the remaining stages it exchanges data with the disk in a very limited way. Therefore, it isn't very likely that the low performance is caused by disk read/write speeds.
Have you ever encountered a problem like this, where the OpenGL ES frame rate is low when the application runs from flash? Perhaps a similar solution could help me.
I managed to solve it by pure luck. I added the line
PREFERRED_VERSION_mesa = "git"
to the local.conf file, and now I get the same fps on flash (40 fps) and on the SD card (40 fps).

QOpenGLWidget video rendering performance in multiple processes

My problem may seem vague without code, but it actually isn't.
So, I've got an almost properly working widget which renders video frames.
Qt 5.10 and QOpenGLWidget subclassing worked fine; I didn't make any sophisticated optimizations: there are two textures and a couple of shaders converting the YUV pixel format to RGB (glTexImage2D() + shaders, no buffers).
Video frames are obtained from FFmpeg, which shows great performance thanks to hardware acceleration... when there is only one video window.
The piece of software is a "video wall": multiple independent video windows on the same screen. Of course, multi-threading would be the preferred solution, but legacy code holds for now and I can't change it.
So, one window with Full HD video consumes ~2% CPU and 8-10% GPU regardless of the size of the window. But 7-10 similar windows, launched from the same executable at the same time, consume almost all the CPU. My math says that 2 x 8 != 100...
My best guesses are:
This is an FFmpeg decoder issue: hardware acceleration still isn't magic, and some hardware pipeline stalls.
7-9 independent OpenGL contexts cost a lot more than 1 context x N.
I'm not using PBOs (pixel buffer objects) or other techniques to improve OpenGL upload performance. That still explains nothing, but at least it is a guess.
The behavior is the same on Ubuntu, where decoding uses a different codec (I mean that using GPU-accelerated or CPU-accelerated codecs makes no difference!), which makes it more probable that I'm missing something about OpenGL... or not, because launching 6-7 Qt examples with dynamic textures shows normal growth of CPU usage: approximately the single-window figure multiplied by the number of windows.
Anyway, it is becoming quite tricky for me to profile this case, so I hope someone has solved a similar problem before and can share their experience. I'd appreciate any ideas on how to deal with the described riddle.
I can add any pieces of code if that helps.

Too high bandwidth when capturing multiple still images from multiple webcams with OpenCV

I'm working on a project in which many webcams are used to capture one still image each, using OpenCV in C++.
As in other questions, multiple HD webcams may use too much bandwidth and exceed the USB limit.
Unlike the others, what I need is only a still image (a single frame) from each webcam. Let's say I have 15 webcams connected to the PC, and every 10 seconds I would like to get the still images (one image per webcam, 15 images in total) within 5 seconds. The images are then analysed and a result is sent to an Arduino.
Approach 1: Open all webcams all the time and capture images every 10 seconds.
Problem: The bandwidth of USB is not enough.
Approach 2: Open only one webcam at a time, capture from it, then close it and open the next one.
Problem: Switching from one webcam to the next takes at least 5 seconds per switch.
What I need is only a single frame of an image from each webcam and not a video.
Are there any suggestions for this problem, besides load balancing the USB bus and adding USB PCI cards?
Thank you.
In OpenCV you deal with a webcam as a stream, which means you have to run it as video. However, I think this kind of problem should be solved using the webcam's own API, if one is available: there should be one way or another to take a still image and return it to your program as data. So you may want to search the camera manufacturer's website for this.
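As a sanity check on Approach 2, the figures quoted above (15 cameras, a 5-second window, roughly 5 seconds per device switch) can be plugged into a quick back-of-the-envelope calculation; the helper function is purely illustrative:

```cpp
// Rough feasibility check for sequential open/capture/close: the total
// per-camera cost (switching plus capture) must fit inside the window.
bool fitsWindow(int cameras, double secondsPerCamera, double windowSeconds)
{
    return cameras * secondsPerCamera <= windowSeconds;
}

// With the numbers from the question: 15 cameras at ~5 s each need ~75 s,
// far beyond the 5 s budget, so Approach 2 cannot work without either much
// faster device switching or several cameras opened in parallel.
```

The switching overhead, not the capture itself, is what rules the sequential approach out here.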

on-line recording with ffmpeg

Is this possible? Has anyone tried to do on-line recording of audio and video (of the screen) with ffmpeg? I have read everything Google can find about ffmpeg on the net. The recording variant I tried loads the CPU to 100%, but it still can't convert frames at a speed matching the rate at which they are recorded; the audio comes out fine, but the video loses frames.
Recording audio/video of the screen is possible with ffmpeg. People do this for the purposes of screen casting. Performance of this depends on the hardware in use, the codecs used and various other factors.
See this post (or this one) for some further advice and command line use.
This pretty much depends on the codec used, the frame size/complexity and obviously the capabilities of the computer doing the compression. You can try a low complexity codec like MJPEG, which might improve your experience.

encoding camera with audio source in realtime with WMAsfWriter - jitter problem

I built a DirectShow graph consisting of my video capture filter (grabbing the screen) and the default audio input filter, both connected through a splitter to the WM ASF Writer output filter and to a VMR9 renderer. This means I want real-time audio/video encoding to disk together with a preview. The problem is that no matter what WM profile I choose (even a very low resolution profile), the output video file always "jitters": every few frames there is a delay. The audio is fine; there is no jitter in the audio. The CPU usage is low (< 10%), so I believe this is not a problem of lack of CPU resources. I think I'm time-stamping my frames correctly.
What could be the reason?
Below is a link to a recorded video demonstrating the problem:
http://www.youtube.com/watch?v=b71iK-wG0zU
Thanks
Dominik Tomczak
I have had this problem in the past. Your problem is the volume of data being written to disk; writing to a faster drive is a simple and effective solution. The other thing I've done is place a video compressor into the graph. You need to make sure both input streams are using the same reference clock. I have had a lot of problems using this compressor scheme while keeping a good preview: my preview's frame rate dies even if I use an Infinite Tee rather than a Smart Tee, although the result written to disk is fine. It's also worth noting that the more powerful the machine I ran it on, the less of an issue this was, so it may not actually provide much of a win over sticking a new, faster hard disk in the machine.
I don't think that is the issue: the volume of data written is less than 1 MB/s (the average during encoding, given the compression ratio). I found the reason: when I build the graph without audio input (the WM ASF Writer has only a video input pin) and my video capture pin is connected through a Smart Tee to the preview pin and to the WM ASF Writer video input pin, there is no glitch in the output movie. I reckon the problem is audio-to-video synchronization in my graph. The same happens when I build the graph in GraphEdit: without audio, no glitch; with audio, there is a constant glitch every 1 s. I wonder whether I time-stamp my frames wrongly, but I think I'm doing it correctly. What is the general solution for audio-to-video synchronization in DirectShow graphs?