I have a relatively simple application, running on OS X El Capitan, that currently uses OpenCV to grab an image from a camera with cv::VideoCapture and view the resulting image in a window with imshow().
In between I'm doing some basic image modification but this is not crucial to my problem.
Since the GUI implemented by OpenCV is pretty basic, I decided to redo it using wxWidgets. I got it running basically like the implementation linked in the tutorial section of wxWidgets. (I updated it to C++11 etc., but the idea is pretty much identical; the code is on GitHub.)
Now here's my problem: in the best case I get half the framerate of the OpenCV-only solution. OpenCV uses Qt underneath, but when I look into the stack trace it comes down to similar function calls into CoreGraphics.
So my question boils down to: what is the best way to draw an image to a window at a framerate > 20 fps using wxWidgets on OS X? Currently I use the DrawBitmap() function.
Bonus question: when the window is on my MacBook's internal Retina screen the framerate gets even worse. Is there any preprocessing/scaling I should do on the picture to take load off the GUI process?
The fastest is probably to use OpenGL (although I'm less sure about this under OS X, which is not very OpenGL-friendly AFAIK), but I'm not really sure the bottleneck is actually DrawBitmap(); it could be the code doing the conversion to wxBitmap in the first place: if you don't use raw bitmap access, it could be quite slow.
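If the conversion is indeed the problem, here is a minimal sketch of the raw bitmap access route using wxNativePixelData, assuming a tightly packed 24-bit BGR frame (the layout cv::VideoCapture typically delivers); the function name and buffer layout are assumptions, not taken from the linked code:

```cpp
// Minimal sketch: fill a wxBitmap from a packed BGR buffer via raw bitmap
// access, avoiding a per-pixel trip through wxImage.
#include <wx/wx.h>
#include <wx/rawbmp.h>

wxBitmap MakeBitmapFromBGR(const unsigned char* frame, int w, int h)
{
    wxBitmap bmp(w, h, 24);
    wxNativePixelData data(bmp);
    if (!data)
        return bmp;                        // raw access unavailable for this bitmap

    wxNativePixelData::Iterator p(data);
    for (int y = 0; y < h; ++y)
    {
        wxNativePixelData::Iterator rowStart = p;
        for (int x = 0; x < w; ++x, ++p)
        {
            const unsigned char* px = frame + (y * w + x) * 3;
            p.Red()   = px[2];             // OpenCV frames are BGR, wx wants RGB
            p.Green() = px[1];
            p.Blue()  = px[0];
        }
        p = rowStart;
        p.OffsetY(data, 1);
    }
    return bmp;
}
```

If DrawBitmap() itself turns out to be the limit, the usual next step is a wxGLCanvas that uploads the frame as a texture and draws a quad.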
While this seems like a basic question, I'm opening this thread after extensive Stack Overflow and Google searching, which helped, but not in a definitive way.
My C++ code draws into an array used as an RGB framebuffer. Hardware acceleration is not possible for this original graphics algorithm, and I want my software to generate the images directly (no hardware acceleration), also for portability reasons.
However, I would like to have hardware acceleration to copy my C++ array into the window.
In the past I wrote the images generated by my code to BMP files; now I want to display them as efficiently as possible, in real time, in the window on screen.
I wrote a Win32 program that has a window freely resizeable by the user, so far so good.
What is the best / fastest / most efficient way to blit/copy my system-memory RGB array (possibly v-synced; I can adapt my graphics generator to any RGB format, so conversions can be avoided) into a window that can change size at any moment? (I am handling resizing via WM messages, and I'm not asking how to resize/rescale the image array; I will do that myself. I mention the resizable window only to say that its size cannot be fixed after creation; more precisely, I will simply reallocate the RGB array and generate a new image when the user changes the window's size.)
NOTE: the image array should be allocated by my own program, but if having a system pointer will make things much more efficient (e.g. directly in video RAM) then it's ok.
Windows 10 is the target OS, but backward compatibility down to at least Windows 7 would be better. The hardware is PCs, even low-end ones, released in the last 5-10 years, so I guess they will all have GDI hardware acceleration or something like Direct2D (or whatever DirectDraw is called nowadays).
NOTE: please do NOT propose the use of any library; I will have to deal directly with GDI calls, Direct2D, or whatever is most efficient for the task, without using third-party libraries.
Thank you very much. I don't know much about Windows GUI coding; until now I have done console-mode programs only, plus some windows (so I am familiar with WM messages) and basic DeviceContext graphics output. From my research (I wanted to ask here because I'm really in doubt, and many of the posts I've read date back to 2010), SetDIBitsToDevice seems to be the best solution to my problem, but if so I think I would still need some way to synchronize with vsync to avoid, if possible, the annoying tearing/flickering.
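For what it's worth, a minimal sketch of that route, called from the WM_PAINT handler and assuming a top-down 32-bit BGRA buffer allocated by the program (pixels, width and height are placeholder names):

```cpp
// Minimal sketch: blit a system-memory, top-down 32-bit BGRA buffer into the
// window's client area with StretchDIBits(). pixels/width/height are assumed
// to be managed elsewhere by the program.
#include <windows.h>

void PaintFrame(HWND hwnd, const void* pixels, int width, int height)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;      // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    RECT rc;
    GetClientRect(hwnd, &rc);
    StretchDIBits(hdc,
                  0, 0, rc.right, rc.bottom,    // destination: whole client area
                  0, 0, width, height,          // source: whole buffer
                  pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);

    EndPaint(hwnd, &ps);
}
```

On a composited desktop (Windows Vista and later with DWM enabled) plain GDI blits are generally not torn the way exclusive-mode drawing was; if you still want to pace output to the refresh, DwmFlush() from dwmapi.h blocks until the next composition pass, which is usually enough.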
I am using OpenCV to show an image on a projector, but it seems cv::imshow is not fast enough, or maybe the data transfer from my CPU to the GPU and then to the projector is slow, so I wonder if there is a faster way to display than OpenCV?
I considered OpenGL: since OpenGL uses the GPU directly, the drawing commands may be faster than the CPU path that OpenCV uses. Correct me if I am wrong.
OpenCV already supports OpenGL for image output by itself. No need to write this yourself!
See the documentation:
http://docs.opencv.org/modules/highgui/doc/user_interface.html#imshow
http://docs.opencv.org/modules/highgui/doc/user_interface.html#namedwindow
Create the window first with namedWindow, where you can pass the WINDOW_OPENGL flag.
Then you can even use OpenGL buffers or GPU matrices as input to imshow (the data never leaves the GPU). But it will also use OpenGL to show regular matrix data.
Please note:
To enable OpenGL support, configure OpenCV using CMake with WITH_OPENGL=ON. Currently OpenGL is supported only with WIN32, GTK and Qt backends on Windows and Linux (MacOS and Android are not supported). For GTK backend gtkglext-1.0 library is required.
Note that this is OpenCV 2.4.8 and this functionality has changed quite recently. I know there was OpenGL support in earlier versions in conjunction with the Qt backend, but I don't remember when it was introduced.
About the performance: It is a quite popular optimization in the CV community to output images using OpenGL, especially when outputting video sequences.
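For reference, a minimal sketch of that route (assuming OpenCV was built with WITH_OPENGL=ON; with the GPU/CUDA module you could pass a GpuMat instead of a cv::Mat and skip the upload):

```cpp
// Minimal sketch: show frames through an OpenGL-backed HighGUI window.
// Requires OpenCV configured with WITH_OPENGL=ON.
#include <opencv2/opencv.hpp>

int main()
{
    cv::namedWindow("frame", cv::WINDOW_OPENGL);  // OpenGL rendering backend
    cv::VideoCapture cap(0);
    cv::Mat img;
    while (cap.read(img))
    {
        cv::imshow("frame", img);                 // drawn as an OpenGL texture
        if (cv::waitKey(1) == 27)                 // Esc to quit
            break;
    }
    return 0;
}
```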
OpenGL is optimised for rendering images, so it's likely faster. It really depends on whether the OpenCV implementation uses any GPU acceleration AND whether the bottleneck is on the rendering side of things.
Have you tried GPU accelerated OpenCV? - http://opencv.org/platforms/cuda.html
How big is the image you are displaying? How long does it take to display the image using cv::imshow now?
I know it's an old question, but I happened to have exactly the same problem. And from my observations I've concluded that the root of the problem is the projector's own latency, especially if one is using an older model.
How have I concluded it?
I displayed the same video sequence with cv::imshow() on the laptop monitor and on the projector. Then I waved my hand. It was obvious that the projector introduces significant latency.
To double-check, I opened a webcam video, waved my hand in front of it and observed the difference between the monitor and the projector. A webcam feed involves no processing and no OpenCV operations, so in my understanding the only thing that could explain the latency is the projector itself.
I need to smoothly zoom in/out and pan an image (for a slideshow) with the sdl library.
I've tried rotozoom from the SDL_gfx library, but it's far from smooth.
What would be the best approach here?
If "vanilla" SDL seems too slow for you I'd suggest switching your drawing code to OpenGL. That way you'll get hardwareaccelleration and your issues should go away:)
Using OpenGL coupled with SDL is very easy and there are lots of tutorials on how to do it.
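As a concrete illustration, here is a sketch of the same idea using SDL2's accelerated renderer rather than raw OpenGL (with SDL 1.2 you would go through OpenGL as described above; slide.bmp and the zoom/pan parameters are placeholders):

```cpp
// Minimal sketch: smooth zoom/pan by copying a shrinking/moving source
// rectangle of a texture to the whole window; the renderer scales it on the GPU.
#include <SDL.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window*   win = SDL_CreateWindow("slideshow", SDL_WINDOWPOS_CENTERED,
                                         SDL_WINDOWPOS_CENTERED, 800, 600, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1,
                                           SDL_RENDERER_ACCELERATED |
                                           SDL_RENDERER_PRESENTVSYNC);
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "1");  // linear filtering for smooth scaling

    SDL_Surface* surf = SDL_LoadBMP("slide.bmp");     // placeholder file name
    SDL_Texture* tex  = SDL_CreateTextureFromSurface(ren, surf);
    int texW = surf->w, texH = surf->h;
    SDL_FreeSurface(surf);

    for (int frame = 0; frame < 300; ++frame)         // ~5 s at 60 Hz
    {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {}                  // keep the window responsive

        // Zoom by shrinking the source rectangle, pan by moving it.
        double zoom = 1.0 + frame * 0.002;
        SDL_Rect src;
        src.w = (int)(texW / zoom);
        src.h = (int)(texH / zoom);
        src.x = frame % (texW - src.w + 1);           // simple rightward pan, kept in range
        src.y = 0;

        SDL_RenderClear(ren);
        SDL_RenderCopy(ren, tex, &src, NULL);         // stretched to the whole window
        SDL_RenderPresent(ren);
    }

    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```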
Is there any way to Blit a zoomed (in) version of a certain image/sprite using SDL? (Besides manually saving a zoomed version of the image..)
Thanks :-)
Not with the SDL API itself.
I think there are libraries (for SDL) that support zooming (i.e., resizing).
EDIT: http://www.ferzkopp.net/joomla/content/view/19/14/
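That link is the SDL_gfx library; a minimal sketch of using its zoomSurface() (sprite, screen and dest are assumed to exist elsewhere in the program):

```cpp
// Minimal sketch with SDL_gfx: zoomSurface() returns a new, software-scaled
// surface that can then be blitted like any other surface.
#include <SDL.h>
#include <SDL_rotozoom.h>   // ships with SDL_gfx

void BlitZoomed(SDL_Surface* sprite, SDL_Surface* screen, SDL_Rect dest, double factor)
{
    SDL_Surface* zoomed = zoomSurface(sprite, factor, factor, SMOOTHING_ON);
    SDL_BlitSurface(zoomed, NULL, screen, &dest);
    SDL_FreeSurface(zoomed);  // zoomSurface allocates a new surface on every call
}
```

Note that this is still a software scale, so for per-frame zooming the accelerated approaches mentioned elsewhere will be faster.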
Generally speaking, SDL isn't suitable for graphics that require real-time rotation and zooming. For that, you'd need a library based on an accelerated API such as DirectX or OpenGL. I understand that SFML meets this requirement.
I come from a C/C++ background and more recently Flash. Anyway, I wrote a 2D engine in AS3 and would like to get it running on the iPhone. To start with I converted it to C++. As a test I wrote a very simple C++ engine and added the files to a standard view-based application in Xcode. I then added a UIImageView that covered the whole iPhone screen.
The way my test engine is set up at the moment is that each frame it renders the result to an image which is then used to update the UIImageView every frame. Assuming I can pass input from the iPhone to the C++ engine this seems like a fairly platform-independent solution. Since I have been coding for iPhone/Mac for less than 1 day I was wondering whether this is the standard approach to getting an existing C++ engine running on the iPhone and if not, what is?
There's no problem with rendering into an image and refreshing that image, but you get no acceleration from the GPU using this technique, so you'd be burning a lot of CPU cycles, which in turn eats battery.
If the objects you are rendering can be described in normal graphics primitives, be sure to use the drawing APIs which are optimised for the platform and can delegate work to the GPU.
An alternative approach is to make use of OpenGL ES, but this has a learning curve.
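If you do stick with the image-per-frame approach from the question, here is a minimal sketch (not the author's code; the buffer name and pixel layout are assumptions) of wrapping the engine's raw RGBX framebuffer in a CGImage, which an Objective-C stub can then hand to the UIImageView via [UIImage imageWithCGImage:...]:

```cpp
// Minimal sketch: wrap a raw RGBX framebuffer in a CGImage without extra
// per-pixel work. The caller owns `pixels` and must keep it alive while the
// image is in use; release the result with CGImageRelease().
#include <CoreGraphics/CoreGraphics.h>

CGImageRef MakeImageFromRGBX(const void* pixels, size_t w, size_t h)
{
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, pixels, w * h * 4, NULL);

    CGImageRef img = CGImageCreate(w, h,
                                   8,                 // bits per component
                                   32,                // bits per pixel
                                   w * 4,             // bytes per row
                                   cs,
                                   (CGBitmapInfo)kCGImageAlphaNoneSkipLast,  // RGBX layout
                                   provider, NULL, false,
                                   kCGRenderingIntentDefault);

    CGDataProviderRelease(provider);
    CGColorSpaceRelease(cs);
    return img;
}
```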
Yes, that is a fairly normal way to handle it. Generally you would either use small Objective C stubs for your events and things like pushing out the frame, or you would setup an OpenGL context and then pass it to your C++ code.