Is it possible to draw in another window? (Using OpenCV/FFmpeg - C++)

I realize this question might be closed as "not enough research". However, I did spend about two days googling for it and didn't find a conclusive answer.
Well, I have an application that spawns a window; it is not written in C++, but it can interface with DLLs through a C interface. Now I wish to use the power of OpenCV, so I started on a DLL to extend it. Passing image data from/to the application is near impossible (it can only pass C strings and double values directly, and going through the hard drive for drawing is too slow for real-time image manipulation).
So I am looking into letting OpenCV draw the image data directly onto the window. I can obtain the window handle easily, so would it be possible to let OpenCV draw its data "over" the other window, or better yet, into the other window?
Is this even possible with any library (FFMPEG, or something else)?

Yes, it's possible, but it's far from ideal. You can use GDI to draw on top of the other window (just convert IplImage to HBITMAP). Another technique is to do such drawing in a borderless layered window.
An easier approach, since you own both applications, is to write a function that passes an IplImage between them using standard C data types; after all, an IplImage is nothing but a data type built from these standard types.
Here is how you would disassemble an IplImage into 5 standard parameters:
The size (int, int) of the image (width/height);
The (int) bit depth of the image;
The number (int) of channels;
And the (unsigned char*) pixels of the image.
After receiving these parameters on the other side, you may wonder: how do I assemble an IplImage from scratch? Call cvCreateImageHeader() followed by cvSetData().
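A minimal sketch of the receiving side, under the assumption that the pixel buffer has no row padding (widthStep equals width * channels) and stays owned by the sender; cvSetData() only attaches the buffer to the header:

```cpp
#include <opencv2/core/core_c.h>  // IplImage, cvCreateImageHeader, cvSetData

// Rebuild an IplImage header around pixels received as plain C types.
// depth is one of the IPL_DEPTH_* constants, e.g. IPL_DEPTH_8U.
IplImage* AssembleImage(int width, int height, int depth,
                        int channels, unsigned char* pixels)
{
    IplImage* img = cvCreateImageHeader(cvSize(width, height), depth, channels);
    cvSetData(img, pixels, width * channels);  // assumes no row padding
    return img;  // free with cvReleaseImageHeader(); the sender owns the pixels
}
```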

Related

Fastest way to copy my own system memory RGB array into a Win32 window

While it seems such a basic question, I open this thread after extensive Stack Overflow and Google searching, which helped but not in a definitive way.
My C++ code draws to an array used as an RGB framebuffer. Hardware acceleration is not possible for this original graphics algorithm, and I want my software to generate the images directly (no hardware acceleration), also for portability reasons.
However, I would like to use hardware acceleration to copy my C++ array into the window.
In the past I have output the images generated by my code to BMP files; now I want to display them as efficiently as possible, in real time, in the window on screen.
I wrote a Win32 program that has a window freely resizeable by the user, so far so good.
What is the best / fastest / most efficient way to (possibly v-synced) blit/copy my system-memory RGB array into the window, which can change size at any moment? (I can adapt my graphics generator to any RGB format, so conversions are avoided.) I am handling resizing via WM messages, and I'm not asking how to resize/rescale the image array; I will do that myself. I mention the resizeable window only to point out that its size cannot be fixed after creation: I will simply reallocate the RGB array and generate a new image when the user changes the window's size.
NOTE: the image array should be allocated by my own program, but if a system-provided pointer would make things much more efficient (e.g. directly in video RAM), that is fine too.
Windows 10 is the target OS, but retro-compatibility down to at least Windows 7 would be better. The hardware is PCs, even low-end ones, released in the last 5-10 years, so I assume they will all have GDI hardware acceleration or something like Direct2D (or whatever DirectDraw is called now).
NOTE: please do NOT propose the use of any library; I will deal directly with GDI calls, Direct2D, or whatever is most efficient for the task, without third-party libraries.
Thank you very much. I don't know much about Windows GUI coding; so far I've done console-mode programs plus some windows (so I am familiar with WM messages) and basic device-context graphics output. From my research (I wanted to ask here because I'm really in doubt, and lots of the posts I've read date back to 2010), SetDIBitsToDevice seems to be the best solution to my problem, but if so, I think I would still need some way to synchronize with VSync to avoid, if possible, the annoying tearing/flickering.
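For reference, a minimal sketch of the GDI route mentioned above, assuming a 32-bit top-down BGRA framebuffer; StretchDIBits (a close relative of SetDIBitsToDevice that can also rescale) copies the system-memory array into the window's client area in one call. Note that neither call waits for vertical sync, so tearing is still possible with this approach alone.

```cpp
#include <windows.h>

// Blit a system-memory framebuffer into hwnd's client area with plain GDI.
void BlitFrame(HWND hwnd, const void* pixels, int width, int height)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;        // assumed BGRA, 4 bytes per pixel
    bmi.bmiHeader.biCompression = BI_RGB;

    RECT client;
    GetClientRect(hwnd, &client);

    HDC hdc = GetDC(hwnd);
    StretchDIBits(hdc,
                  0, 0, client.right, client.bottom,  // destination rectangle
                  0, 0, width, height,                // source rectangle
                  pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwnd, hdc);
}
```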

Portable YUV Drawing Context

I have a stream of YUV data (from a video file) that I want to stream to a screen in real time. (Basically, I want to write a program that plays the video in real time.)
As such, I am looking for a portable way to send YUV data to the screen. I would ideally like to use something portable so I don't have to reimplement it for every major platform.
I have found a few options, but all of them seem to have significant issues. They are:
Use OpenGL directly, converting the YUV data to RGB. (And using the single quad for the whole screen trick.)
This obviously won't work, because converting YUV to RGB on the CPU is going to be too slow for displaying images in real time.
Use OpenGL, but use a shader to convert the YUV stream to RGB.
This option is a bit better. The problem here is that (afaict) it will involve making two streams and splicing them together. It might work, but may have issues at larger resolutions.
Instead use SDL, which has the option of creating a YUV context directly.
The problem with this is that I am already using a cross-platform widget library for other aspects of my program (such as playback controls). As far as I can tell, SDL only opens up in its own (possibly borderless) window. I would ideally like my controls and drawing context to be in the same window, which I can do with OpenGL but not with SDL.
Use SDL, and also use something like Qt for the on-screen widgets, with a message-passing protocol to communicate between the two libraries. Have the (borderless) SDL window constantly move itself on top of the OpenGL window.
While this approach is clever, it seems like the two windows could easily get out of sync, making the user experience sub-optimal.
Forget a cross-platform library; do things OS-specific, making use of hardware acceleration where present.
This is a fine solution, although it's not cross-platform.
As such, is there any good way to draw YUV data to a screen that ideally is:
Portable (at least to the major platforms).
Fast enough to be real time.
Allows other widgets in the same window.
Use option number 2. There is no problem with doing the YUV-to-RGB conversion in the shader, and there is no other comparably "portable" way to do it.
Think of it like this: no matter how big or small your video is, the fragment shader (where the conversion is done) executes once per displayed pixel, so whether you show a small video or a full-screen one, the computation for the shaders is the same, because they process the same number of pixels.
Any video card in normal conditions will be able to run this kind of shader without any problem.
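A minimal sketch of such a fragment shader, embedded as a C++ string literal; it assumes the three planes are uploaded as separate single-channel textures and uses BT.601 coefficients, both of which are assumptions you may need to adapt:

```cpp
// GLSL fragment shader source for planar YUV -> RGB, kept as a C++ string.
static const char* kYuvToRgbFragmentShader = R"(
    #version 110
    uniform sampler2D texY;   // luma plane
    uniform sampler2D texU;   // chroma U plane
    uniform sampler2D texV;   // chroma V plane
    varying vec2 uv;          // texture coordinate from the vertex shader

    void main() {
        float y = texture2D(texY, uv).r;
        float u = texture2D(texU, uv).r - 0.5;
        float v = texture2D(texV, uv).r - 0.5;
        // BT.601 conversion, full-range luma assumed
        gl_FragColor = vec4(y + 1.402 * v,
                            y - 0.344136 * u - 0.714136 * v,
                            y + 1.772 * u,
                            1.0);
    }
)";
```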

OpenCV high resolution camera capture && display

Two questions (both related):
[q1] I've read through the OpenCV documentation; it seems that no matter what the backing store of a cv::Mat is (byte, float, double), the display is always rendered down to 256 levels per channel in BGR format. What is currently the best method to bypass this limitation? Can I somehow use the OpenGL integration to pipe directly to an OpenGL display?
[q2] Is it possible to back the display with a higher-precision cv::Mat and do a dynamic transform on presentation? Again, OpenGL? Say I have a Mat that is 1024 x 768 and wish to keep it like that, but declare the window with different dimensions (512 x 384), so that feeding it the 1024 Mat scales the presentation down to 512 while the underlying interface (zoom in particular) retains the 1024 resolution. Is this possible?
I suspect the answer to both is no (as possibilities directly addressed by OpenCV), based on the documentation, so the OpenGL integration would be interesting in this respect. If it makes a difference, my OpenCV build has Qt involved. I don't want to write a bunch of GUI code; I just want to have the data in OpenCV, call a suitable display function (compatible with OpenCV), and have the data presented with somewhat higher fidelity than what imshow currently gives.
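For what it's worth, here is a minimal sketch of the OpenGL-backed highgui route the question alludes to, assuming OpenCV was built with OpenGL support (WITH_OPENGL=ON); it uploads a higher-precision Mat as a cv::ogl::Texture2D and shows it in a window whose on-screen size differs from the Mat's resolution. Whether this actually preserves more than 8 bits per channel end to end depends on the GPU and display pipeline.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/opengl.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    cv::Mat frame(768, 1024, CV_32FC3);       // higher-precision backing store
    frame = cv::Scalar(0.25, 0.5, 0.75);      // placeholder content

    cv::namedWindow("view", cv::WINDOW_OPENGL | cv::WINDOW_NORMAL);
    cv::resizeWindow("view", 512, 384);       // display size != Mat resolution

    cv::ogl::Texture2D tex(frame, true);      // upload the Mat as a GL texture
    cv::imshow("view", tex);                  // draw the texture, scaled to fit
    cv::waitKey(0);
    return 0;
}
```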

Manipulating a bitmap image in memory on Linux

I've done a bit of research but haven't found anything useful so far.
In short, I would like to be able to create a bitmap/canvas in memory and use an API that has functions for drawing primitive shapes and text on that bitmap, and then read the memory directly. This should be done completely in memory and not need a window system or something like Qt or GTK.
Why? I'm writing for the Raspberry Pi and am interfacing with a 256x64, 4-bit greyscale OLED display over SPI. This is all working fine so far. I've written a couple of functions for writing text etc., but am wondering if there is already a library I can use. I double-buffer the display, so I just need to manipulate the image in memory and then send the entire picture in one go.
I can easily do this on Windows, but am not sure of the best way to do it on Linux.
I've used ImageMagick to do similar things. It supports bitmaps and SVG. http://www.imagemagick.org/script/index.php
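A minimal sketch of that approach with Magick++ (ImageMagick's C++ API); the 256x64 geometry matches the OLED in the question, while the specific drawing calls, the font name, and the "I" (intensity) export map are illustrative assumptions:

```cpp
#include <Magick++.h>
#include <vector>

int main(int argc, char** argv)
{
    Magick::InitializeMagick(argv[0]);

    // 256x64 in-memory canvas, no window system involved.
    Magick::Image canvas(Magick::Geometry(256, 64), Magick::Color("black"));
    canvas.strokeColor("white");
    canvas.fillColor("white");
    canvas.draw(Magick::DrawableRectangle(4, 4, 60, 28));   // a primitive shape
    canvas.font("DejaVu-Sans");                             // assumed font name
    canvas.draw(Magick::DrawableText(8, 50, "Hello OLED"));

    // Read the pixels back as one 8-bit grayscale buffer ("I" = intensity);
    // packing down to the display's 4-bit format is left to the SPI code.
    std::vector<unsigned char> gray(256 * 64);
    canvas.write(0, 0, 256, 64, "I", Magick::CharPixel, gray.data());
    return 0;
}
```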

Fast Updating of QPixmap from byte array

I'm working on a vision application and I need to have a "live view" from the camera displayed on the screen using a QPixmap object. We will be updating the screen at 30 frames/second on a continuous basis.
My problem is that this application has to run on some 3-5 year old computers that, by today's standards, are slow. So what I would like to do is write directly to the display byte array inside the QPixmap. Going through the program code, almost every option for changing the contents of a QPixmap results in a new QPixmap being created. This is the overhead I'm trying to get rid of.
Additionally, I would like to prevent all the new/deletes from occurring, just to keep memory fragmentation under control.
Any suggestions?
First of all, the most important piece of information regarding the "picture" classes in Qt:
QImage is designed and optimized for I/O, and for direct pixel access and manipulation, while QPixmap is designed and optimized for showing images on screen.
What this means is that QPixmap is a generic representation of your platform's native image format: Pixmap on Unix, HBITMAP on Windows, CGImageRef on the Mac. QImage is a "pixel array with operations" type of class.
I'm assuming the following:
You are reading raw camera frames in a specific pixel format
You really are having memory fragmentation issues (as opposed to emotionally having them)
My advice is to use QImage instead of QPixmap. Specifically, there is a constructor that accepts a raw byte array and uses it directly as the pixel buffer:
QImage::QImage(uchar *data, int width, int height, int bytesPerLine, Format format)
Having constructed a QImage, use a QPainter to draw it to a widget at the desired frequency. Be warned however that:
If you are reading raw camera frames, format conversion may still be necessary; twice, in the worst case: camera ➔ QImage ➔ platform bitmap.
You cannot avoid memory allocation from the free store when using QPixmap and QImage: they are implicitly shared classes and necessarily allocate memory from the free store. (On the other hand, that means you should not new/delete them explicitly.)
Our team managed to display fullscreen compressed video smoothly on Atom-powered computers using only Qt (albeit at a lower framerate). If this does not solve your problem, however, I'd bypass Qt and use the native drawing API. If you absolutely need platform independence, then OpenGL or SDL may be good solutions.
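A minimal sketch of that advice, assuming a tightly packed RGB888 camera buffer owned by the caller that stays alive for the lifetime of the widget; the QImage constructor below wraps the buffer without copying it, and a repaint is scheduled whenever a new frame has been written into it:

```cpp
#include <QImage>
#include <QPainter>
#include <QWidget>

class LiveView : public QWidget {
public:
    // frameData points to width * height * 3 bytes of RGB888 owned by the caller.
    LiveView(const uchar* frameData, int width, int height, QWidget* parent = nullptr)
        : QWidget(parent),
          m_frame(frameData, width, height, width * 3, QImage::Format_RGB888) {}

    // Call this after the camera has written a new frame into the same buffer.
    void frameReady() { update(); }

protected:
    void paintEvent(QPaintEvent*) override {
        QPainter painter(this);
        painter.drawImage(rect(), m_frame);  // scales to the widget if needed
    }

private:
    QImage m_frame;  // wraps the external buffer; no deep copy unless modified
};
```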
I have found that QImages are faster for Direct I/O operations.
Could you provide more detail as to what you are getting and trying to do with the QPixmap?