CUDA how to get pixels from screen? - c++

Good afternoon.
I found this article, but it shows how to take pixels from an image file in a folder. Is it possible to take pixels straight from the desktop?
How to get image pixel value and image height and width in CUDA C?

It's not possible to use CUDA to get pixels from the screen/desktop/application window. You would have to use some sort of graphics API, like some X extension or DirectX (or OpenGL if the window you are working on is under control of OpenGL).
Once you have acquired the pixels via your graphics API, you can pass that to CUDA using CUDA/Graphics interop.
There are many resources for screen capture. Here is one example. There are many others.
One possible suggestion is to use the NVIDIA Capture SDK. However, this is not formally part of CUDA. It is one possible method to get the screen pixels into a resource accessible to CUDA. (And the functionality is deprecated on Windows.)
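To make the division of labor concrete, here is a rough sketch of the simplest (non-interop) route on Windows: grab the desktop with GDI, then hand the pixels to CUDA with an ordinary cudaMemcpy. The function name and the BGRA layout are just illustrative assumptions; avoiding the trip through system memory is exactly what the graphics interop mentioned above is for.

    #include <windows.h>
    #include <cuda_runtime.h>
    #include <vector>

    void captureDesktopToCuda(unsigned char** d_pixels, int* outW, int* outH)
    {
        HDC screenDC = GetDC(NULL);                    // DC for the whole desktop
        int w = GetSystemMetrics(SM_CXSCREEN);
        int h = GetSystemMetrics(SM_CYSCREEN);

        HDC memDC = CreateCompatibleDC(screenDC);
        HBITMAP bmp = CreateCompatibleBitmap(screenDC, w, h);
        HGDIOBJ old = SelectObject(memDC, bmp);
        BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);  // copy the screen
        SelectObject(memDC, old);                      // deselect before GetDIBits

        // Pull the pixels out as 32-bit top-down BGRA
        BITMAPINFO bi = {};
        bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bi.bmiHeader.biWidth       = w;
        bi.bmiHeader.biHeight      = -h;               // negative = top-down rows
        bi.bmiHeader.biPlanes      = 1;
        bi.bmiHeader.biBitCount    = 32;
        bi.bmiHeader.biCompression = BI_RGB;
        std::vector<unsigned char> host(size_t(w) * h * 4);
        GetDIBits(memDC, bmp, 0, h, host.data(), &bi, DIB_RGB_COLORS);

        // The frame is now ordinary host memory; copy it to the device
        cudaMalloc((void**)d_pixels, host.size());
        cudaMemcpy(*d_pixels, host.data(), host.size(), cudaMemcpyHostToDevice);
        *outW = w;
        *outH = h;

        DeleteObject(bmp);
        DeleteDC(memDC);
        ReleaseDC(NULL, screenDC);
    }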

Related

Does OpenGL display image faster than OpenCV?

I am using OpenCV to show an image on the projector. But it seems cv::imshow is not fast enough, or maybe the data transfer is slow from my CPU to the GPU and then to the projector, so I wonder if there is a faster way to display than OpenCV?
I considered OpenGL: since OpenGL uses the GPU directly, displaying may be faster than going through the CPU as OpenCV does. Correct me if I am wrong.
OpenCV already supports OpenGL for image output by itself. No need to write this yourself!
See the documentation:
http://docs.opencv.org/modules/highgui/doc/user_interface.html#imshow
http://docs.opencv.org/modules/highgui/doc/user_interface.html#namedwindow
Create the window first with namedWindow, where you can pass the WINDOW_OPENGL flag.
Then you can even use OpenGL buffers or GPU matrices as input to imshow (the data never leaves the GPU). But it will also use OpenGL to show regular matrix data.
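For illustration, a rough sketch of that usage (written against the OpenCV 3/4 headers and the cv::cuda namespace; in 2.4.x the GPU container is cv::gpu::GpuMat instead, and the build needs WITH_OPENGL=ON as noted below):

    #include <opencv2/core.hpp>
    #include <opencv2/core/cuda.hpp>
    #include <opencv2/highgui.hpp>

    int main()
    {
        // OpenGL-backed window, made fullscreen for projector output
        cv::namedWindow("projector", cv::WINDOW_OPENGL);
        cv::setWindowProperty("projector", cv::WND_PROP_FULLSCREEN,
                              cv::WINDOW_FULLSCREEN);

        // Upload a frame once; after this the data stays on the GPU
        cv::Mat frame(768, 1024, CV_8UC3, cv::Scalar(0, 0, 255));
        cv::cuda::GpuMat gpuFrame;
        gpuFrame.upload(frame);

        cv::imshow("projector", gpuFrame);  // rendered via OpenGL, no readback
        cv::waitKey(0);
        return 0;
    }
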
Please note:
To enable OpenGL support, configure OpenCV using CMake with WITH_OPENGL=ON. Currently OpenGL is supported only with the WIN32, GTK and Qt backends on Windows and Linux (MacOS and Android are not supported). For the GTK backend, the gtkglext-1.0 library is required.
Note that this is OpenCV 2.4.8 and this functionality has changed quite recently. I know there was OpenGL support in earlier versions in conjunction with the Qt backend, but I don't remember when it was introduced.
About the performance: it is quite a popular optimization in the CV community to output images using OpenGL, especially when outputting video sequences.
OpenGL is optimised for rendering images, so it's likely faster. It really depends on whether the OpenCV implementation uses any GPU acceleration and whether the bottleneck is on the rendering side of things.
Have you tried GPU accelerated OpenCV? - http://opencv.org/platforms/cuda.html
How big is the image you are displaying? How long does it take to display the image using cv::imshow now?
I know it's an old question, but I happened to have exactly the same problem. And from my observations I've concluded that the root of the problem is the projector's own latency, especially if one is using an older model.
How did I conclude this?
I displayed the same video sequence with cv::imshow() on the laptop monitor and on the projector. Then I waved my hand. It was obvious that the projector introduces significant latency.
To double-check, I opened a webcam video, waved my hand in front of it, and observed the difference on the monitor and on the projector. The webcam does no processing and no OpenCV operations, so in my understanding the only thing that would explain the latency is the projector itself.

Font Outline Bezier Spline Contours and DirectWrite

I'm considering maybe using DirectWrite for a project that will be coming in a DirectX 11 version and a OpenGL 3.1+ version.
From what I understand, DirectWrite uses Direct2D, which sits on top of Direct3D 10.1 (until DirectX 11.1 is released). This means that to use DirectWrite with Direct3D 11, currently, I would have to create one Direct3D 10.1 device and one Direct3D 11 device and then share the resources between these two devices, which comes with some synchronization overhead, it seems. Another problem is that I won't be able to render the text directly to the D3D11 back buffer with this setup, right...?
Also I have no idea if it is even possible to combine DirectWrite with OpenGL in any practical sense..? My guess is not...
Sooo... I'm also considering writing my own font renderer, and I would like to be able to render the fonts based on their Bezier spline outlines for resolution independence. I know about the GetGlyphOutline() function, but it seems to be in the process of being deprecated and "...should not be used in new applications" according to the MSDN library. And looking at DirectWrite's reference pages on MSDN, I can't see any way of getting the same Bezier spline information as you can with GetGlyphOutline(). You can get the outline information wrapped in an ID2D1SimplifiedGeometrySink, but I can't see how to get the pure Bezier curve (control point) information out of the ID2D1SimplifiedGeometrySink; you can only use it for drawing with D2D (D3D10.1), which I am not so much interested in at this point.
Is there a way to get the font outline contours using a non-deprecated method, DirectWrite or otherwise?
I'm not that familiar with either DirectWrite or Direct2D, as you can probably tell. I'm trying to figure out what direction to take: whether it is worth going down the DirectWrite/D2D road, or to make my own font renderer, or some other brilliant idea :). Any suggestions?
PS I'm currently developing for the Win7 platform and will migrate to Win8 when it is released.
Fortunately you have it a little backwards. Direct2D uses DirectWrite, not the other way around. You can technically use DirectWrite on its own (see: IDWriteBitmapRenderTarget). If you're at the prototyping stage, it may be easier to use Direct2D to do software rendering into an IWICBitmap created through IWICImagingFactory::CreateBitmap() (or just wrap a bitmap you've already created, and implement IWICBitmap yourself). You use ID2D1Factory::CreateWicBitmapRenderTarget(), call ID2D1RenderTarget::BeginDraw(), then create your IDWriteTextFormat and/or IDWriteTextLayout via IDWriteFactory, and then call DrawText() or DrawTextLayout() on the render target, then EndDraw(). Then you copy into a hardware texture and draw it however you like.
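For what it's worth, here is a rough sketch of that flow (error handling omitted; the font, size, and bitmap dimensions are placeholder values for this illustration):

    #include <windows.h>
    #include <d2d1.h>
    #include <dwrite.h>
    #include <wincodec.h>

    void renderTextToWicBitmap()
    {
        CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

        // Factories for Direct2D, DirectWrite and WIC
        ID2D1Factory* d2dFactory = nullptr;
        D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &d2dFactory);
        IDWriteFactory* dwFactory = nullptr;
        DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                            reinterpret_cast<IUnknown**>(&dwFactory));
        IWICImagingFactory* wicFactory = nullptr;
        CoCreateInstance(CLSID_WICImagingFactory, nullptr, CLSCTX_INPROC_SERVER,
                         IID_PPV_ARGS(&wicFactory));

        // Software surface to render into
        IWICBitmap* bitmap = nullptr;
        wicFactory->CreateBitmap(512, 128, GUID_WICPixelFormat32bppPBGRA,
                                 WICBitmapCacheOnLoad, &bitmap);
        ID2D1RenderTarget* rt = nullptr;
        d2dFactory->CreateWicBitmapRenderTarget(bitmap,
            D2D1::RenderTargetProperties(), &rt);

        // Describe the text and draw it
        IDWriteTextFormat* format = nullptr;
        dwFactory->CreateTextFormat(L"Segoe UI", nullptr,
            DWRITE_FONT_WEIGHT_NORMAL, DWRITE_FONT_STYLE_NORMAL,
            DWRITE_FONT_STRETCH_NORMAL, 32.0f, L"en-us", &format);
        ID2D1SolidColorBrush* brush = nullptr;
        rt->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), &brush);

        rt->BeginDraw();
        rt->Clear(D2D1::ColorF(D2D1::ColorF::Black));
        rt->DrawText(L"Hello, DirectWrite", 18, format,
                     D2D1::RectF(0, 0, 512, 128), brush);
        rt->EndDraw();

        // bitmap->Lock() / CopyPixels() now gives CPU access to the result,
        // which you can upload into a D3D11 or OpenGL texture yourself.
    }
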

Cinder: How to get a pointer to data\frame generated but never shown on screen?

There is this great lib I want to use called libCinder. I looked through its docs but could not tell whether it is possible, and how, to render something without showing it first.
Say we want to create a simple random-color 640x480 canvas with three red, white and blue circles on it, and get an RGB/HSL/any char* pointer to the raw image data out of it without ever showing any window to the user (say we have a console application project type). I want to use such a feature for server-side live video stream generation, and for the streaming itself I would prefer to use ffmpeg, which is why I want a pointer to some RGB/HSV or whatever buffer with the actual image data. How do I do such a thing with libCinder?
You will have to use off-screen rendering. libcinder seems to be just a wrapper for OpenGL, as far as graphics go, so you can use OpenGL code to achieve this.
Since OpenGL does not have a native mechanism for off-screen rendering, you'll have to use an extension. A tutorial for using such an extension, framebuffer objects (FBOs), can be found here. You will have to modify renderer.cpp to use this extension's commands.
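For illustration, a minimal raw-OpenGL sketch of that idea, assuming a GL context already exists (the function name and sizes here are my own, not part of libcinder):

    #include <GL/glew.h>
    #include <vector>

    std::vector<unsigned char> renderOffscreen(int width, int height)
    {
        glewInit();  // load FBO entry points (assumes a current GL context)

        // Color attachment the scene will be rendered into
        GLuint colorTex = 0;
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, nullptr);

        // Framebuffer object that replaces the window as render target
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);

        glViewport(0, 0, width, height);
        glClearColor(0.2f, 0.4f, 0.8f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the circles here with ordinary GL calls ...

        // Read the raw RGB pixels back for ffmpeg or any other consumer
        std::vector<unsigned char> pixels(size_t(width) * height * 3);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        glDeleteTextures(1, &colorTex);
        return pixels;
    }
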
An alternative to using such an extension is to use Mesa 3D, which is an open-source implementation of OpenGL. Mesa has a software rendering engine which allows it to render into memory without using a video card. This means you don't need a video card, but on the other hand the rendering might be slow. Mesa has an example of rendering to a memory buffer at src/osdemos/ in the Demos zip file. This solution will probably require you to write a complete Renderer class, similar to Renderer2d and RendererGl, which will use Mesa's instructions instead of Windows's or Mac's.
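And, for comparison, a minimal sketch of the Mesa/OSMesa route, which needs no window or video card at all (again, the sizes and drawing here are placeholders):

    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <vector>

    int main()
    {
        const int W = 640, H = 480;

        // Software GL context that renders straight into host memory
        OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
        std::vector<unsigned char> buffer(size_t(W) * H * 4);
        OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, W, H);

        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the circles with ordinary GL calls ...
        glFinish();

        // buffer now holds raw RGBA pixels, ready to hand to ffmpeg
        OSMesaDestroyContext(ctx);
        return 0;
    }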

Controlling the individual pixels of a projector

I need to control the individual pixels of a projector (an Infocus IN3104) whose native resolution is 1024x768. I would like to know which subset of functions in C or an API I can use to do this, either by:
Functions that control the individual pixels of the adapter (not the pixels of a window).
A pixel-perfect, 1:1 map from an image file (1024x768) to the adapter set at the native resolution of the projector.
In a related question (How can I edit individual pixels in a window?), the answerer Caladain states "Things have come a bit from the old days of direct memory manipulation." I feel I need to go back to that to achieve my goal.
I don't know enough about the "graphics pipeline" to know what API or software tool to use. I'm overwhelmed by the number of technologies when I search this topic. I program in R, which easily interfaces to C, but would welcome suggestions of subsets of functions in OpenGL or C++ or ..... any other technology?
Or even a full-blown (rendering) application which will map without applying a transformation.
For example, even MS Paint has View > Bitmap, but some transformation gets applied and I don't get pixel-perfect rendering. This projector has a DisplayLink digital input, and I've also tried to tweak the timing parameters when using the VESA inputs, so I don't think the transformation happens in the projector. In any case, using MS Paint would not be flexible enough for me.
Platform: Linux or Windows.
I don't see a reason why a full-screen window, e.g. using SDL, wouldn't work. Normal bitmapped graphics are always 1:1; there shouldn't be any weird scaling going on behind your back for a full-screen window.
Since SDL is portable, you should be able to run the same code in Windows or Linux (or any other supported platform).
The usual approach to this problem on current systems is:
Set graphics card to desired resolution
Create borderless full screen window
Draw whatever you want
There's really not much to gain from "low-level access", although it would certainly be possible.
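As a rough sketch of the above using SDL2 (the flags and the solid red test frame are just placeholders), a fullscreen window at the projector's native 1024x768 gives a 1:1 pixel mapping:

    #include <SDL.h>
    #include <cstdint>
    #include <vector>

    int main()
    {
        const int W = 1024, H = 768;
        SDL_Init(SDL_INIT_VIDEO);

        // Fullscreen window at the projector's native resolution
        SDL_Window* win = SDL_CreateWindow("projector", 0, 0, W, H,
                                           SDL_WINDOW_FULLSCREEN);
        SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
        SDL_Texture* tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                             SDL_TEXTUREACCESS_STREAMING, W, H);

        // One ARGB value per projector pixel; texture and window are the
        // same size, so the copy below is 1:1 with no scaling
        std::vector<uint32_t> pixels(size_t(W) * H, 0xFFFF0000); // opaque red
        SDL_UpdateTexture(tex, nullptr, pixels.data(), W * sizeof(uint32_t));
        SDL_RenderClear(ren);
        SDL_RenderCopy(ren, tex, nullptr, nullptr);
        SDL_RenderPresent(ren);

        SDL_Delay(5000);
        SDL_DestroyTexture(tex);
        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }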

Sharing a texture between direct3d and opengl?

I know mixing OpenGL and DirectX is not recommended, but I'm trying to build a bridge between two different applications that use separate graphics APIs, and I'm hoping there is a technique for sharing data, specifically textures.
I have a texture that is created in Direct3D like this:
    d3_device->CreateTexture(width, height, 1,
                             D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8,
                             D3DPOOL_DEFAULT, &texture, NULL);
Is there any way I can use this texture from OpenGL without taking a roundtrip through system memory?
YES. As previously posted (see below), there should exist at least one solution.
I found two possible solutions:
On NVIDIA cards a new extension was introduced with the 256 drivers; see http://developer.download.nvidia.com/opengl/specs/WGL_NV_DX_interop.txt
DXGI is the driving force for compositing all windows in Vista and Windows 7; see msdn.microsoft.com/en-us/library/ee913554.aspx
I have no experience with either solution yet, but I hope I will find time to test one of them. To me the first one seems to be the easier one.
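For illustration, a rough sketch of the WGL_NV_DX_interop flow from the first link, assuming a Direct3D 9 device, the texture from the question, and a current OpenGL context (entry points are loaded with wglGetProcAddress; the typedefs and enums come from wglext.h):

    #include <windows.h>
    #include <d3d9.h>
    #include <GL/gl.h>
    #include "wglext.h"

    void drawSharedTexture(IDirect3DDevice9* d3_device, IDirect3DTexture9* texture)
    {
        auto wglDXOpenDeviceNV = (PFNWGLDXOPENDEVICENVPROC)
            wglGetProcAddress("wglDXOpenDeviceNV");
        auto wglDXRegisterObjectNV = (PFNWGLDXREGISTEROBJECTNVPROC)
            wglGetProcAddress("wglDXRegisterObjectNV");
        auto wglDXLockObjectsNV = (PFNWGLDXLOCKOBJECTSNVPROC)
            wglGetProcAddress("wglDXLockObjectsNV");
        auto wglDXUnlockObjectsNV = (PFNWGLDXUNLOCKOBJECTSNVPROC)
            wglGetProcAddress("wglDXUnlockObjectsNV");

        // Tie the current GL context to the D3D device
        HANDLE interopDevice = wglDXOpenDeviceNV(d3_device);

        // Register the D3D texture against a GL texture name
        GLuint glTex = 0;
        glGenTextures(1, &glTex);
        HANDLE interopTex = wglDXRegisterObjectNV(interopDevice, texture, glTex,
                                                  GL_TEXTURE_2D,
                                                  WGL_ACCESS_READ_ONLY_NV);

        // While locked, the D3D contents are visible to OpenGL without a
        // round trip through system memory
        wglDXLockObjectsNV(interopDevice, 1, &interopTex);
        glBindTexture(GL_TEXTURE_2D, glTex);
        // ... draw with the shared texture ...
        wglDXUnlockObjectsNV(interopDevice, 1, &interopTex);
    }
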
[I think it should be possible. In recent Windows versions (Vista and 7) one can see a preview of any window's content in the taskbar (whether it's GDI, Direct3D, or OpenGL).
To my knowledge, OpenGL previews were not supported in earlier Windows versions. So at least in the newer versions there should be a way to couple or share render contexts, even between different processes...
This is also true for other modern platforms which share render contexts system-wide to create different rendering effects.]
I think it is not possible, as both have different models of a texture.
You cannot access the texture memory directly without either DirectX or OpenGL.
The other way around: if it were possible, you should be able to retrieve the texture address, its pitch, width and other (hardware-dependent) memory layout information, create a dummy texture in the other system, and push the retrieved data into the texture object you just created. This is not possible.
Obviously, this will not work on any decent hardware, and even if it did it would not be very portable.
I don't think it's possible without downloading the data into host memory and re-uploading it into device memory.
It's possible now.
Use the ANGLE OpenGL API instead of native OpenGL.
You can share a Direct3D texture with the EGL_ANGLE_d3d_texture_client_buffer extension.
https://github.com/microsoft/angle/wiki/Interop-with-other-DirectX-code#demo
No.
Think of it like sharing an image between Photoshop and another image viewer. You would need a memory management library that those two applications shared.