RealSense SDK draw 3D scanning boundaries - C++

I am writing a C++ app for 3D scanning. I'd like to modify the image I get from
PXCImage* image = scanner->AcquirePreviewImage();
or, directly, the QImage I create from
QImage* qimage = new QImage(imageData.planes[0], width, height, QImage::Format_RGB32);
to display the rectangle that represents the scanned area, as in the image below (taken from the C# example).
I am probably just missing some understanding of the SDK, but I'd be grateful if someone could explain it to me.
(I am using PXC3DScan::ScanningMode::VARIABLE, in case that affects the process.)

I received an answer from Intel:
Drawing the scan boundary is not supported when the variable scan mode is selected. We will provide this feature in a future release. Thanks!
(Current SDK version: 7.0.23.8048)

Related

CUDA how to get pixels from screen?

Good afternoon.
I found this article, but it shows how to take pixels from an image file in a folder. Is it possible to take pixels straight from the desktop?
How to get image pixel value and image height and width in CUDA C?
It's not possible to use CUDA to get pixels from the screen/desktop/application window. You would have to use some sort of graphics API, such as an X extension or DirectX (or OpenGL, if the window you are working with is under OpenGL's control).
Once you have acquired the pixels via your graphics API, you can pass that to CUDA using CUDA/Graphics interop.
There are many resources for screen capture. Here is one example. There are many others.
One possible suggestion is to use the NVIDIA Capture SDK. However, this is not formally part of CUDA; it is one possible method to get the screen pixels into a resource accessible to CUDA. (Note that this functionality is deprecated on Windows.)
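To make the CUDA/Graphics interop step above more concrete, here is a hedged sketch, assuming an existing OpenGL context and a texture `tex` that your capture mechanism has already filled with screen pixels (the function name and overall flow are illustrative, not from the original answer):

```cuda
#include <cuda_gl_interop.h>  // CUDA/OpenGL interop API

// Hypothetical helper: 'tex' is an OpenGL texture already containing the
// captured frame (filled via an X extension, DirectX, Capture SDK, etc.).
void mapCapturedTextureToCuda(GLuint tex)
{
    cudaGraphicsResource* resource = nullptr;

    // Register the GL texture once so CUDA can access it.
    cudaGraphicsGLRegisterImage(&resource, tex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsReadOnly);

    // Map the resource for this frame and obtain the backing cudaArray.
    cudaGraphicsMapResources(1, &resource, 0);
    cudaArray_t pixels = nullptr;
    cudaGraphicsSubResourceGetMappedArray(&pixels, resource, 0, 0);

    // ... bind 'pixels' to a texture/surface object and launch your kernel ...

    // Unmap so OpenGL can use the texture again, and clean up.
    cudaGraphicsUnmapResources(1, &resource, 0);
    cudaGraphicsUnregisterResource(resource);
}
```

In a real application you would register the resource once at startup and only map/unmap it per frame, since registration is comparatively expensive.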

Why is my Qt5 QPixmap not correctly showing frames processed by some OpenCV algorithms?

Thank you in advance for your support.
I am using OpenCV to process video frames taken by a video camera, and I am showing the processed frames in a simple GUI implemented in Qt5. In the GUI, the images are shown using a QPixmap in a label. The OpenCV algorithms should be right, since the outputs are correct when I write them to disk, and they are basically examples provided by OpenCV.
I have implemented different kinds of processing. For conversion from color to greyscale and for binary thresholding (see image 1), the results are fine (this "view" of the camera is right). Nevertheless, when trying to display keypoint detections (using SURF, see image 2) and contour detections (using Canny, see image 3) "in real time", the displayed images are strange.
The main problem is that they seem to be at the same time "much closer" (see 2) and doubled (see 3).
In the Qt code I am using:
ui->labelView->setScaledContents(true);
I do the conversion from the processed OpenCV frame to QImage using:
QImage output((const unsigned char*) _frameProcessed.data, _frameProcessed.cols, _frameProcessed.rows, QImage::Format_Indexed8);
And I display the image using:
ui->labelView->setPixmap(QPixmap::fromImage(frame));
The GUI and the OpenCV processing are running in different threads: I move the image processing to a thread in an initial setup.
If you needed further information please just let me know.
Thank you very much in advance!
Best regards,
As @Micka pointed out, it had to do with the formats of the images. I found this handy code providing functions for automatic conversion between OpenCV and Qt formats.

Problems rendering an image in gtk

I'm programming an application in C++ with a GUI in GTK3 that will show the images obtained from a GenICam camera. I have the official API of the camera, which handles the acquisition and returns an unsigned char* to the buffer where the image is contained.
The problem comes when I try to convert the image to a GTK format to render it in the GUI. I've tried cairo and pixbuf, but I have problems with both: the image is in MONO8 format, and pixbuf only deals with RGB. Cairo can deal with 8-bit images, but only if they have an alpha channel, which is not the case here.
Does someone know a way to approach this issue?
Thanks in advance

adding cliparts to image/video OpenCV

I'm building a webcam application as my C++ project in college. I am integrating Qt (for the GUI) and OpenCV (for image processing). My application will be a simple webcam app that will access the webcam, show/record videos, capture images, and so on.
I also want to add a feature to overlay cliparts on captured images or on the streaming video. In my research, I found claims that there is no way to overlay two images using OpenCV; the best alternative I could find was to re-render the whole image, baking the clipart into the original so they become a single image. That's not going to work for me, as I need to be able to move, resize, and rotate the clipart on my canvas.
So, I was wondering if anybody could tell me how to achieve the effect I want most efficiently.
I would really appreciate your help. The deadline for the project submission is closing in, and this is a huge bump on the road to completion. Please help!
If you just want to stick a logo onto the OpenCV image, then you simply define a region of interest (ROI) on the destination video image and copy the source image into it (the details vary with each version of OpenCV).
If you want the logo to be semi-transparent, like a TV channel ID, then you can copy the image but loop over the pixels, writing a destination value of source_pixel/2 + dest_pixel/2.

C++ Spin Image Resources

Does anyone know of a good resource that will show me how to load an image with C++ and spin it?
By spin, I mean an actual animation of the image rotating, not physically rotating the image data and saving it.
If I am not clear on what I am asking, please ask for clarification before downvoting.
Thanks
I would definitely default to OpenGL for this type of task: you can load the image into a texture, then either redraw the image at different angles or, better yet, just spin the 'camera' in the OpenGL scene. There are loads of OpenGL tutorials around; a quick Google search will get you everything you need.
You could use SDL and the extensions SDL_image and/or SDL_gfx.
In Windows, using GDI+, you could show a rotated image in the following way:
Graphics graphics( GetSafeHwnd() ); // initialize from window handle
// You can construct Image objects from a variety of
// file types including BMP, ICON, GIF, JPEG, Exif, PNG, TIFF, WMF, and EMF.
Image image( L"someimage.bmp" );
graphics.RotateTransform( 30.0f ); // 30 - angle, in degrees.
graphics.DrawImage( &image, 0, 0 ); // draw rotated image
You can read a more detailed explanation here.
A second solution is to use DirectX. You could create a texture from a file and later render it. It is not a trivial solution, but it will use hardware acceleration and give you the best performance.
On Windows 7 there is a new API available called Direct2D. I have not used it yet, but it looks promising:
Direct2D provides Win32 developers with the ability to perform 2-D graphics rendering tasks with superior performance and visual quality.
I agree with DeusAduro. OpenGL is a good way of doing this.
You can also do this with Qt.
The "roll-your-own" solution is difficult.
I'd suggest looking into WPF - it might have some nice options in an image control.