Combining two examples with the ZED 2 camera - C++

I am trying to combine two examples, Depth Sensing and Advanced Point Cloud Mapping. Link (https://github.com/stereolabs/zed-examples/tree/master/spatial%20mapping/advanced%20point%20cloud%20mapping)
I successfully get output with Depth Sensing, but I cannot get the point cloud mapping to work. I am also not getting any errors. When the program starts I can see a very small dot/pixel that moves when I move the camera, but nothing else is drawn with OpenGL.
Does anyone know where the problem is?
Also, when I make the same changes in the point cloud mapping program, I can see its output but not the Depth Sensing output. I built everything with CMake.
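For context, a rough sketch of how the two examples might be combined in a single grab loop (based on the ZED SDK 3.x C++ API; the OpenGL viewer and error handling are left out, and this is not the exact example code):

#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;

    // Depth Sensing part: open the camera with a depth mode enabled.
    sl::InitParameters init_params;
    init_params.depth_mode = sl::DEPTH_MODE::ULTRA;
    init_params.coordinate_units = sl::UNIT::METER;
    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS) return 1;

    // Point cloud mapping part: spatial mapping requires positional tracking.
    zed.enablePositionalTracking();
    sl::SpatialMappingParameters mapping_params;
    mapping_params.map_type = sl::SpatialMappingParameters::SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD;
    zed.enableSpatialMapping(mapping_params);

    sl::Mat depth, point_cloud;
    sl::FusedPointCloud map;   // grows as the camera moves

    for (int i = 0; i < 500; ++i) {
        if (zed.grab() != sl::ERROR_CODE::SUCCESS) continue;

        // Per-frame depth measures (Depth Sensing example).
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);
        zed.retrieveMeasure(point_cloud, sl::MEASURE::XYZRGBA);

        // Ask for the fused point cloud every few frames (mapping example);
        // this is the data the OpenGL viewer should draw.
        if (i % 30 == 0) zed.requestSpatialMapAsync();
        if (zed.getSpatialMapRequestStatusAsync() == sl::ERROR_CODE::SUCCESS)
            zed.retrieveSpatialMapAsync(map);
    }

    zed.disableSpatialMapping();
    zed.disablePositionalTracking();
    zed.close();
    return 0;
}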

Related

CGAL: Hole Filling .exe file is stuck

This is my terminal (result) after running the .exe file. Click the link for the terminal.
It doesn't stop or give an error; it stays stuck like this for hours.
I got my code from here (code), but this is the data (.off) I used.
You are probably trying to fill a hole that is too large or that cannot be filled using the 3D Delaunay triangulation search space (which can also happen if you have pinched holes). In CGAL 5.5 (not yet released but available in master), we added the option do_not_use_cubic_algorithm() (doc here) to avoid using the cubic search space to fill such holes.
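For illustration, a minimal sketch of how that parameter might be passed to the hole-filling call, assuming the CGAL 5.5 API and a Surface_mesh loaded from the .off file (check the linked docs for the exact parameter spelling):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/border.h>
#include <CGAL/Polygon_mesh_processing/triangulate_hole.h>
#include <fstream>
#include <iterator>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
namespace PMP = CGAL::Polygon_mesh_processing;

int main() {
    Mesh mesh;
    std::ifstream in("input.off");
    in >> mesh;                                   // the .off data mentioned above

    // Collect one border halfedge per hole.
    std::vector<Mesh::Halfedge_index> border_cycles;
    PMP::extract_boundary_cycles(mesh, std::back_inserter(border_cycles));

    for (Mesh::Halfedge_index h : border_cycles) {
        std::vector<Mesh::Face_index> patch;
        // The new named parameter skips the cubic (non-Delaunay) search space,
        // so huge or pinched holes no longer make the call run "forever".
        PMP::triangulate_hole(mesh, h, std::back_inserter(patch),
                              CGAL::parameters::do_not_use_cubic_algorithm(true));
    }

    std::ofstream out("filled.off");
    out << mesh;
    return 0;
}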

Toggling NvAPI_Stereo_Deactivate/NvAPI_Stereo_activate crashes the unity application

I'm currently working on an external plugin for Unity3D which uses NVAPI & 3D Vision. In NVAPI there are two API calls to turn active stereo on/off:
NvAPI_Stereo_Deactivate
NvAPI_Stereo_Activate
So whenever I try to toggle stereo on/off, it crashes at a random time with the following exception:
Unity Player [version: Unity 2017.1.0f3 (472613c02cf7)]
nvwgf2umx.dll caused an Access Violation (0xc0000005) in module nvwgf2umx.dll at 0033:6f9981d8.
The crash can happen on the third try or on any later one. My current assumption is that it has something to do with some value accessed by the DLL; the problem is that, since it's NVIDIA-internal, I have no access to it.
I have already tried other simple measures such as turning VSync off and setting quality to maximum in Manage 3D Settings, but they all fail.
I did come across a similar issue on the NVIDIA dev forums, but there seems to be no answer to it. Any suggestions or help regarding this would be greatly appreciated.
Also here is the link to error log
I have managed to fix the above issue in a roundabout way. Instead of using the
NvAPI_Stereo_Deactivate
NvAPI_Stereo_Activate
functions to turn 3D Vision on and off, I pass the render texture to the mono eye via NvAPI_Stereo_SetActiveEye for the mono camera, while in active mode I pass it to the Left Eye and Right Eye respectively. Toggling seems to work properly. I have also noticed that calling NvAPI_Stereo_IsActivated in a loop seems to cause the same access violation, so only use NvAPI_Stereo_SetActiveEye to set the eye and don't mess around with the other NVAPI native functions. One downside of this approach is that the 3D emitter stays on until the application exits (for my project that is acceptable). Hope this helps anyone coming across this problem in the future. Do update the answer if anyone has a better solution; that would be nice.
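As an illustration of the workaround, a rough per-frame sketch of the eye selection; the render helper is a placeholder for the plugin's own D3D11 code, and the StereoHandle is assumed to come from NvAPI_Stereo_CreateHandleFromIUnknown after NvAPI_Initialize:

#include "nvapi.h"   // link against nvapi64.lib

// Placeholder for the plugin's own D3D11 draw calls into the current eye's
// render texture (hypothetical helper, not part of NVAPI).
void RenderSceneToCurrentEyeTexture() {}

// Keep stereo active the whole time and steer the output per frame with
// NvAPI_Stereo_SetActiveEye instead of toggling Deactivate/Activate.
void RenderFrame(StereoHandle stereoHandle, bool stereoWanted) {
    if (stereoWanted) {
        NvAPI_Stereo_SetActiveEye(stereoHandle, NVAPI_STEREO_EYE_LEFT);
        RenderSceneToCurrentEyeTexture();   // left-eye render texture

        NvAPI_Stereo_SetActiveEye(stereoHandle, NVAPI_STEREO_EYE_RIGHT);
        RenderSceneToCurrentEyeTexture();   // right-eye render texture
    } else {
        // "Off" state: render the same mono image; the 3D emitter stays on.
        NvAPI_Stereo_SetActiveEye(stereoHandle, NVAPI_STEREO_EYE_MONO);
        RenderSceneToCurrentEyeTexture();
    }
}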

Using OpenGL from ArrayFire

If I have an af::array A already in GPU memory, what is the procedure to pass it through to OpenGL? My intention is to plot it as a line graph using OpenGL, but I'm not sure how to deal with the fact that the ArrayFire backend could be OpenCL, CUDA or even the CPU. How does OpenGL take ownership of the array? I would prefer to avoid copying if possible.
On a separate note, if I use the built-in Forge library to plot graphs in ArrayFire, I find that if I follow the tutorials to plot a graph and then press and hold on the data in the plot and drag it somewhere else, the plot moves from its original location and is no longer correctly aligned with the axes. Is there a way to correct this?
@HamzaAB
What you are asking about is known as GL-CUDA or GL-OpenCL interoperability, if you don't already know about it, which is the area Forge tries to address. You can look at the ComputeCopy.h header inside the Forge repository to understand how to do OpenGL interop.
Having said that, if you want to reset the transformation you applied to the line plot, there is a way: while holding the left Ctrl key, click the middle mouse button; that will reset the pan/zoom applied to the line plot. If you are facing some other issue with Forge's line plots, you may raise an issue here and we will try to look into it.
PS. I am one of the core developers of ArrayFire.
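For the CUDA backend specifically, the interop path looks roughly like the sketch below; the GL buffer vbo is assumed to already exist with glBufferData-allocated storage and the GL context must be current. The OpenCL and CPU backends need a different path, which is what ComputeCopy.h abstracts.

#include <arrayfire.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>   // also pulls in the GL headers it needs

// Copy an af::array into an existing OpenGL buffer without a host round-trip
// (CUDA backend only).
void copyToGLBuffer(af::array& A, GLuint vbo) {
    // Register the GL buffer with CUDA (in real code, do this once and cache it).
    cudaGraphicsResource* resource = nullptr;
    cudaGraphicsGLRegisterBuffer(&resource, vbo, cudaGraphicsRegisterFlagsWriteDiscard);

    // Map it to get a device pointer CUDA can write to.
    cudaGraphicsMapResources(1, &resource, 0);
    void*  glPtr   = nullptr;
    size_t glBytes = 0;
    cudaGraphicsResourceGetMappedPointer(&glPtr, &glBytes, resource);

    // Lock ArrayFire's device memory and copy device-to-device.
    const float* afPtr = A.device<float>();
    cudaMemcpy(glPtr, afPtr, A.bytes(), cudaMemcpyDeviceToDevice);
    A.unlock();   // hand the memory back to ArrayFire's memory manager

    cudaGraphicsUnmapResources(1, &resource, 0);
    cudaGraphicsUnregisterResource(resource);
}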

Perspective transform (warp) of an image

I'm stuck trying to get perspective transformation to work.
I want to draw an image so that it fits 4 given points. It's something like what you have in Photoshop, used to change the perspective of an image (see image below).
I have an image in byte array and I'm not using any additional libraries.
Everything I've found so far was either for OpenCV or didn't do what I wanted.
I found an open-source program, PhotoDemon, that does exactly what I want, and there is code for it. I spent many hours trying to get it to work, but it gives me completely weird results (second line in the image below).
Could someone provide me with some code, step-by-step math of what to do and how, or even just pseudo-code? I'm getting a little sick of this; it seems easy, but I need some help.
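For reference, the usual no-library approach is the square-to-quad projective mapping (Heckbert's formulation) combined with inverse mapping per destination pixel. A sketch, assuming an RGBA row-major byte array and corners given in the order top-left, top-right, bottom-right, bottom-left:

#include <cmath>
#include <cstdint>

// 3x3 homography stored row-major: [a b c; d e f; g h 1].
struct Homography { double m[9]; };

// Square-to-quad mapping: maps (u,v) in [0,1]x[0,1] onto the quad
// with corners (x[i], y[i]) in the order described above.
Homography squareToQuad(const double x[4], const double y[4]) {
    double dx1 = x[1] - x[2], dx2 = x[3] - x[2], dx3 = x[0] - x[1] + x[2] - x[3];
    double dy1 = y[1] - y[2], dy2 = y[3] - y[2], dy3 = y[0] - y[1] + y[2] - y[3];
    double a, b, c, d, e, f, g, h;
    if (std::abs(dx3) < 1e-12 && std::abs(dy3) < 1e-12) {           // affine case
        a = x[1] - x[0]; b = x[3] - x[0]; c = x[0];
        d = y[1] - y[0]; e = y[3] - y[0]; f = y[0];
        g = h = 0.0;
    } else {                                                        // projective case
        double den = dx1 * dy2 - dy1 * dx2;
        g = (dx3 * dy2 - dy3 * dx2) / den;
        h = (dx1 * dy3 - dy1 * dx3) / den;
        a = x[1] - x[0] + g * x[1];  b = x[3] - x[0] + h * x[3];  c = x[0];
        d = y[1] - y[0] + g * y[1];  e = y[3] - y[0] + h * y[3];  f = y[0];
    }
    return Homography{{a, b, c, d, e, f, g, h, 1.0}};
}

// Adjugate = inverse up to scale, which is all homogeneous coordinates need.
Homography adjugate(const Homography& H) {
    const double* m = H.m;
    return Homography{{
        m[4]*m[8] - m[5]*m[7],  m[2]*m[7] - m[1]*m[8],  m[1]*m[5] - m[2]*m[4],
        m[5]*m[6] - m[3]*m[8],  m[0]*m[8] - m[2]*m[6],  m[2]*m[3] - m[0]*m[5],
        m[3]*m[7] - m[4]*m[6],  m[1]*m[6] - m[0]*m[7],  m[0]*m[4] - m[1]*m[3]}};
}

// Draw src into dst so that src's corners land on the quad (qx[i], qy[i]).
// Inverse mapping: for every destination pixel, find where it comes from in
// the source, so the quad is filled without holes. Nearest-neighbour sampling.
void warpIntoQuad(const uint8_t* src, int srcW, int srcH,
                  uint8_t* dst, int dstW, int dstH,
                  const double qx[4], const double qy[4]) {
    Homography inv = adjugate(squareToQuad(qx, qy));   // dst pixel -> unit square
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            double u = inv.m[0]*x + inv.m[1]*y + inv.m[2];
            double v = inv.m[3]*x + inv.m[4]*y + inv.m[5];
            double w = inv.m[6]*x + inv.m[7]*y + inv.m[8];
            if (w == 0.0) continue;
            u /= w;  v /= w;
            if (u < 0.0 || u > 1.0 || v < 0.0 || v > 1.0) continue;  // outside the quad
            int sx = (int)(u * (srcW - 1) + 0.5);
            int sy = (int)(v * (srcH - 1) + 0.5);
            const uint8_t* s = src + 4 * (sy * srcW + sx);
            uint8_t*       d = dst + 4 * (y * dstW + x);
            d[0] = s[0]; d[1] = s[1]; d[2] = s[2]; d[3] = s[3];
        }
    }
}

For better quality, bilinear sampling can replace the nearest-neighbour lookup; the mapping itself stays the same.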

Detect type of scanned document and normalize it to given size

I'm trying to implement a program that will take a scanned (possibly rotated) document, such as an ID card, detect its type based on two or more image templates, and normalize it (de-rotate and resize it so it matches the template). Everything will be scanned, so luckily perspective is not a problem.
I have already tried a number of approaches with no success:
I tried using OpenCV's features2d to detect the template and findHomography to normalize it, but it fails very often. If I take a template, change it a little (different data/photo on the ID card) and rotate it by ~40 degrees, it usually fails, no matter what configuration of descriptors, detectors and matchers I use (a rough sketch of this pipeline appears after this list).
I also tried http://manpages.ubuntu.com/manpages/gutsy/man1/unpaper.1.html, which is a de-rotation tool, and then did normal matching, but unpaper doesn't handle rotation angles greater than 20 degrees well.
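For reference, a rough sketch of the features2d + findHomography pipeline from the first point (ORB with a ratio test; the thresholds are arbitrary starting values). Since the scans have no perspective, cv::estimateAffinePartial2D with cv::warpAffine may be a more constrained and therefore more robust alternative to a full homography.

#include <opencv2/opencv.hpp>
#include <vector>

// Warp a scanned image onto a reference template using ORB features and a
// robust homography, then resize it to the template's dimensions.
cv::Mat normalizeToTemplate(const cv::Mat& scan, const cv::Mat& templ) {
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);

    std::vector<cv::KeyPoint> kScan, kTempl;
    cv::Mat dScan, dTempl;
    orb->detectAndCompute(scan,  cv::noArray(), kScan,  dScan);
    orb->detectAndCompute(templ, cv::noArray(), kTempl, dTempl);

    // Hamming matcher with Lowe's ratio test to drop ambiguous matches.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(dScan, dTempl, knn, 2);

    std::vector<cv::Point2f> ptsScan, ptsTempl;
    for (const auto& m : knn) {
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
            ptsScan.push_back(kScan[m[0].queryIdx].pt);
            ptsTempl.push_back(kTempl[m[0].trainIdx].pt);
        }
    }
    if (ptsScan.size() < 4) return cv::Mat();   // not enough reliable matches

    // Robust homography scan -> template, then warp to the template's size.
    cv::Mat H = cv::findHomography(ptsScan, ptsTempl, cv::RANSAC, 5.0);
    if (H.empty()) return cv::Mat();

    cv::Mat normalized;
    cv::warpPerspective(scan, normalized, H, templ.size());
    return normalized;
}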
If there's a ready-made solution that would be really great; a commercial library (preferably C/C++ or a command-line tool) is also an option. I hate to admit it, but I fail miserably when I try to understand computer vision papers, so linking to papers unfortunately won't help me.
Thank you very much for your help!