(Kinect v2) Alternative Kinect Fusion Pipeline (texture mapping) - c++

I am trying a different pipeline, so far without success.
The idea is to use the classic pipeline (as in the Explorer example) but additionally to use the last color image for the texture.
So the idea (after clicking SAVE MESH):
Save the current color image as a BMP.
Get the current transformation [m_pVolume->GetCurrentWorldToCameraTransform(&m_worldToCameraTransform);]; let's call it M.
Transform every mesh vertex v into the camera-space coordinate system of that last frame (M * v).
The current m_pMapper refers to the latest frame, which is the one we want to use [m_pMapper->MapCameraPointToColorSpace(camPoint, &colorPoint);].
In theory I should now have a texture coordinate for every point of the fusion mesh, and I want to use them to export an OBJ file (with a texture, not only per-vertex color).
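Roughly, the per-vertex mapping I am doing looks like this (simplified sketch, error handling omitted; I am assuming the Matrix4 stores the translation in M41/M42/M43 with the row-vector convention, a 1920x1080 color frame, and that the OBJ vt coordinates need the V axis flipped):
Matrix4 M;          // filled by m_pVolume->GetCurrentWorldToCameraTransform(&M)
Vector3 v;          // one vertex of the fusion mesh (world space)
// world -> camera space of the last frame (M * v)
CameraSpacePoint camPoint;
camPoint.X = M.M11 * v.x + M.M21 * v.y + M.M31 * v.z + M.M41;
camPoint.Y = M.M12 * v.x + M.M22 * v.y + M.M32 * v.z + M.M42;
camPoint.Z = M.M13 * v.x + M.M23 * v.y + M.M33 * v.z + M.M43;
// camera space -> pixel in the latest color frame (can come back as -infinity if unmappable)
ColorSpacePoint colorPoint;
m_pMapper->MapCameraPointToColorSpace(camPoint, &colorPoint);
// normalize to OBJ "vt" coordinates (u from the left, v from the bottom)
float u = colorPoint.X / 1920.0f;
float t = 1.0f - colorPoint.Y / 1080.0f;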
What am I doing wrong?
The 3D transformations seem to be correct: when I visualize the resulting OBJ file in MeshLab I can see that the transformation is right, i.e. the world coordinate system matches the latest recorded camera position.
Only the texture is not mapped correctly.
I would be very happy if anyone could help me; I have been trying for a long time. :/
Thank you very much :)

Related

Unable to get textures to work in OpenGL in Common Lisp

I am building a simple Solar system model and trying to set textures on some spheres.
The geometry is properly generated, and I tried a couple different ways to generate the texture coordinates. At present I am relying on glu:quadric-texture for generating the coordinates when glu:sphere is called.
However, the textures never appear - objects are rendered in flat colors.
I went through several OpenGL guides and I do not think I am missing a step, but who knows.
Here is what is roughly happening:
call gl:enable :texture-2d to turn on textures
load images using cl-jpeg
call gl:bind-texture
copy data from image using gl:tex-image-2d
generate texture ids with gl:gen-textures. Also tried generating ids one by one instead of all at once, which had no effect.
during drawing, create a new quadric, enable texture coordinate generation, and bind the texture before generating the quadric points:
(let ((q (glu:new-quadric)))
  (if (planet-state-texture-id ps)
      (progn (gl:enable :texture-gen-s)
             (gl:enable :texture-gen-t)
             (glu:quadric-texture q :true)
             (gl:bind-texture :texture-2d planet-texture-id))
      (glu:quadric-texture q :false))
  (glu:sphere q
              planet-diameter
              *sphere-resolution*
              *sphere-resolution*))
I also tried a more manual method of texture coordinates generation, which had no effect.
Out of ideas here…
When the program runs, I can see that the textures are loaded and the texture ids are reserved; it prints:
loading texture from textures/2k_neptune.jpg with id 1919249769
Loaded data. Image dimensions: 1024x2048
I don't know if you've discovered a solution to your problem, but after creating a test image and modifying some of your code, I was able to get the texture applied to the sphere.
The problem is that you are attempting to upload textures to the GPU before you've enabled texturing: (gl:enable :texture-2d) has to be called before you start handling texture/image data.
I'd recommend moving the let* block with the planets-init that is in the main function to after 'setup-gl', and also moving the 'format' call that prints the planet data, so it runs without raising an error.
My recommendation is something like:
(let ((camera ...
...
(setup-gl ...)
(let* ((planets...
...
(format ... planet-state)
In your draw-planet function, you'll want to add (gl:bind-texture :texture-2d 0) at the end of it so that the texture isn't used for another object, like the orbital path.
As is, the (gl:color 1.0 ...) before the (glu:quadric-texture ...) will modify the color of the rendered object, so it may not look like what you're expecting it to look like.
Edit: I should've clarified this, but as your code stands it goes
initialize-planets > make-textures > enable-textures > render
When it should be
enable-textures > init-planets > make-textures > render
You're correct that you're not missing a step; the steps in your code are just misordered.
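For reference, here is the same ordering written out in plain C/C++ fixed-function OpenGL; the cl-opengl and cl-glu calls map directly onto these, and the function names and parameters here are only illustrative:
#include <GL/gl.h>
#include <GL/glu.h>
GLuint setupPlanetTexture(const void* pixels, int width, int height)
{
    glEnable(GL_TEXTURE_2D);                   // 1. enable texturing first
    GLuint tex;
    glGenTextures(1, &tex);                    // 2. reserve an id
    glBindTexture(GL_TEXTURE_2D, tex);         // 3. bind it
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,     // 4. only now upload the pixels
                 width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
void drawTexturedSphere(GLuint tex, double radius, int resolution)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    GLUquadric* q = gluNewQuadric();
    gluQuadricTexture(q, GL_TRUE);             // let GLU generate the texture coords
    gluSphere(q, radius, resolution, resolution);
    gluDeleteQuadric(q);
    glBindTexture(GL_TEXTURE_2D, 0);           // unbind so later objects stay untextured
}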

Getting 3D world coordinate from (x,y) pixel coordinates

I'm absolutely new to the ROS/Gazebo world; this is probably a simple question, but I cannot find a good answer.
I have a simulated depth camera (Kinect) in a Gazebo scene. After some elaborations, I get a point of interest in the RGB image in pixel coordinates, and I want to retrieve its 3D coordinates in the world frame.
I can't understand how to do that.
I have tried compensating for the distortion given by the CameraInfo msg. I have tried using a PointCloud with the pcl library, retrieving the point as cloud.at(x,y).
In both cases, the coordinates are not correct (I have put a small sphere at the coordinates given out by the program to check whether they are correct or not).
Any help would be very much appreciated. Thank you very much.
EDIT:
Starting from the PointCloud, I try to find the coordinates of the point by doing something like:
point = cloud.at(xInPixel, yInPixel);
point.x = point.x + cameraPos.x;
point.y = point.y + cameraPos.y;
point.z = point.z - cameraPos.z;
but the x, y, z coordinates I get in point do not seem to be correct.
The camera has a pitch angle of pi/2, so it points at the ground.
I am clearly missing something.
I assume you've seen the gazebo examples for the kinect (brief, full). You can get, as topics, the raw image, raw depth, and calculated pointcloud (by setting them in the config):
<imageTopicName>/camera/color/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
Unless you need to do your own processing on the image_raw topics for the rgb and depth frames (e.g. running ML over the rgb frame and finding the corresponding depth point via the camera_infos), the pointcloud topic should be sufficient - it's the same as the pcl point cloud in C++, if you include the right headers.
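For example, a minimal listener that reads the organized cloud directly as a PCL type (topic name and the pixel indices are placeholders):
#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>
typedef pcl::PointCloud<pcl::PointXYZ> Cloud;
void cloudCallback(const Cloud::ConstPtr& cloud)
{
    // organized cloud: at(column, row) matches the depth image pixel grid
    const pcl::PointXYZ& p = cloud->at(320, 240);
    ROS_INFO("point in the camera frame: %f %f %f", p.x, p.y, p.z);
}
int main(int argc, char** argv)
{
    ros::init(argc, argv, "cloud_listener");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
    ros::spin();
    return 0;
}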
Edit (in response):
There's a magical thing in ROS called tf/tf2. Your pointcloud, if you look at msg.header.frame_id, says something like "camera", indicating it's in the camera frame. tf, like the messaging system in ROS, works in the background: it listens for transformations from one frame of reference to another, and you can then transform/query the data in whatever frame you need. For example, if the camera is mounted at a rotation relative to your robot, you can specify a static transform in your launch file. It seems like you're trying to do the transformation yourself, but you can make tf do it for you; this lets you easily figure out where points are in the world/map frame versus the robot/base_link frame or the actuator/camera/etc. frame.
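As a rough sketch of letting tf2 do that transform for the picked point (frame names are examples; a tf2_ros::TransformListener has to be kept alive somewhere in the node, and "world" should be whatever your fixed frame is called):
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>
#include <pcl/point_types.h>
geometry_msgs::PointStamped toWorld(const pcl::PointXYZ& p,
                                    const std::string& cloud_frame,
                                    tf2_ros::Buffer& tfBuffer)
{
    geometry_msgs::PointStamped camPoint;
    camPoint.header.frame_id = cloud_frame;   // whatever msg.header.frame_id says
    camPoint.header.stamp = ros::Time(0);     // "latest available transform"
    camPoint.point.x = p.x;
    camPoint.point.y = p.y;
    camPoint.point.z = p.z;
    // tf2 looks up camera -> world (through any intermediate frames) and applies it
    return tfBuffer.transform(camPoint, "world", ros::Duration(1.0));
}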
I would also look at these ros wiki questions which demo a few different ways to do this, depending on what you want: ex1, ex2, ex3

Clip Unstructured grid and keep arrays data

I'm trying to clip a vtkUnstructuredGrid using vtkClipDataSet. The problem is that after I clip, the resulting vtkUnstructuredGrid doesn't have the point/cell data (the arrays).
This is my code:
vtkSmartPointer<vtkUnstructuredGrid> model = reader->GetOutput();
// this shows that model has one point data array called "Displacements" (a 3-component vector array)
model->Print(std::cout);
// Plane to cut it
vtkSmartPointer<vtkPlane> plane = vtkSmartPointer<vtkPlane>::New();
plane->SetOrigin(0.0,0.0,0.0); plane->SetNormal(1,0,0);
// Clip data
vtkSmartPointer<vtkClipDataSet> clipDataSet = vtkSmartPointer<vtkClipDataSet>::New();
clipDataSet->SetClipFunction(plane);
clipDataSet->SetInputConnection(model->GetProducerPort());
clipDataSet->InsideOutOn();
clipDataSet->GenerateClippedOutputOn();
//PROBLEM HERE. The print shows that there aren't any arrays on the output data
clipDataSet->GetOutput()->Print(std::cout);
I need the output grid to have the arrays, because I would like to display the values on the resulting grid.
For example, if the data are scalars, I would like to display isovalues on the clipped mesh. If the data are vectorial, I would like to deform (warp) the mesh in the direction of the data vectors.
Here is an example in ParaView of what I would like to do: the solid is the original mesh and the wireframe mesh is the deformed one.
I'm using VTK 5.10 under C++ (Windows 8.1 64 bits, if that helps).
Thank you!
PS: I tried asking this on the VTKusers list, but I got no answer.
OK, I found the error after the comment from user lib: I was missing the call to Update() after setting the input connection.
Thank you all.
// Clip data
vtkSmartPointer<vtkClipDataSet> clipDataSet = vtkSmartPointer<vtkClipDataSet>::New();
clipDataSet->SetClipFunction(plane);
clipDataSet->SetInputConnection(model->GetProducerPort());
clipDataSet->InsideOutOn();
clipDataSet->GenerateClippedOutputOn();
clipDataSet->Update(); // THIS is the solution
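With the Update() in place the arrays are carried through, so the data can now be read from the clipped output, for example (sketch):
// the output now lists the "Displacements" point data array
clipDataSet->GetOutput()->Print(std::cout);
vtkDataArray* displacements =
    clipDataSet->GetOutput()->GetPointData()->GetArray("Displacements");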

Get correspondence between flat and depth images from .oni file

I have a video captured from a Kinect in an .oni file.
I can extract an RGB image from it and then find a feature on the image.
What I need to do then is to find the point in 3D that corresponds to my point on the 2D image.
Is this possible?
(I am going to be using C++)
It is possible. In OpenNI 2 there is a class called CoordinateConverter. There is no direct function from RGB to world coordinates, though, but if you have the correspondence in the depth image this class should work. Also, I am almost sure that if image registration is enabled, an (x, y) point with valid depth in the depth image is the same (x, y) as in the color image.
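A small sketch of what I mean (OpenNI 2, C++; the helper name is mine, and it assumes registration has been enabled with device.setImageRegistrationMode(openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR) so that the color pixel (x, y) lines up with the same (x, y) in the depth map):
#include <OpenNI.h>
// map a pixel found on the registered RGB image to a 3D point (millimeters, camera frame)
bool pixelToWorld(const openni::VideoStream& depthStream,
                  const openni::VideoFrameRef& depthFrame,
                  int x, int y, float& wx, float& wy, float& wz)
{
    const openni::DepthPixel* depth =
        static_cast<const openni::DepthPixel*>(depthFrame.getData());
    openni::DepthPixel z = depth[y * depthFrame.getWidth() + x];
    if (z == 0)
        return false;   // no valid depth at that pixel
    return openni::CoordinateConverter::convertDepthToWorld(
               depthStream, x, y, z, &wx, &wy, &wz) == openni::STATUS_OK;
}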
I hope this helps; if you use OpenNI 1, just tell me.

How to create a depth map from PointGrey BumbleBee2 stereo camera using Triclops and FlyCapture SDKs?

I've got the BumbleBee 2 stereo camera and the two mentioned SDKs.
I've managed to capture video from it in my program, rectify the stereo images and get a disparity map. The next thing I'd like to have is a depth map similar to the one the Kinect gives.
The Triclops documentation is rather short: it only references functions, without a typical workflow description. The workflow is described in the examples.
Up to now I've found two relevant functions: the family of triclopsRCDxxToXYZ() functions and the triclopsExtractImage3d() function.
Functions from the first family calculate the x, y and z coordinates for a single pixel. The z coordinate corresponds perfectly to the depth in meters. However, to use this function I have to write two nested loops, as shown in the stereo3dpoints example. That gives too much overhead, because each call also returns two coordinates I don't need.
The second function, triclopsExtractImage3d(), always returns the error TriclopsErrorInvalidParameter. The documentation says only that "there is a geometry mismatch between the context and the TriclopsImage3d", which is not clear to me.
The examples in the Triclops 3.3.1 SDK do not show how to use it. Google brings up an example from Triclops SDK 3.2, which is absent in 3.3.1.
I've tried adding lines 253-273 from the link above to the current stereo3dpoints example and got that same error.
Does anyone have experience with it?
Is it valid to use triclopsExtractImage3d() or is it obsolete?
I also tried plotting disparity vs. the z values obtained from triclopsRCDxxToXYZ().
The plot shows an almost exact inverse proportionality, that is z = k / disparity. But k is not constant across the image; it varies from approximately 2.5e-5 to 1.4e-3, i.e. two orders of magnitude. Therefore, it is incorrect to calculate this value once and use it forever.
Maybe it is a bit too late and you have already figured it out yourself, but:
To use triclopsExtractImage3d you have to create a TriclopsImage3d first.
TriclopsImage3d *depthImage;
triclopsCreateImage3d(triclopsContext, &depthImage);   // allocate an image matching the context geometry
triclopsExtractImage3d(triclopsContext, depthImage);   // fill it with the per-pixel XYZ points
// ... use depthImage here ...
triclopsDestroyImage3d(&depthImage);                   // free it when done