How to project point cloud on a plane with OpenGL? [closed] - c++

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
I am trying to display a point cloud and its projection, with OpenGL, onto the plane normal to the line connecting the two most distant points. I have succeeded in presenting the point cloud in the scene with an orthonormal system. I have found the two farthest points in the cloud, and I have found the plane onto which to project.
I tried to make this projection, but in vain.
I tried with the transformation matrices such as GL_PROJECTION, but in vain.
Can someone give me a hand?

You can calculate the coordinates of the projected points by mathematical formulas, then draw them with OpenGL.
Take a look at this link

You can drop a perpendicular from each point to your found plane, and compute its foot point on the plane. Then draw another point there.


Need help on visual odometry formula [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 11 months ago.
Hey there, I am trying to build a real-time visual odometry system for a monocular camera.
Now I'm looking for an equation to describe the movement of 3D points as 2D vectors. While researching I came across a very interesting-looking equation (I'm referring to page 22). It basically makes a simplification under the assumption of a relatively small time step. But now I'm struggling with the image coordinates x and y. It's said that x would be something like x=(px-px0) and y=(py-py0). If I understand it right, p0 is the center of rotation. But if that is the case, the whole formula makes no sense for my case, because I would need prior knowledge of the center of rotation, which in turn depends on the translation.
So maybe someone can help me understand it, or point me to a better way to do it.
To use this equation, you must have calibrated your camera (with a pinhole model), so you have a set of distortion coefficients, a focal distance and the principal point, which is the intersection of the optical axis with the image plane, as illustrated here: http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html.
In the equation you mention, x and y coordinates are in pixels after distortion correction and relative to the center of projection, not the center of rotation. So, the px0 and py0 you are looking for, are the coordinates of the principal point, that is, cx0 and cy0 using the naming convention of the link above.

How to interpolate 3D points computed from a Kinect to get a ball trajectory? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm getting 3D points from the Kinect via OpenNI. Let's say I have :
X = [93.7819,76.8463,208.386,322.069,437.946,669.999]
Y = [-260.147,-250.011,-230.717,-211.104,-195.538,-189.851]
Z = [958,942,950,945,940,955]
Those are the points I was able to capture from my moving ball. Now I would like to compute something like an interpolation or a least-squares fit through those points to determine the trajectory of the ball. I could then predict where the ball is going and where it will hit the wall.
I'm not sure which mathematical tool to use or how to translate it into C++. I've seen lots of resources for 2D interpolation (cubic, ...) or least squares, but it seems harder for 3D, or maybe I missed something.
Best regards
EDIT: the question was marked as too broad by moderators, so I will reduce the scope based on the responses I got: if I use 2D polynomial regression on the three planes separately (thx yephick), what can I use in C++ to implement it?
For what you are interested in, there's hardly any difference between 3D and 2D.
All you do is work with the planes independently (the XY plane, the XZ plane, and the YZ plane). This reduces the complexity significantly and allows you to "draw" much simpler diagrams on a piece of paper when you work on the problem.
Once you have figured out the coordinates in each of the planes, it is quite trivial to reconcile them into 3D space, and doing so provides the added benefit of error checking. For example, an X coordinate found in the XY plane should match (or be "close enough" to) the same X coordinate found in the XZ plane.
If the accuracy is not too critical, you don't even need to go higher than the first power in the polynomial approximation, using just a plain old arithmetic average of two consecutive points.
You can use spline interpolation to create a smooth trajectory.
If you're not in the "mood" to implement it yourself, a quick Google search will turn up open-source libraries such as SINTEF's SISL that have such functionality.

Detect U Shaped Edges in an Image [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm trying to detect the location of a fingertip in an image. I've been able to crop out a region of the image that must contain a fingertip, and to extract the edges using the Canny edge detector. However, I'm stuck. Since my project description says I can't use skin color for detection, I cannot find the exact contour of the finger and will have to try to separate the fingertip using edges alone. Right now I'm thinking that since the finger has a curved arch / letter-U shape, maybe that could be used for detection. But since it has to be rotation- and scale-invariant, most algorithms I've found so far are not up to it. Does anyone have an idea of how to do this? Thanks to anyone who responds!
This is the result I have now. I want to put a bounding box around the index fingertip, or the highest fingertip, whichever is the easiest.
You may view the tip of the U as a corner and try a corner detection method such as the Förstner algorithm, which will locate a corner with sub-pixel accuracy, or the Harris corner detector, which has an implementation included in OpenCV.
There is a very clear and straightforward lecture on the Harris corner detector that I would like to share with you.
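In OpenCV you would call `cv::cornerHarris` on the grayscale image. To illustrate what that call computes, here is a toy Harris-response sketch of my own (plain C++, central-difference gradients, a 3x3 summation window, and no Gaussian weighting, so it is illustrative only and not the OpenCV implementation):

```cpp
#include <vector>

// Harris response R = det(M) - k * trace(M)^2 per pixel, where M is the
// structure tensor summed over a 3x3 window. img is row-major grayscale.
std::vector<double> harrisResponse(const std::vector<double>& img,
                                   int w, int h, double k = 0.04) {
    std::vector<double> ix(img.size(), 0.0), iy(img.size(), 0.0);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            // central-difference image gradients
            ix[y * w + x] = (img[y * w + x + 1] - img[y * w + x - 1]) * 0.5;
            iy[y * w + x] = (img[(y + 1) * w + x] - img[(y - 1) * w + x]) * 0.5;
        }
    std::vector<double> R(img.size(), 0.0);
    for (int y = 2; y < h - 2; ++y)
        for (int x = 2; x < w - 2; ++x) {
            double a = 0.0, b = 0.0, c = 0.0;  // windowed tensor entries
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    const double gx = ix[(y + dy) * w + x + dx];
                    const double gy = iy[(y + dy) * w + x + dx];
                    a += gx * gx;
                    b += gy * gy;
                    c += gx * gy;
                }
            R[y * w + x] = (a * b - c * c) - k * (a + b) * (a + b);
        }
    return R;
}
```

Along a straight edge the gradients are all parallel, so det(M) is near zero and R is not positive; at a U-tip (a corner) both gradient directions are present, so R peaks there, which is exactly why the answer above suggests a corner detector.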

Matlab's TriScatteredInterp implementation in C++ [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I have x, y, z 3D points in an array of size N x 3. As they are scattered data points, I need to resample them onto an equally spaced grid. The Matlab implementation uses TriScatteredInterp, as shown in the link. I need to implement it in C++, plot the data, and save the result as a PNG file. I searched and found that I could implement it with the PCL library. As I am not familiar with PCL, how can I approach this problem using PCL? Can I have a sample program?
Thanks
I don't understand your exact needs for the equally spaced grid data. Looking at the Matlab function, I believe you would like to do the following:
1) Perform surface reconstruction on the scattered data points.
In PCL you should be able to do this by following the example:
Greedy Triangulation tutorial
2) Show the surface in a viewer.
This step could be realized using the VTK viewer. An example is shown in:
VTK mesh viewing
3) Save the image of the viewer as a PNG file.
The last step could also be realized using the VTK viewer. An example can be found in:
VTKviewer save as PNG example
Now I understand how TriScatteredInterp works in Matlab.
We have x, y, z points in an N x 3 array. From all these points, we need to build a Delaunay triangulation in C++. That is easy.
Then, for each of your desired grid points (x', y'), search for the triangle in which (x', y') is located, and do barycentric interpolation within that triangle as shown in the link. You will get a z' for each (x', y').
That is all we need to do in C++ for TriScatteredInterp.
You will get a matrix of x', y', z'; then I follow @Deepfreeze's idea for plotting using PCL. We can also use OpenGL for plotting.
It doesn't stop at the Delaunay triangulation; we still need to do the interpolation.

Blender: Impossible Cube [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm working on a graphics project trying to create an impossible cube in 3D. An impossible cube looks like this:
The trick behind it is that two of the edges are 'cut' and the picture is taken from a specific angle to give the illusion of impossibility.
Well, I'm trying to make this, but instead of a static image I want to be able to animate it (rotate it around) while maintaining the impossible properties.
I have managed to make a cube in blender as you can see in the screenshot below:
I would like to hear your suggestions as to how I can achieve the desired effect. One idea would be to make transparent the portion of an edge that has an edge (or more) behind it, so that every time the camera angle changes, the transparent patch moves along with it.
It doesn't have to be done in Blender exclusively, so any solutions in OpenGL etc. are welcome.
To give you an idea of what the end result should be, this is a link to such an illustration:
3D Impossible Cube Illusion Animation
It's impossible (heh). Try to imagine rotating the cube so that the impossibly-in-front bit moves to the left. As soon as it would "cross" the current leftmost edge, the two properties "it's in front" and "it's in the back" can no longer be fulfilled simultaneously.
If you have back-face culling enabled but depth testing disabled, and you draw the primitives in the right order, you should get the Escher cube without any need for cuts. This should be relatively easy to animate.