Is it possible to make a point-vector raycast in PCL? - c++

I have a 3D world of points. I have a point a = [x, y, z] in it and a direction (azimuthal angle θ and polar angle φ). I want to get the point b = [x2, y2, z2] where a ray sent from point a in that direction would stop (only from one point and only for one direction). How can I do such a thing in PCL? Is it possible? (I see a ray caster class, but it seems to work on the whole world, not point to point.)

I think the OctreePointCloudSearch class might help you a bit more. Have a quick look at the OctreePointCloudSearch::getIntersectedVoxelIndices method: once your point cloud is organized in an octree, it allows you to specify an origin and a direction for the ray used for raycasting. In your case, the origin would be the point a, and the direction would be obtained from the azimuthal and polar angles via the standard spherical-to-Cartesian conversion.
The function returns the indices of the points within the intersected voxels.
If you search for that class name you can easily find a good number of working examples (one such example casts a ray from each point of the cloud toward the camera and checks for occlusions).
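A minimal sketch of that approach (the octree resolution here is an arbitrary value, and the angle convention assumes the polar angle is measured from +Z):

```cpp
#include <cmath>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/octree/octree_search.h>

// Sketch: return the first point hit by a ray cast from `a` in the direction
// given by azimuthal angle theta and polar angle phi (phi measured from +Z).
// The octree resolution (0.05) is an illustrative value -- tune it to your data.
bool raycastFirstHit(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                     const Eigen::Vector3f& a, float theta, float phi,
                     pcl::PointXYZ& b)
{
  pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(0.05f);
  octree.setInputCloud(cloud);
  octree.addPointsFromInputCloud();

  // Standard spherical-to-Cartesian conversion for the ray direction.
  const Eigen::Vector3f dir(std::sin(phi) * std::cos(theta),
                            std::sin(phi) * std::sin(theta),
                            std::cos(phi));

  std::vector<int> indices;
  octree.getIntersectedVoxelIndices(a, dir, indices);
  if (indices.empty())
    return false;             // the ray hit nothing
  b = (*cloud)[indices[0]];   // index from the first intersected voxel along the ray
  return true;
}
```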

Related

How to pick points inside a circle in OpenGL?

For a point cloud, I am trying to pick points inside a circle (which represents the mouse's position).
Currently I do this by:
1. casting a ray from the circle center,
2. calculating the closest point to the ray,
3. calculating the distance between that closest point and the other points in the point cloud, and then selecting the points whose distance is < r (where r is the radius of the circle).
But I found some problems with this method:
Sometimes the closest point to the ray is not what I want, since a point cloud has two sides, a front side and a back side. Just like in the following image, what I expect is the blue one, but the closest point to the ray might be the red one.
This method is also slow.
Is there a better way to do this?
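For reference, the per-point test in step 3 reduces to a point-to-ray distance computation; here is a minimal sketch (the function and variable names are illustrative):

```cpp
#include <limits>
#include <Eigen/Dense>

// Perpendicular distance from point p to the ray (origin o, unit direction d).
// Points behind the ray origin are reported as infinitely far away.
float distanceToRay(const Eigen::Vector3f& p,
                    const Eigen::Vector3f& o,
                    const Eigen::Vector3f& d)
{
  const Eigen::Vector3f v = p - o;
  const float t = v.dot(d);          // signed distance along the ray
  if (t < 0.0f)
    return std::numeric_limits<float>::max();
  return (v - t * d).norm();         // distance to the closest point on the ray
}
```

One way to avoid grabbing back-side points is to select, among the points with distance < r, the one with the smallest t (depth along the ray) rather than the smallest perpendicular distance. For speed, a spatial index (octree or k-d tree) avoids testing every point against the ray.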

Duplicate points along NURBS curve

In my current project I have implemented NURBS curves, and at the beginning of the curve I have some 3D points which are all located in the normal plane of the point at u = 0.0. Now I want to copy these points to other locations on the curve (e.g. u = 0.5) to create some kind of extrude/sweep mechanism. My theoretical approach is to create a local coordinate system at the point u = 0.0 and to calculate the coordinates of every point in relation to this system. Then I can create local coordinate systems at the desired points and place the points there. My problem is that with the first derivative of the NURBS curve I can get the tangent and therefore the normal plane of the point/system (local X direction), but I don't know how to orient the system. My first idea was to take the second derivative of the NURBS curve and use it to calculate the local Y and Z axes of the system, but the results of the second derivative do not seem to be suitable for this approach.
Is there a common approach to solve this problem?
As an additional question, I am wondering how to dictate the tangent vector at a given control point, for example the tangent at the first control point. Currently I solve this by dictating the position of the second control point, which does not seem very elegant.
We solved the same problem using this approach:
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/Computation-of-rotation-minimizing-frames.pdf
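For reference, the core of that paper (the "double reflection" method) can be sketched roughly as follows, assuming you can sample curve points x[i] and unit tangents t[i]:

```cpp
#include <cstddef>
#include <vector>
#include <Eigen/Dense>

// Double reflection method: propagate a reference normal r along the curve
// samples so that the resulting frames rotate as little as possible.
// Assumes distinct consecutive sample points and unit-length tangents.
std::vector<Eigen::Vector3d> rotationMinimizingNormals(
    const std::vector<Eigen::Vector3d>& x,   // sampled curve points
    const std::vector<Eigen::Vector3d>& t,   // unit tangents at those points
    const Eigen::Vector3d& r0)               // initial normal, orthogonal to t[0]
{
  std::vector<Eigen::Vector3d> r(x.size());
  r[0] = r0;
  for (std::size_t i = 0; i + 1 < x.size(); ++i) {
    const Eigen::Vector3d v1 = x[i + 1] - x[i];   // first reflection plane
    const double c1 = v1.dot(v1);
    const Eigen::Vector3d rL = r[i] - (2.0 / c1) * v1.dot(r[i]) * v1;
    const Eigen::Vector3d tL = t[i] - (2.0 / c1) * v1.dot(t[i]) * v1;
    const Eigen::Vector3d v2 = t[i + 1] - tL;     // second reflection plane
    const double c2 = v2.dot(v2);
    r[i + 1] = rL - (2.0 / c2) * v2.dot(rL) * v2;
  }
  return r;
}
```

The local frame at sample i is then (t[i], r[i], t[i].cross(r[i])).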
It looks like you would like to find a local coordinate system at any given point on the NURBS curve. If this is the case, the Frenet frame is the typical choice.
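A minimal sketch of that frame, assuming your NURBS evaluator supplies the first two derivatives C'(u) and C''(u) (called d1 and d2 here):

```cpp
#include <Eigen/Dense>

// Frenet frame from the first two curve derivatives C'(u) and C''(u).
// Undefined where C' and C'' are parallel (i.e. where curvature is ~0).
void frenetFrame(const Eigen::Vector3d& d1, const Eigen::Vector3d& d2,
                 Eigen::Vector3d& T, Eigen::Vector3d& N, Eigen::Vector3d& B)
{
  T = d1.normalized();            // tangent: your local X direction
  B = d1.cross(d2).normalized();  // binormal
  N = B.cross(T);                 // normal; (T, N, B) is right-handed
}
```

Note that the Frenet frame is undefined where the curvature vanishes and can flip across inflection points, which is one reason the rotation minimizing frames from the other answer are often preferred for sweeps.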
As for the issue of the "tangent vector of a given control point": since control points in general do not lie on the NURBS curve, a control point does not have a tangent vector. If you really need one for some special reason, you can use the tangent vector at the point on the curve that is closest to the control point.

Need help understanding the Perspective-Three-Point problem

I'm following this explanation on the P3P problem and have a few questions.
In the heading labeled Section 1 they project the image plane points onto a unit sphere. I'm not sure why they do this; is it to simulate a camera lens? I know in OpenCV, we first compute the intrinsics of the camera and factor them into solvePnP. Is this unit sphere serving a similar purpose?
Also in Section 1, where did $u'_x$, $u'_y$, and $u'_z$ come from, and what are they? If we are projecting onto a 2D plane, then why do we need the third component? I know the standard answer is "because homogeneous coordinates", but I can't seem to find an explanation as to why we use them or what they really are.
Also in Section 1 what does "normalize using L2 norm" mean, and what did this step accomplish?
I'm hoping if I understand Section 1, I can understand the notation in the following sections.
Thanks!
Here are some hints:
The projection onto the unit sphere has nothing to do with the camera lens. It is just a mathematical transformation intended to simplify the P3P equation system (whose solutions we are trying to compute).
$u'_x$ and $u'_y$ are the coordinates of $(u,v) - P$ (here $P=(c_x, c_y)$), normalized by the focal distances $f_x$ and $f_y$. The subtraction of the camera optical center $P$ is a translation of the origin to this point. The introduction of the $z$ coordinate $u'_z=1$ moves the 2D point $(u'_x, u'_y)$ to the 3D plane defined by the equation $z=1$ (the 3D plane parallel to the $xy$ plane). Note that by moving points to the plane $z=1$, you can now better visualize them as the intersections with that plane of the 3D lines that pass through $P$ and through them. In other words, these points become the projections onto a 2D plane of 3D points located somewhere on those lines (well, not merely "somewhere", but at the focal distance, which has now been "normalized" to 1 after dividing by $f_x$ and $f_y$). Again, all of these transformations are intended to make the equations easier to solve.
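In symbols, the construction just described is (a sketch using OpenCV-style intrinsics $f_x, f_y, c_x, c_y$ for a pixel $(u, v)$):

$$u'_x = \frac{u - c_x}{f_x}, \qquad u'_y = \frac{v - c_y}{f_y}, \qquad u'_z = 1, \qquad \hat{u} = \frac{(u'_x,\, u'_y,\, u'_z)}{\sqrt{u'^2_x + u'^2_y + u'^2_z}}$$

The division by the $L_2$ norm in the last step is what moves the point from the plane $z = 1$ onto the unit sphere.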
The so-called $L_2$ norm is nothing but the usual distance that comes from the Pythagorean theorem ($a^2 + b^2 = c^2$), except that here it is used to measure distances between points in 3D space: for a vector $(a, b, c)$ it is $\sqrt{a^2 + b^2 + c^2}$.

Quaternion rotation to latitude/longitude

TL;DR
I have a quaternion representing the orientation of a sphere (an Earth globe). From the quaternion I wish to derive a latitude/longitude. I can visualize the process in my mind, but I am weak with the math (matrices/quaternions) and not much better with the code (still learning OpenGL/GLM). How can I achieve this? This is for use in OpenGL using C++ and the GLM library.
Long Version
I am making a mapping program based on a globe of the Earth - not unlike Google Earth, but for a customized purpose that GE cannot be adapted to.
I'm doing this in C++ using OpenGL with the GLM library.
I have successfully coded the sphere and am using a quaternion directly to represent its orientation. No Euler angles involved. I can rotate the globe using mouse motions, thus rotating the globe on arbitrary axes depending on the current viewpoint and orientation.
However, I would like to get a latitude and longitude of a point on the sphere, not only for the user, but for some internal program use as well.
I can visualize that this MUST be possible. Imagine a sphere in world space with no rotations applied. Assuming OpenGL's right hand rule, the north pole points up the Y axis with the equator parallel on the X/Z plane. The latitude/longitude up the Y axis is thus 90N and something else E/W (degenerate). The prime meridian would be on the +Z axis.
If the globe/sphere is rotated arbitrarily the globe's north pole is now somewhere else. This point can be mapped to a latitude/longitude of the original sphere before rotation. Imagine two overlaying spheres, one the globe which is rotated, and the other a fixed reference.
(Actually, it would be in reverse. The latitude/longitude I seek is the point on the rotated sphere that correlates to the north pole of the unrotated reference sphere)
In my mind it seems that somehow I should be able to get the vector of the Earth globe's orientation axis from its quaternion and compare it to that of the unrotated sphere. But I just can't seem to grok how to do that. (I guess I still don't fully understand mats and quats and have only blundered into my success so far.)
I'm hoping to achieve this without needing a crash course in the deep math. I'm looking for a solution/understanding/guidance from the point of view of being able to use the GLM library to achieve my goal. Ideally a code sample with some general explanation. I learn best from example.
FYI, in my code the rotation of the globe/sphere is totally independent of the camera (which does use Euler angles) so it can be moved independently. So I can't use anything from the camera to determine this.
Maybe you could try to follow the link (i.e. use Boost ;)) from the thread Longitude / Latitude to quaternion and then deduce the inverse of that conversion.
Or you could add an intermediate step by converting your quaternion into Euler angles.
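Alternatively, a minimal GLM sketch of the idea in the long version (assuming q is the globe's orientation quaternion, +Y is the unrotated north pole, and the prime meridian lies on +Z, as described in the question):

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Which lat/long of the (rotated) globe currently sits under the fixed
// reference north pole? Undo the globe's rotation and read off the angles.
void poleLatLong(const glm::quat& q, float& latDeg, float& lonDeg)
{
  const glm::vec3 northPole(0.0f, 1.0f, 0.0f);      // reference sphere's pole
  const glm::vec3 p = glm::inverse(q) * northPole;  // back into the globe's model space
  latDeg = glm::degrees(std::asin(glm::clamp(p.y, -1.0f, 1.0f)));
  lonDeg = glm::degrees(std::atan2(p.x, p.z));      // 0 at +Z (prime meridian)
}
```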

My plane is not vertical: how to update the coordinates of a point cloud to lie on a vertical plane

I have a bunch of points lying on a vertical plane. In reality this plane should be exactly vertical, but when I visualize the point cloud there is a slight inclination (nearly 2 degrees) from the vertical. At the moment, I can only calculate this inclination. Concerning other errors, I assume there are no shifts or anything like that.
So, I want to update the coordinates of my point data so that they lie on the vertical plane. I think I should do some kind of transformation. It may just be a rotation about the X-axis, but I am not sure what it would be.
I guess you understood my question. Honestly, I am poor at mathematics, so please let me know how to update my point coordinates to lie on the exact vertical plane.
Note: As I am implementing this in C++ and there are many programmers who have sound knowledge of these things, I am posting this question under c++.
UPDATES
To say exactly what I have done so far:
I have point cloud data representing a vertical object plus its surroundings. (The data is collected by a moving scanner and may have axis deviations from the correct world axes.) The problem is, I cannot say for certain whether there is an error in my data or not. Therefore, I checked this with a vertical planar object (which is the dominant object in my data as well). In reality that plane is truly vertical, but when I fit a plane after removing outliers, the fitted plane is not truly vertical and has nearly a 2 degree inclination. Therefore, I suspect that my data has some error, so I want to update my whole point cloud (including the points on the plane and the points representing other objects) in a way that lays those particular planar points exactly on the vertical plane. Then, I guess, all the points will be updated to their correct positions as in reality. That is, all (x, y, z) coordinates should be updated.
As an example please refer the below figure.
The left side represents the original point cloud (as you can see, the points themselves are not vertical); the black line shows the vertical plane I fitted, and the red line is the zenith line. As you can see, the fitted plane has an inclination from the vertical.
So, I want to update my whole data set as shown in the right figure. Then, after updating, if I fit a plane again (removing outliers), it should be exactly parallel to the zenith line. Please help me.
I may be able to help you out, considering I worked with planes recently. First of all, how come the points aren't coplanar from the get go? I'd make the points coplanar in the first place instead of them being at an inclination (from what origin?), and then having to fix them. Also, having the points be coplanar on your first go would increase efficiency.
Sorry if this isn't the answer you're looking for, but I need more information before I can help you out. Also, 3D math is hard. If you work with it enough it starts to get pounded into your head, and you will NEVER forget it, especially if you went through the headaches I had to go through.
I did a bit of thinking on it, and since your rotation happens within the xz-plane, we can make this a 2D problem. After doing a bit of research on Wikipedia, this may be your solution:
new x = ((x - intended x) * cos(angle)) - ((z - intended z) * sin(angle)) + intended x
new z = ((x - intended x) * sin(angle)) + ((z - intended z) * cos(angle)) + intended z
What I'm doing here is subtracting the intended x and z values from the current ones, so that (intended x, intended z) becomes the origin to rotate around. After the point is rotated, I add (intended x, intended z) back to the coordinate so that we get the correct result.
Depending on where you got your points from (some kind of measurement, I guess) and what you want to do with them, there are several different things you could do with your data.
The search keyword "regression plane" might help - there are several ways of finding planes approximating point clouds, and several ways to "snap" points to planes.
Edit: You want to apply a rotation around the axis defined by the cross product of the normal vector of your regression plane and the normal of your desired plane, about a point of your choice. From your illustration I take it that you probably want the bottom of your vertical planar object to be the reference point for the rotation.
So you've got your point of reference, you know the axis around which you want to rotate, and the angle. All you need to do is:
Translate (to move your point of reference to the origin)
Rotate (about that axis, by that angle)
Translate back (to undo the first translation)
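A minimal Eigen sketch of those steps (the names n, nTarget, pivot, and points are illustrative assumptions; n and nTarget are assumed to be unit normals):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Rotate the whole cloud so the fitted plane normal `n` aligns with the
// desired vertical plane normal `nTarget`, rotating about the reference
// point `pivot` (e.g. the bottom of the planar object).
void alignCloud(std::vector<Eigen::Vector3f>& points,
                const Eigen::Vector3f& n, const Eigen::Vector3f& nTarget,
                const Eigen::Vector3f& pivot)
{
  const Eigen::Vector3f axis = n.cross(nTarget).normalized();
  const float angle = std::acos(std::clamp(n.dot(nTarget), -1.0f, 1.0f));
  const Eigen::AngleAxisf R(angle, axis);
  for (Eigen::Vector3f& p : points)
    p = R * (p - pivot) + pivot;  // translate, rotate, translate back
}
```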
I read your question again, and hopefully this answer will help you out. If there's anything else I need to know, please tell me.
Now, in order to rotate anything, there must be a center point to rotate around. You've already been able to detect the angle of inclination, so now we need a formula for rotating a point by a certain angle around an origin. In addition, since this problem is confined to a 2D plane, we can use this basic formula to readjust the points. For any two axes x and y:
x' = ((x - x.origin) * cos(theta)) - ((y - y.origin) * sin(theta)) + x.origin
y' = ((x - x.origin) * sin(theta)) + ((y - y.origin) * cos(theta)) + y.origin
Theta is the angle that you will rotate around in a counter-clockwise direction. x' and y' are your new points. x.origin and y.origin are the coordinates for the point you will be rotating around. Now I don't know if my math is 100% correct on this, but if it's not, hopefully you can change a thing or two and it will work.