How to project a spherical map onto a sphere / cube: "Equirectangular to cubic" - C++

UPDATE:
I found that http://os.ivrpa.org/panosalado/wiki has an implementation in Java. Does anyone have something similar in C or C++?
I have this panorama, a spherical map from Google Street View, and want to map it onto a sphere/cube. Below are some examples and illustrations; what I seek is a library that can do this, or some implementation guidance.
I tried http://krpano.com/docu/tutorials/quickstart/#top, which gives the results listed at the bottom. It illustrates what I want, but the rotation axis is off. I need to create the views directly ahead and behind, plus left and right. Ideally I would like to map the panorama onto a sphere and specify which angles to extract (the orientation of the cube).
[Back, Down, Front, Left, Right, Up]

You could do this easily in POV-Ray by putting the camera in the middle of a sphere mapped with your texture. See image_map map_type 1 and, e.g., this example.
But really this is very easy to implement yourself, assuming the input images use some sort of cylindrical equidistant or equirectangular projection: for each (x,y) in the output image you are rendering, just use the inverse projection formulas to compute a (longitude, latitude) in the input image and interpolate/copy over a pixel value.
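Here is a minimal sketch of that loop in C++, assuming a hypothetical grayscale Image type; a real implementation would use something like cv::Mat and bilinear interpolation instead of nearest-neighbour sampling, and the axis conventions may need flipping to match your panorama source:

#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

// Hypothetical image type, one byte per pixel.
struct Image {
    int width = 0, height = 0;
    std::vector<unsigned char> data;
    unsigned char at(int x, int y) const { return data[y * width + x]; }
    void set(int x, int y, unsigned char v) { data[y * width + x] = v; }
};

// Render one cube face: 'f' is the view direction, 'r' and 'u' span the face.
void renderFace(const Image& pano, Image& face,
                double fx, double fy, double fz,    // forward
                double rx, double ry, double rz,    // right
                double ux, double uy, double uz)    // up
{
    for (int y = 0; y < face.height; ++y) {
        for (int x = 0; x < face.width; ++x) {
            // Map the pixel to [-1,1] on the face plane, then to a 3D ray.
            double a = 2.0 * (x + 0.5) / face.width - 1.0;
            double b = 2.0 * (y + 0.5) / face.height - 1.0;
            double dx = fx + a * rx - b * ux;
            double dy = fy + a * ry - b * uy;
            double dz = fz + a * rz - b * uz;
            double len = std::sqrt(dx * dx + dy * dy + dz * dz);
            dx /= len; dy /= len; dz /= len;
            // Inverse equirectangular: direction -> (longitude, latitude).
            double lon = std::atan2(dx, dz);    // [-pi, pi]
            double lat = std::asin(dy);         // [-pi/2, pi/2]
            int px = (int)((lon + PI) / (2.0 * PI) * pano.width) % pano.width;
            int py = (int)((lat + PI / 2.0) / PI * pano.height);
            if (py > pano.height - 1) py = pano.height - 1;
            face.set(x, y, pano.at(px, py));
        }
    }
}

For the six faces you call this with the appropriate forward/right/up axes, e.g. the front face with forward (0,0,1), right (1,0,0), up (0,1,0).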

Related

OpenCv Blob tracking of point relative to plane

I am doing an installation that tracks blobs using OpenCV and projects graphics over the blobs. The problem is that my camera is off-axis and away from the projector.
I'm thinking that, to get a point's position relative to the projection plane, I need to calibrate by marking out the plane's corners as seen in the camera view.
My problem is: how do I use the information from those 4 points to convert a tracked blob from the camera view to the projection plane, so the projected graphic lines up with the blob? I'm not sure what I should be searching for.
After you detect the 4 corner points, you can calculate the transformation to the projector plane using getPerspectiveTransform.
Once you have this transformation, you can use warpPerspective to map whole images from one coordinate system to the other, or perspectiveTransform to map individual point coordinates.
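As a rough illustration, the calls fit together like this (the corner coordinates and output size below are made-up placeholders for your own calibration):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main() {
    // Corners of the projection plane as seen by the camera (clockwise).
    std::vector<cv::Point2f> cameraCorners = {
        {102.f, 84.f}, {538.f, 90.f}, {530.f, 410.f}, {96.f, 402.f}
    };
    // The same corners in projector coordinates (e.g. a 1024x768 output).
    std::vector<cv::Point2f> projectorCorners = {
        {0.f, 0.f}, {1024.f, 0.f}, {1024.f, 768.f}, {0.f, 768.f}
    };
    // 3x3 homography mapping camera coordinates to projector coordinates.
    cv::Mat H = cv::getPerspectiveTransform(cameraCorners, projectorCorners);

    // Map a tracked blob centre from camera space into projector space.
    std::vector<cv::Point2f> blob = { {320.f, 240.f} }, projected;
    cv::perspectiveTransform(blob, projected, H);
    // Draw the graphic at projected[0] in the projector's framebuffer.
    return 0;
}

The same H works with warpPerspective if you want to warp an entire camera frame into projector space instead of individual points.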
Unfortunately I'm unable to help with a minimal code example at the moment, but I recommend having a look at ofxCv and its examples. There is a camera-based undistort example, and the wrapper also provides utilities for warping/unwarping perspective via warpPerspective and unwarpPerspective.
Bear in mind that ofxCv has handy functions to convert between ofImage and cv::Mat, like toCv() and toOf().
ofxCv may make it easier to use the OpenCV functions Elad Joseph recommends (which sound like exactly what you need).

How to get curve from intersection of point cloud and arbitrary plane?

I have various point clouds defining RT-STRUCT ROIs from DICOM files. The DICOM files are produced by tomographic scanners, and each ROI is a point cloud representing some 3D object.
The goal is to get the 2D curve formed by a plane cutting an ROI's point cloud. The problem is that I can't just use the points intersected by the plane. What I probably need is to intersect a 3D concave hull with the plane and get the resulting intersection contour.
Are there any libraries which have already implemented these operations? I've found the PCL library, which should probably be able to solve my problem, but I can't figure out how to achieve this with PCL. In addition I can use Matlab - we call it through its runtime from C++.
Has anyone stumbled upon this problem already?
P.S. As I mentioned above, I need to use the solution from my C++ code, so it should be a library or a Matlab solution that I can call through the Matlab Runtime.
P.P.S. Accuracy in this kind of calculation is really important - it will be used in medical software intended for work with brain tumors, so you can imagine the consequences of an error (:
You first need to form a surface from the point set.
If it's possible to pick a 2D direction for the points (i.e. they form a convex hull in one view) you can use a simple 2D Delaunay triangulation in those two coordinates.
Otherwise you need a full 3D surfacing algorithm (marching cubes or Poisson reconstruction).
Then, once you have the triangles, it's simple to calculate the contour line where a plane cuts them; a sketch follows below.
See links in Mesh generation from points with x, y and z coordinates
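Here is a rough sketch of the per-triangle cut, using a stand-in Vec3 type; collecting the resulting segments over all triangles of the surface gives the contour:

#include <vector>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Signed distance of p from the plane n.p + d = 0 (n unit length).
static double planeDist(const Vec3& n, double d, const Vec3& p) {
    return dot(n, p) + d;
}

static Vec3 lerp(const Vec3& a, const Vec3& b, double t) {
    return { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
}

// Returns true and fills 'seg' if the plane crosses the triangle.
// Vertices lying exactly on the plane are ignored in this sketch.
bool cutTriangle(const Vec3 tri[3], const Vec3& n, double d, Vec3 seg[2]) {
    int count = 0;
    for (int i = 0; i < 3 && count < 2; ++i) {
        const Vec3& a = tri[i];
        const Vec3& b = tri[(i + 1) % 3];
        double da = planeDist(n, d, a);
        double db = planeDist(n, d, b);
        if (da * db < 0.0)                      // edge straddles the plane
            seg[count++] = lerp(a, b, da / (da - db));
    }
    return count == 2;
}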
Perhaps you could just discard the points that are far from the plane and project the remaining ones onto the plane. You'll still need to reconstruct the curve in the plane, but there are several good methods for that. See for instance http://www.cse.ohio-state.edu/~tamaldey/curverecon.htm and http://valis.cs.uiuc.edu/~sariel/research/CG/applets/Crust/Crust.html.
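A sketch of that filter-and-project step, with a stand-in Vec3 type and a distance threshold eps that you would have to tune for your data:

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Keep points within 'eps' of the plane n.p + d = 0 (n unit length)
// and project them onto it.
std::vector<Vec3> projectNearPlane(const std::vector<Vec3>& cloud,
                                   const Vec3& n, double d, double eps)
{
    std::vector<Vec3> out;
    for (const Vec3& p : cloud) {
        double dist = n.x*p.x + n.y*p.y + n.z*p.z + d;
        if (std::fabs(dist) <= eps) {
            // Subtract the normal component to land exactly on the plane.
            out.push_back({ p.x - dist * n.x,
                            p.y - dist * n.y,
                            p.z - dist * n.z });
        }
    }
    return out;
}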

Same Marker position, Different Rotation and Translation matrices - OpenCV

I'm working on an Augmented Reality marker detection program using OpenCV, and I'm getting two different rotation and translation values for the same marker.
The 3D model switches between these states automatically, without my control, when the camera is moved slightly. Screenshots of the two situations are added below. I want Image #1 to be the correct one. How and where do I correct this?
I have followed How to use an OpenCV rotation and translation vector with OpenGL ES in Android? to create the Projection Matrix for OpenGL.
ex:
// code to convert rotation, translation vector
glLoadMatrixf(ConvertedProjMatrix);
glColor3f(0, 1, 1);
glutSolidTeapot(50.0f);
Image #1
Image #2
Additional
I'd be glad if someone could suggest a way to make the teapot sit on the marker plane. I know I have to edit the rotation matrix, but what's the best way of doing that?
To rotate the teapot you can use glRotatef(). If you want to rotate your current matrix for example by 125° around the y-axis you can call:
glRotatef(125, 0, 1, 0);
I can't make out the current orientation of your teapot, but I guess you would need to rotate it by 90° around the x-axis.
I have no idea about your first problem; OpenCV seems unable to decide which of the shown positions is the "correct" one. It depends on what kind of features OpenCV is looking for (edges, high contrast, unique points...) and how you implemented it.
Have you tried swapping the pose algorithm used by solvePnP (ITERATIVE, EPNP, P3P)? Or possibly use the values from the previous calculation - remember that it's just giving you its 'best guess'. A sketch of switching the flag follows.
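Something along these lines; the flag names are from OpenCV 3+ (older versions spell them CV_ITERATIVE etc.), and the marker/calibration data here is a placeholder for your own:

#include <opencv2/calib3d.hpp>
#include <vector>

void estimatePose(const std::vector<cv::Point3f>& markerCorners3d,
                  const std::vector<cv::Point2f>& imageCorners,
                  const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    cv::Mat rvec, tvec;
    // Try the iterative solver first; SOLVEPNP_EPNP or SOLVEPNP_P3P
    // (which needs exactly 4 points) may behave differently on planar
    // markers, depending on your data.
    cv::solvePnP(markerCorners3d, imageCorners, cameraMatrix, distCoeffs,
                 rvec, tvec, false, cv::SOLVEPNP_ITERATIVE);
    // rvec/tvec are this frame's 'best guess'; smoothing them against
    // the previous frame's values helps suppress the pose flipping.
}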

Make a character follow an uneven terrain (2D)

I'd like to make a game where the terrain is uneven and based on a PNG. How is this done in theory, given the object's vec2 position and its angle? For instance, if there is a hill, the character should rotate to match the slope of the hill. Thanks
2D, like Mario.
I think you are talking about a heightmap, which is your PNG, which is then converted to a 3D triangle mesh. You need to use the information from the mesh (or the PNG color values) to calculate the current height at which to place your character.
If this is a flying character you're pretty much done here, but in your case you need to calculate the normal vector of the triangle the character is standing on. This is simple using the cross product of two triangle edge vectors, (V2 - V1) x (V3 - V1), as sketched below. That should give your character's angle as well. You could smooth the result by averaging it with the normals of the surrounding triangles.
By the way, once you have the triangle normals you can apply some basic shading to the ground as well.
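A quick sketch of that cross product with a stand-in Vec3 type (normalize the result if you need a unit normal):

struct Vec3 { float x, y, z; };

Vec3 triangleNormal(const Vec3& v1, const Vec3& v2, const Vec3& v3)
{
    // Edge vectors of the triangle.
    Vec3 a = { v2.x - v1.x, v2.y - v1.y, v2.z - v1.z };
    Vec3 b = { v3.x - v1.x, v3.y - v1.y, v3.z - v1.z };
    // The cross product a x b gives the (unnormalized) normal.
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}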
Added: The OP changed the question to be a 2D problem. The above approach still works, but it is much easier in 2D.
Treat the height values not as triangles but as line segments (a silhouette) and calculate the normal of the current segment instead. That is, create a vector v between the current height value and the next; the normal of that vector is then n = <-v.y, v.x>. Use that as the angle of your character, as in the sketch below.
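A minimal sketch, assuming heights holds one terrain height per x column read from the PNG:

#include <cmath>
#include <vector>

// Returns the slope angle (radians) of the terrain under column x;
// the caller must ensure x + 1 is still inside the vector.
double terrainAngle(const std::vector<double>& heights, int x)
{
    // v runs along the surface from this height sample to the next one.
    double vx = 1.0;                          // one column across
    double vy = heights[x + 1] - heights[x];  // height difference
    // The segment normal would be n = <-vy, vx>; for the character's
    // rotation you can take the angle of the surface direction directly.
    return std::atan2(vy, vx);
}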
You need a mapping function that converts the PNG data to a 3D representation. This mapping function can be simple, as in interpreting greyscale values in the PNG as altitude (possibly with human guidance), or complex, as in the shadow detection used in advanced computer vision algorithms. In either case, you would then move your character based on the data gleaned through the mapping function.
In C++, I would suggest looking for a 3D game engine that supports more than just flat terrain.
You could start with this list
That is, of course, if you're not trying to start from scratch, in which case you need to look for algorithms first.
Edit:
Since it's the game and not the terrain that is 2D: if you want to make your environment out of an image, you're in for quite some edge detection.

Implementing the Marching Cubes Algorithm?

From my last question: Marching Cube Question
However, I am still unclear on the following:
How do I create an imaginary cube/voxel to check if a vertex is below the isosurface?
How do I know which vertex is below the isosurface?
How does each cube/voxel determine which cube index/surface to use?
How do I draw the surface using the data in triTable?
Let's say I have point cloud data of an apple.
How do I proceed?
Can anybody who is familiar with Marching Cubes help me?
I only know C++ and OpenGL. (C is a little bit beyond me.)
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source; that's how MRI scans work. The second approach is to define an implicit function F() which takes a point/vertex as its parameter and returns a new scalar. Consider this function:
#include <cmath> // for std::sqrt; Vector3 here is your own vector type

float computeScalar(const Vector3<float>& v)
{
    return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}
This computes the distance from each point in your scalar field to the origin. If the isovalue is the radius, you have just found a way to represent a sphere.
This is because |v| <= R is true for all points inside the sphere or on its surface. Just figure out which vertices are inside the sphere and which are outside. You want to use the less-than or greater-than operators because a closed surface divides space in two. When you know which points in your cube are classified as inside and which as outside, you also know which edges the isosurface intersects. You can end up with anywhere from zero to five triangles per cube. The positions of the mesh vertices are computed by interpolating along the intersected edges to find the actual intersection points, as sketched below.
If you want to represent, say, an apple with scalar fields, you would either need a source dataset to plug into your application or a fairly complex implicit function. I recommend getting simple geometric primitives like spheres and tori working first, and then expanding from there.
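A sketch of that edge interpolation, reusing the hypothetical Vector3 type from the snippet above (v1 and v2 straddle the isolevel, so the division is safe):

Vector3<float> interpolateVertex(float isolevel,
                                 const Vector3<float>& p1, float v1,
                                 const Vector3<float>& p2, float v2)
{
    // How far along the edge the isovalue lies (0 at p1, 1 at p2).
    float t = (isolevel - v1) / (v2 - v1);
    return Vector3<float>(p1.x + t * (p2.x - p1.x),
                          p1.y + t * (p2.y - p1.y),
                          p1.z + t * (p2.z - p1.z));
}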
1) It depends on your implementation. You'll need a data structure in which you can look up the value at each corner (vertex) of the voxel/cube. This can be a 3D image (i.e. a 3D texture in OpenGL), a custom array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are various optimizations, but in general: start with the first corner and check the values at all 8 corners of the cube.
3) Most (fast) implementations create a bitmask to use as an index into a static lookup table of cases; there are only so many possible configurations (see the sketch after this list).
4) Once you've generated the triangles from triTable, you can render them with OpenGL.
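To make steps 2 and 3 concrete, here is a sketch of building the cube index; the 'sample' callback is a placeholder for however you access your scalar field, and the actual edgeTable/triTable arrays can be found in Paul Bourke's article linked below:

typedef float (*ScalarField)(int x, int y, int z);

int cubeIndex(ScalarField sample, int x, int y, int z, float isolevel)
{
    // Offsets of the 8 cube corners, in the conventional ordering.
    static const int off[8][3] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}
    };
    int index = 0;
    for (int i = 0; i < 8; ++i) {
        // Set bit i if corner i lies below the isosurface.
        if (sample(x + off[i][0], y + off[i][1], z + off[i][2]) < isolevel)
            index |= (1 << i);
    }
    return index; // use as an index into edgeTable / triTable
}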
Let's say I have point cloud data of an apple. How do I proceed?
This isn't going to work directly. Marching cubes requires voxel data, so you'd need some algorithm to resample the point cloud into a cubic volume; Gaussian splatting is one option here.
Normally, if you are working from a point cloud and want to see the surface, you should look at surface reconstruction algorithms instead of marching cubes.
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks: The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes and is fully open source.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/