Measure cube volume without point cloud or depth - c++

I would like to compute the volume of the cube shown in the figure without a point cloud or a depth map (I don't have access to either), but I do have the corners of the cube in screen-space coordinates.
I know the ground mesh is at (0, 0, 0). I then project a ray from the origin (0, 0, 0) through each of the points, following this article on projecting a ray from the camera through the image plane: http://nghiaho.com/?page_id=363
My question is: how do I know which points are candidates for the cube and which are not?
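As a starting point, the ray projection step from the linked article can be sketched as below. This is a minimal sketch assuming a simple pinhole camera with known focal lengths `fx`, `fy` and principal point `(cx, cy)`; none of these values are given in the question, so they are placeholders:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Back-project pixel (u, v) into a normalized camera-space ray direction
// using a pinhole model: focal lengths fx, fy, principal point (cx, cy).
// The ray origin is the camera center.
Vec3 pixelToRay(double u, double v,
                double fx, double fy, double cx, double cy) {
    Vec3 d{ (u - cx) / fx, (v - cy) / fy, 1.0 };
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return Vec3{ d.x / len, d.y / len, d.z / len };
}
```

Intersecting each corner's ray with the ground plane then gives candidate 3D positions for the cube's base, which is one way to start separating cube corners from non-cube points.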

Related

OpenGL eye position to texture space

I'm following this tutorial by nVidia for implementing a fluid simulation, but I'm confused about this part in the ray marching algorithm section.
The ray direction is given by the vector from the eye to the entry point (both in texture space).
I already have the coordinates of the ray entry points, so that's not an issue, but I don't understand how to get the eye position in texture space.
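For what it's worth, if the volume is an axis-aligned box in world space, one way to get the eye into texture space is to normalize its position against the volume's bounds (for a rotated volume you would first apply the inverse model matrix to the eye position). A minimal sketch, where the bounds `bmin`/`bmax` are hypothetical:

```cpp
struct Vec3 { double x, y, z; };

// Map a world-space position into the [0,1]^3 texture space of a volume
// whose axis-aligned world-space bounds are bmin..bmax. For a rotated
// volume, apply the inverse model matrix to p first.
Vec3 worldToTexture(Vec3 p, Vec3 bmin, Vec3 bmax) {
    return Vec3{ (p.x - bmin.x) / (bmax.x - bmin.x),
                 (p.y - bmin.y) / (bmax.y - bmin.y),
                 (p.z - bmin.z) / (bmax.z - bmin.z) };
}
```

The eye in texture space is then `worldToTexture(eyeWorld, bmin, bmax)`, and the ray direction per fragment is the entry point minus that value.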

How to segment a plane from point cloud

I have a pallet, shown in the following image, that I would like to segment. I don't have the ground plane; if I did, I could check whether the normals of the points are parallel to the ground plane normal and segment those points. How would I segment the points without it?
[Edit by Spektre]
Here Red/Blue color encoded partial derivation of your depth image from your previous question:
The black areas have the same Z coordinate (poor Z resolution, or a plane parallel to the screen projection plane). The red/blue lines are geometric edges in the point cloud. As you can see, it is far from a true plane.
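One common way to find a dominant plane without knowing the ground in advance is RANSAC: repeatedly fit a plane to three random points and keep the candidate supported by the most inliers. A minimal CPU sketch (the point type, tolerance, and iteration count are illustrative, not from the question):

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct P3 { double x, y, z; };

static P3 cross(P3 a, P3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static P3 sub(P3 a, P3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(P3 a, P3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// RANSAC plane fit: pick 3 random points, build a candidate plane, and
// keep the plane with the most inliers (points within tol of the plane).
// Returns the indices of the best plane's inliers.
std::vector<int> ransacPlane(const std::vector<P3>& pts, double tol, int iters) {
    std::vector<int> best;
    int n = (int)pts.size();
    for (int it = 0; it < iters; ++it) {
        int i = std::rand() % n, j = std::rand() % n, k = std::rand() % n;
        if (i == j || j == k || i == k) continue;
        P3 nrm = cross(sub(pts[j], pts[i]), sub(pts[k], pts[i]));
        double len = std::sqrt(dot(nrm, nrm));
        if (len < 1e-9) continue;                 // degenerate (collinear) triple
        nrm = { nrm.x / len, nrm.y / len, nrm.z / len };
        double d = dot(nrm, pts[i]);
        std::vector<int> inliers;
        for (int p = 0; p < n; ++p)
            if (std::fabs(dot(nrm, pts[p]) - d) < tol)
                inliers.push_back(p);
        if (inliers.size() > best.size()) best = inliers;
    }
    return best;
}
```

The largest inlier set is usually the pallet's top face or the floor; segmenting it out and re-running on the remainder peels off the planes one by one.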

Display voxels using shaders in openGL

I am working on voxelisation using the rendering pipeline and now I successfully voxelise the scene using vertex+geometry+fragment shaders. Now my voxels are stored in a 3D texture which has size, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texture coordinates.
I implemented a simple ray marcher for visualization, but it doesn't take camera movement into account (I can render only from a few fixed positions, because my rays have to be generated in the coordinates of the 3D texture).
My question is: how can I generate rays at point Po with direction D in the coordinates of my 3D model, have them intersect the voxels at the corresponding positions in texture coordinates, and have every camera movement in the 3D world remapped into voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays in direction towards (63,63,3)
I think you should store your entire view transform matrix in your shader uniform params. Then, in each shader invocation, you can use the pixel's screen coordinates and the view transform to compute the view ray direction for that pixel.
Once you have the ray direction and camera position, you use them just as you do now.
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now, every frame, you draw your cube's front faces to one texture and its back faces to a second texture.
In the final rendering you can use both textures to get the entry and exit points, already in texture space, which makes your final shader much simpler.
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/
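The front/back-face trick above amounts to computing the ray's entry and exit points on the cube in texture space. The same quantities can be computed analytically with the slab method, which is handy for checking the shader output on the CPU. A sketch, assuming the texture occupies the unit cube [0,1]^3:

```cpp
#include <algorithm>
#include <cmath>

// Entry/exit of a ray with the unit cube [0,1]^3 via the slab method.
// o is the ray origin, d the direction (need not be normalized).
// On success, o + tNear*d is the entry point and o + tFar*d the exit.
bool rayUnitCube(const double o[3], const double d[3],
                 double& tNear, double& tFar) {
    tNear = -1e30; tFar = 1e30;
    for (int a = 0; a < 3; ++a) {
        if (std::fabs(d[a]) < 1e-12) {
            // Ray parallel to this slab: reject if origin is outside it.
            if (o[a] < 0.0 || o[a] > 1.0) return false;
        } else {
            double t0 = (0.0 - o[a]) / d[a];
            double t1 = (1.0 - o[a]) / d[a];
            if (t0 > t1) std::swap(t0, t1);
            tNear = std::max(tNear, t0);
            tFar  = std::min(tFar,  t1);
        }
    }
    return tNear <= tFar && tFar >= 0.0;
}
```

If your volume lives in [0,128]^3 rather than [0,1]^3, divide the ray origin by the texture size first; the direction only changes by a uniform scale.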

2D to 3D map OpenGL

I am trying to make a simple game engine, but I got stuck trying to map a 2D mouse coordinate to a 3D coordinate in my world. The basic game has a plane that serves as the ground, as it is going to be (hopefully, with time) an RTS game engine.
My problem is that I can't really come up with anything useful. The plane is located at (0, -100, -300) and is about 1000x1000 in size. My camera is at (0, 0, 0) and the scene is rotated 60 degrees to give the impression of a bird's-eye camera.
I was thinking of using trigonometry: since I know the camera's height and angle, calculating the distance might give me the right coordinates, but this is just an idea.
Can somebody please give me some advice?
You can do it with simple ray casting.
First, using gluUnProject, you can obtain the 3D world coordinates m corresponding to the 2D window coordinates of the mouse pointer.
Given the camera position e = (0, 0, 0), you can compute the mouse ray direction r = m - e.
Now, given a point p on the plane and the plane normal n, you can compute the intersection of the mouse ray with the plane.
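Putting the steps together: the intersection parameter is t = dot(p - e, n) / dot(r, n), and the hit point is e + t*r. A minimal sketch with a small vector struct (the type and helper names are illustrative):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Intersect the mouse ray (origin e, direction r) with the plane through
// point p with normal n. The parameter is t = dot(p - e, n) / dot(r, n)
// and the hit point is e + t*r. Returns false if the ray is parallel to
// the plane or the intersection lies behind the ray origin.
bool rayPlane(V3 e, V3 r, V3 p, V3 n, V3& hit) {
    double denom = dot(r, n);
    if (std::fabs(denom) < 1e-9) return false;   // ray parallel to plane
    double t = dot(sub(p, e), n) / denom;
    if (t < 0.0) return false;                   // plane behind the camera
    hit = { e.x + t*r.x, e.y + t*r.y, e.z + t*r.z };
    return true;
}
```

With the question's numbers (camera at the origin, ground plane through (0, -100, -300) with an assumed up normal (0, 1, 0)), this returns the clicked point on the ground.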

How to create views from a 360 degree panorama. (like street view)

Given a sphere like this one from google streetview.
If I wanted to create four views (front, left, right, and back), how do I do the transformations needed to straighten the image out, as if I were viewing it in Google Street View? Notice the green line I drew in: in the raw image it's bent, but in Street View it's straight. How can I do this?
The streetview image is a spherical map. The way Street View and Google Earth work is by rendering the scene as if you were standing at the center of a giant sphere. This sphere is textured with an image like the one in your question. The longitude on the sphere corresponds to the x coordinate on the texture, and the latitude to the y coordinate.
One way to create the pictures you need would be to render the texture onto a sphere, like Google Earth does, and then take a screenshot of each side.
A purely mathematical way to do it is to envision yourself at the center of a cube and a sphere at the same time. The images you are looking for are the sides of the cube. If you want to know how a specific pixel in the cube map relates to a pixel in the spherical map, make a vector that points from the center of the cube through that pixel, and then see where that same vector points on the sphere (latitude and longitude).
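The center-to-pixel vector lookup described above boils down to converting a direction into longitude/latitude and then into (u, v) on the equirectangular image. A sketch, assuming the usual equirectangular layout where u spans longitude and v spans latitude:

```cpp
#include <cmath>

// Map a 3D direction (from the sphere/cube center) to equirectangular
// texture coordinates u, v in [0,1]: u follows longitude, v latitude.
void dirToEquirect(double x, double y, double z, double& u, double& v) {
    const double PI = 3.14159265358979323846;
    double lon = std::atan2(x, z);                     // -pi .. pi
    double lat = std::atan2(y, std::sqrt(x*x + z*z));  // -pi/2 .. pi/2
    u = (lon + PI) / (2.0 * PI);
    v = (lat + PI / 2.0) / PI;
}
```

Running this for every pixel of each cube face (building the direction vector from the face's orientation and the pixel's position on it) and sampling the panorama at (u, v) produces the straightened views.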
I'm sure if you search the web for spherical map cube map conversion you will be able to find more examples and implementations. Good luck!