How to create views from a 360 degree panorama (like Street View) - C++

Given a sphere like this one from Google Street View.
If I wanted to create 4 views (front, left, right and back), how do I do the transformations needed to straighten the image out, as if I was viewing it in Google Street View? Notice the green line I drew in: in the raw image it is bent, but in Street View it is straight. How can I do this?

The Street View image is a spherical map. The way Street View and Google Earth work is by rendering the scene as if you were standing at the center of a giant sphere. This sphere is textured with an image like the one in your question: the longitude on the sphere corresponds to the x coordinate on the texture and the latitude to the y coordinate.
One way to create the pictures you need is to render the texture onto a sphere, as Google Earth does, and then take a screenshot of each of the sides.
A purely mathematical way is to imagine yourself at the center of a cube and a sphere at the same time. The images you are looking for are the sides of the cube. If you want to know how a specific pixel in the cube map relates to a pixel in the spherical map, make a vector that points from the center of the cube to that pixel, and then see where that same vector points to on the sphere (latitude and longitude).
I'm sure that if you search the web for "spherical map cube map conversion" you will be able to find more examples and implementations. Good luck!
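For illustration, here is a minimal sketch of that lookup: it takes a pixel on one cube face, builds the vector from the cube centre through that pixel, and converts it to latitude/longitude and then to equirectangular texture coordinates. The face layout, axis conventions and image sizes are my own assumptions, not something fixed by the question.

```cpp
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Map a pixel on one cube-map face to (u, v) texture coordinates in an
// equirectangular (spherical) panorama. Face layout is an assumption for
// illustration: 0 = front (+X), 1 = back (-X), 2 = left (+Y), 3 = right (-Y).
void cubeFacePixelToSpherical(int face, int px, int py, int faceSize,
                              int panoWidth, int panoHeight,
                              double& u, double& v)
{
    // Pixel centre mapped to [-1, 1] on the face plane.
    double a = 2.0 * (px + 0.5) / faceSize - 1.0;
    double b = 2.0 * (py + 0.5) / faceSize - 1.0;

    // Direction from the cube centre through this pixel.
    double x, y, z;
    switch (face) {
        case 0:  x =  1; y =  a; z = -b; break; // front
        case 1:  x = -1; y = -a; z = -b; break; // back
        case 2:  x = -a; y =  1; z = -b; break; // left
        default: x =  a; y = -1; z = -b; break; // right
    }

    // Direction -> longitude/latitude -> equirectangular texture coordinates.
    double lon = std::atan2(y, x);                     // [-pi, pi]
    double lat = std::atan2(z, std::sqrt(x * x + y * y)); // [-pi/2, pi/2]
    u = (lon + kPi) / (2.0 * kPi) * panoWidth;
    v = (kPi / 2.0 - lat) / kPi * panoHeight;
}

int main()
{
    double u, v;
    cubeFacePixelToSpherical(0, 256, 256, 512, 4096, 2048, u, v);
    std::printf("sample panorama pixel at (%.1f, %.1f)\n", u, v);
}
```

Looping this over every pixel of every face gives you the four straightened views directly from the panorama, without rendering a textured sphere first.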

Related

How to find overlapping fields of view?

If I have 2 cameras and I'm given the positions and orientations of the cameras in the same coordinate system, is there any way I could detect overlapping fields of view? In other words, how could I tell if something that's displayed in the frame of 1 camera is also displayed in another? In addition, I'm also given the view and projection matrices of the 2 cameras.
To detect two overlapping fields of view you'll want to do a collision check between the two viewing frustums (viewing volumes).
A frustum is a convex polyhedron, so you can use the separating axis theorem to do it.
See here.
However, if you just want to know whether an object that is displayed in the frame of one camera is also displayed in the frame of another camera, the best way to do that is to transform the world-space coordinates of said object into the viewport space of both cameras. If the resulting coordinates land within the range [0, width] × [0, height] for both cameras and the z coordinate is positive, then the object is in view of both cameras.
This page has a great diagram of the 3D viewing transformation pipeline if you want to read more on what view space and world space are.
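As a rough sketch of that per-point test (assuming GLM for the matrix math; the function names are placeholders), you can multiply the point by each camera's projection and view matrices, do the perspective divide, and check that the result lies inside the canonical view volume, which is equivalent to the viewport-range check described above:

```cpp
#include <glm/glm.hpp>

// True if a world-space point ends up inside this camera's view volume.
bool visibleToCamera(const glm::vec3& worldPoint,
                     const glm::mat4& view, const glm::mat4& proj)
{
    glm::vec4 clip = proj * view * glm::vec4(worldPoint, 1.0f);
    if (clip.w <= 0.0f)                        // behind the camera
        return false;
    glm::vec3 ndc = glm::vec3(clip) / clip.w;  // perspective divide
    return ndc.x >= -1.0f && ndc.x <= 1.0f &&
           ndc.y >= -1.0f && ndc.y <= 1.0f &&
           ndc.z >= -1.0f && ndc.z <= 1.0f;
}

// The object is "in view of both cameras" when the test passes for both.
bool seenByBoth(const glm::vec3& p,
                const glm::mat4& view1, const glm::mat4& proj1,
                const glm::mat4& view2, const glm::mat4& proj2)
{
    return visibleToCamera(p, view1, proj1) && visibleToCamera(p, view2, proj2);
}
```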

OpenGL, Culling objects that are outside the view

In my case I want to render 50,000 or more cubes that are distributed randomly inside a large bounding box. I don't want to use instancing right now, so I have to render each cube individually, and I want to improve performance by culling the cubes that are outside the camera's view.
I have a camera class that holds two matrices, view and projection. Each cube has its own bounding box, so I am planning to check each frame whether the camera's view bounding box contains the center of each cube: if yes, call its draw function, if not, skip it.
For the view matrix I have 3 vectors (eye, target and up), and for the projection I have width, height, near, far and FOV.
So I have two questions:
Is this the right approach? I would calculate the camera's view bounding box each frame and then test each cube against it.
How can I calculate the camera's bounding box each frame?
I got an idea from how_to_check_if_vertex_is_visible_for_user that worked fine for me:
multiply the camera's projection-view matrix by any point in 3D space; after the perspective divide, visible points end up with all coordinates in the range [-1, 1].
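A minimal sketch of that idea, assuming plain column-major OpenGL-style matrices and a placeholder Cube struct (projView would be the precomputed projection * view matrix):

```cpp
#include <vector>

struct Cube { float cx, cy, cz; /* centre */ };

// Multiply a column-major 4x4 matrix by the point (x, y, z, 1).
static void mulPoint(const float m[16], float x, float y, float z, float out[4])
{
    out[0] = m[0]*x + m[4]*y + m[8]*z  + m[12];
    out[1] = m[1]*x + m[5]*y + m[9]*z  + m[13];
    out[2] = m[2]*x + m[6]*y + m[10]*z + m[14];
    out[3] = m[3]*x + m[7]*y + m[11]*z + m[15];
}

// True if the cube centre lands inside the canonical view volume.
bool centreVisible(const float projView[16], const Cube& c)
{
    float clip[4];
    mulPoint(projView, c.cx, c.cy, c.cz, clip);
    if (clip[3] <= 0.0f) return false;           // behind the camera
    float x = clip[0] / clip[3];
    float y = clip[1] / clip[3];
    float z = clip[2] / clip[3];
    return x >= -1 && x <= 1 && y >= -1 && y <= 1 && z >= -1 && z <= 1;
}

void drawVisibleCubes(const float projView[16], const std::vector<Cube>& cubes)
{
    for (const Cube& c : cubes)
        if (centreVisible(projView, c)) {
            /* drawCube(c); placeholder for the real draw call */
        }
}
```

Note that testing only the centre will cull cubes whose centre is just outside the view but whose geometry still overlaps it; widening the [-1, 1] range by a margin derived from the cube's half-extent is one way to avoid that.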

light field rendering from camera array

I'm trying to implement something similar to "Dynamically Reparameterized Light Fields" (Isaksen, McMillan, & Gortler), where the light field is a series of cameras placed on a plane:
In the paper, it is discussed that we can find the corresponding camera and pixels using the following formulation: M^{F→D}_{s,t} = P_{s,t} ∘ T_F.
The dataset that I'm using doesn't contain any camera parameters or any information about the distance between the cameras. I just know that they are placed uniformly on a plane. I have a free-moving camera and I am rendering a view quad, so I can get the 3D position on the focal surface, but I don't know how to get the (s, t, u, v) parameters from that. As soon as I have these parameters I can render correctly.
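For what it's worth, in the two-plane parameterisation that the paper builds on, (s, t) is usually obtained by intersecting the viewing ray with the camera plane and (u, v) by intersecting it with the focal plane. A rough sketch under assumed conventions (camera plane at z = 0, focal plane at z = focalDepth, and a guessed cameraSpacing, since the dataset provides none of these):

```cpp
#include <glm/glm.hpp>
#include <cmath>

struct LightFieldCoords { float s, t, u, v; };

// Intersect a viewing ray with the (assumed) camera plane z = 0 and the
// focal plane z = focalDepth to recover (s, t, u, v).
bool rayToSTUV(const glm::vec3& origin, const glm::vec3& dir,
               float focalDepth, float cameraSpacing, LightFieldCoords& out)
{
    if (std::abs(dir.z) < 1e-6f) return false;      // ray parallel to the planes

    // (s, t): intersection with the camera plane, in units of camera spacing.
    float tCam = (0.0f - origin.z) / dir.z;
    glm::vec3 pCam = origin + tCam * dir;
    out.s = pCam.x / cameraSpacing;
    out.t = pCam.y / cameraSpacing;

    // (u, v): intersection with the focal plane.
    float tFocal = (focalDepth - origin.z) / dir.z;
    glm::vec3 pFocal = origin + tFocal * dir;
    out.u = pFocal.x;
    out.v = pFocal.y;
    return true;
}
```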

opengl 3d object picking - raycast

I have a program displaying planes of cubes, like levels in a house. The planes are displayed so that the viewing angle is consistent with the viewport projection plane, and I would like to allow the user to select them.
First I draw them relative to each other, with the first square drawn at {0,0,0},
then I translate and rotate them; each plane has its own rotation and translation.
Thanks to this page I have code that can cast a ray from the user's last touch. If you notice in the picture above, there is a green square and a blue square; this is a debug graphic displaying the ray intersecting the near and far planes of the projection after clicking in the centre (with z of zero in order to display them), so the ray appears to be working.
I can get the bounding box of a cube, but its coordinates still refer to the original, untransformed position up in the left corner.
My question is: how do I use my ray to check intersections with the objects after they have been rotated and translated? I'm very confused, because I once had this working when I was translating and rotating the whole grid as one; now that each plane is moved separately, I can't work out how to do it.
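One common way to handle this (not necessarily how the original code was structured) is to transform the ray into each plane's local space with the inverse of the same model matrix used to draw it, and then test against the untransformed bounding box. A sketch assuming GLM; the names are placeholders:

```cpp
#include <glm/glm.hpp>
#include <algorithm>

// Slab test against an axis-aligned box given in the object's local space.
bool rayHitsLocalAABB(const glm::vec3& ro, const glm::vec3& rd,
                      const glm::vec3& boxMin, const glm::vec3& boxMax)
{
    float tMin = 0.0f, tMax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / rd[i];
        float t0 = (boxMin[i] - ro[i]) * inv;
        float t1 = (boxMax[i] - ro[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;
    }
    return true;
}

// Bring the world-space ray into the plane's local space using the inverse of
// the model matrix (translate * rotate) that was used to draw that plane, then
// test against the original, untransformed bounding box.
bool pickPlane(const glm::vec3& rayOriginWorld, const glm::vec3& rayDirWorld,
               const glm::mat4& model,
               const glm::vec3& boxMin, const glm::vec3& boxMax)
{
    glm::mat4 inv = glm::inverse(model);
    glm::vec3 roLocal = glm::vec3(inv * glm::vec4(rayOriginWorld, 1.0f));
    glm::vec3 rdLocal = glm::normalize(glm::vec3(inv * glm::vec4(rayDirWorld, 0.0f)));
    return rayHitsLocalAABB(roLocal, rdLocal, boxMin, boxMax);
}
```

This keeps the intersection test itself identical for every plane; only the matrix you invert changes, which matches the fact that each plane now has its own rotation and translation.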

Screen space bounding box computation in OpenGL

I'm trying to implement the tiled deferred rendering method and now I'm stuck. I'm computing min/max depth for each tile (32x32) and storing it in a texture. Then I want to compute the screen-space bounding box (bounding square), represented by the lower-left and upper-right coords of a rectangle, for every point light (sphere) in my scene (see pic from my app). This, together with the min/max depth, will be used to check whether a light affects the current tile.
The problem is that I have no idea how to do this. Any ideas, source code or exact math?
Update
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
Project 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix
Find a bounding rectangle of these points (which is just min/max X and Y coordinates of the points)
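Here is a minimal sketch of that simple method, assuming GLM and a viewport of width × height pixels (the names are placeholders). It assumes all corners project in front of the camera (w > 0); corners behind the near plane need special handling, e.g. via the more sophisticated method below.

```cpp
#include <glm/glm.hpp>
#include <algorithm>

struct Rect { float minX, minY, maxX, maxY; };

Rect screenSpaceBoundingRect(const glm::vec3& boxMin, const glm::vec3& boxMax,
                             const glm::mat4& modelViewProj,
                             float width, float height)
{
    Rect r = { width, height, 0.0f, 0.0f };
    for (int i = 0; i < 8; ++i) {
        // Enumerate the 8 corners of the world-space bounding box.
        glm::vec3 corner((i & 1) ? boxMax.x : boxMin.x,
                         (i & 2) ? boxMax.y : boxMin.y,
                         (i & 4) ? boxMax.z : boxMin.z);

        glm::vec4 clip = modelViewProj * glm::vec4(corner, 1.0f);
        clip /= clip.w;                               // to NDC, assumes w > 0

        // NDC [-1, 1] -> window coordinates.
        float sx = (clip.x * 0.5f + 0.5f) * width;
        float sy = (clip.y * 0.5f + 0.5f) * height;

        r.minX = std::min(r.minX, sx);  r.maxX = std::max(r.maxX, sx);
        r.minY = std::min(r.minY, sy);  r.maxY = std::max(r.maxY, sy);
    }
    return r;
}
```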
A more sophisticated way can be used to compute a screen-space bounding rect for a point light source. We calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersections of these tangent planes with the image plane give us 4 lines on the image plane, and these lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/