OpenGL 3D object picking - raycast

I have a program that displays planes of cubes, like the levels of a house, and the planes are oriented so that the display angle is consistent with the viewport's projection plane. I would like to allow the user to select them.
First I draw them relative to each other, with the first square drawn at {0,0,0};
then I translate and rotate them. Each plane has its own rotation and translation.
Thanks to this page I have code that can cast a ray using the user's last touch. If you notice in the picture above, there is a green square and a blue square; these are debug graphics showing where the ray intersects the near and far planes of the projection after clicking in the centre (drawn with a z of zero so they are visible), so the ray casting appears to be working.
I can get a bounding box for the cube, but its coordinates still assume the cube is in its original, untransformed position up in the left corner.
My question is: how do I use my ray to check intersections with the objects after they have been rotated and translated? I'm very confused, because I once had this working when I was translating and rotating the whole grid as one; now that each plane is moved separately I can't work out how to do it.
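One common way to handle this (not taken from the original post, so treat the details as an assumption) is to keep each cube's bounding box in its local space and instead transform the world-space ray by the inverse of that plane's model matrix before testing. A minimal sketch, assuming GLM and hypothetical struct names:

```cpp
#include <glm/glm.hpp>

struct Ray  { glm::vec3 origin, dir; };   // world-space picking ray
struct AABB { glm::vec3 min, max; };      // cube bounds in its own local space

// Standard slab test of a ray against an axis-aligned box.
bool intersectAABB(const Ray& r, const AABB& box, float& tHit)
{
    glm::vec3 invDir = 1.0f / r.dir;
    glm::vec3 t0 = (box.min - r.origin) * invDir;
    glm::vec3 t1 = (box.max - r.origin) * invDir;
    glm::vec3 tMin = glm::min(t0, t1);
    glm::vec3 tMax = glm::max(t0, t1);
    float tNear = glm::max(tMin.x, glm::max(tMin.y, tMin.z));
    float tFar  = glm::min(tMax.x, glm::min(tMax.y, tMax.z));
    if (tNear > tFar || tFar < 0.0f) return false;
    tHit = tNear;
    return true;
}

// modelMatrix is the per-plane translate * rotate used when drawing that plane.
bool pickCube(const Ray& worldRay, const glm::mat4& modelMatrix,
              const AABB& localBox, float& tHit)
{
    glm::mat4 invModel = glm::inverse(modelMatrix);
    Ray localRay;
    localRay.origin = glm::vec3(invModel * glm::vec4(worldRay.origin, 1.0f)); // point: w = 1
    localRay.dir    = glm::vec3(invModel * glm::vec4(worldRay.dir,    0.0f)); // direction: w = 0
    return intersectAABB(localRay, localBox, tHit);
}
```

Whichever box reports the smallest positive tHit would be the cube under the touch.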

Related

OpenGL, Culling objects that are outside the view

In my case I want to render 50,000 or more cubes that are distributed randomly inside a large bounding box. I don't want to use instancing right now, so I have to render each cube individually, and I want to improve performance by culling the cubes that are outside the camera view.
I have a camera class with two matrices, view and projection, and each cube has its own bounding box. My plan is to check each frame whether the camera view bounding box contains the centre of each cube: if it does, call the cube's draw function; if not, ignore it.
The view matrix is built from three vectors (eye, target and up), and the projection from width, height, near, far and FOV.
so I have two questions:
Is this the right approach? I would calculate the camera view bounding box each frame and then test each cube against it.
How can I calculate the camera bounding box each frame?
I got an idea from here, how_to_check_if_vertex_is_visible_for_user, that worked fine for me:
multiply any point in 3D space by the camera's projection * view matrix; after the perspective divide, visible points should have all coordinates in [-1,1].
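As a rough illustration of that check (assuming GLM; the function name and the per-frame loop are made up for the example):

```cpp
#include <glm/glm.hpp>

// Returns true if a world-space point lies inside the camera frustum.
bool isPointVisible(const glm::mat4& projection, const glm::mat4& view, const glm::vec3& p)
{
    glm::vec4 clip = projection * view * glm::vec4(p, 1.0f);
    if (clip.w <= 0.0f) return false;            // behind the camera
    glm::vec3 ndc = glm::vec3(clip) / clip.w;    // perspective divide into [-1,1]
    return ndc.x >= -1.0f && ndc.x <= 1.0f &&
           ndc.y >= -1.0f && ndc.y <= 1.0f &&
           ndc.z >= -1.0f && ndc.z <= 1.0f;
}

// Per frame: draw only the cubes whose centre passes the test, e.g.
// for (const Cube& c : cubes)
//     if (isPointVisible(proj, view, c.center)) c.draw();
```

Testing only the cube's centre means a cube can be culled while still partially visible at the screen edge; expanding the test by the cube's half-extent (or testing all eight box corners) avoids that.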

Mapping Bullet physics coordinates to OpenGL

I've been using the Bullet physics engine with OpenGL to visualise my simulations. I currently have a very simple simulation of a cube that has an initial horizontal and forward velocity that falls down from the sky and collides with the walls of a room that are all slanted at 45 degrees, with the bottom of the wall meeting the floor.
I use getOpenGLMatrix to get the orientation, position, etc. of the cube and map it to OpenGL by making that matrix the Model matrix. However, when I run it and visualise the simulation the cube behaves as expected (rolls down the wall), but it does not "touch" the rendered OpenGL wall (I say touch but of course mean the rendered cube does not appear to come near the rendered wall).
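For reference, a minimal sketch of that mapping step, assuming GLM on the rendering side (the function name is illustrative):

```cpp
#include <btBulletDynamicsCommon.h>
#include <glm/glm.hpp>

glm::mat4 modelMatrixFor(const btRigidBody* body)
{
    btTransform trans;
    body->getMotionState()->getWorldTransform(trans);  // Bullet world transform of the cube

    btScalar m[16];
    trans.getOpenGLMatrix(m);                          // column-major, as OpenGL expects

    glm::mat4 model;
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            model[c][r] = float(m[c * 4 + r]);         // copy into the Model matrix
    return model;
}
```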
My Bullet cube is 2x2x2 (specified by btBoxShape(btVector3(1.0f,1.0f,1.0f))).
My OpenGL cube is also 2x2x2, with the origin at 0 and corners 1.0 away in each direction.
The only thing I can think of is that the coordinates in Bullet physics do not map directly to the coordinates of OpenGL (for example, a cube edge of length 1 in Bullet is X pixels, but a cube edge of length 1 in OpenGL is Y pixels). Is this the case? If not, can you think why I might have this issue (obviously I don't expect you to magically know the answer, just wondering if there are any known issues like this).
Thanks

Screen space bounding box computation in OpenGL

I'm trying to implement a tiled deferred rendering method and now I'm stuck. I'm computing the min/max depth for each tile (32x32) and storing it in a texture. Then I want to compute the screen-space bounding box (bounding square), represented by the lower-left and upper-right coordinates of a rectangle, for every point light (sphere) in my scene (see the pic from my app). Together with the min/max depth, this will be used to check whether a light affects the current tile.
Problem is I have no idea how to do this. Any idea, source code or exact math?
Update
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
Project 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix
Find a bounding rectangle of these points (which is just min/max X and Y coordinates of the points)
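A minimal sketch of those two steps, assuming GLM; the struct and function names are illustrative:

```cpp
#include <glm/glm.hpp>

struct Rect2D { glm::vec2 min, max; };

// worldBoxCorners: the 8 corners of the world-space bounding box.
Rect2D screenSpaceBounds(const glm::vec3 worldBoxCorners[8],
                         const glm::mat4& modelViewProjection,
                         float screenWidth, float screenHeight)
{
    Rect2D r{ glm::vec2(1e30f), glm::vec2(-1e30f) };
    for (int i = 0; i < 8; ++i)
    {
        glm::vec4 clip = modelViewProjection * glm::vec4(worldBoxCorners[i], 1.0f);
        glm::vec2 ndc  = glm::vec2(clip) / clip.w;                 // [-1,1] after the divide
        glm::vec2 px   = (ndc * 0.5f + 0.5f) *
                         glm::vec2(screenWidth, screenHeight);     // viewport transform
        r.min = glm::min(r.min, px);                               // grow the rectangle
        r.max = glm::max(r.max, px);
    }
    return r;
}
```

Note that this simple version misbehaves when a corner ends up behind the near plane (clip.w <= 0); lights that intersect the near plane need special handling or clamping.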
A more sophisticated way can be used to compute a screen-space bounding rect for a point light source. We calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives us 4 lines on the image plane. These lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/

2d shadow mapping

I have been wondering about how to implement this with openGL:
I have a map with a flat floor and walls. Everything here is 2D; there is no 3D geometry, only 2D polygons that compose the map.
Using the vertices of the polygons I cast shadows to define the viewable area.
The shadows define the field of view, but since cells with walls obstruct the view, they are also darkened. I can draw the walls on top of the shadows, but doing so would show even the walls outside the field of view.
It has been suggested that I approach this problem with shadow mapping: I should render the 2D scene into 4 different 1D textures that hold the distance to the first colliding surface.
The problem is that I have no idea how to render the projection of the 2d scene into the 1D texture. If I use, for example:
gluLookAt(x, y, 0.0, x, y+1, 0.0, 0.0, 0.0, 1.0);
to render the top view, the result is still 2D. Also, nothing would be rendered, since all the vertices will be in the same plane, so all surfaces will be orthogonal to the camera.
Do you have any tips or ideas on how to do these 2D-to-1D projections? I have been googling for scenarios like this one, but all of them are in 3D environments.
Shadow mapping assumes either a directional light, or a spotlight, and you have a point light. But since you only need shadow on the floor, you could model it as a spotlight that hovers e.g. 2 m above the floor and points downwards. All the walls would have to be at least 2 m high. In the first shadow mapping pass, you could render the floor and all the walls into the shadow buffer.
However, I would not go with shadow mapping, but use volumetric shadows instead. If you go from 3D to 2D, a 3D volume becomes a 2D polygon.
Assuming that all the walls are on a regular grid, we can compute view rays from the player's position P to all the corners of walls. For each corner, store the adjacent walls, and ignore all the walls that face away from the player. Then cast rays from P to each corner, convert the rays to polar coordinates, and sort them by their angle, say counter-clockwise. Now go through this sorted list in a sweeping motion, and build the shadow polygon.
The shadow polygon consists either of corner points in this list, or of intersections between a) a line that is parallel to a wall and b) a line that goes through P and a corner. The only thing that makes this a bit complicated is that you have to find the wall that receives the shadow. Since the input is so small, I would probably start with brute force (check the corner against each wall), and see how slow it is. Note that only player-facing walls can cast shadows. Also note that the point closest to the player doesn't need to be visible.
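A brute-force sketch of that sweep, with hypothetical 2D types (every ray is tested against every wall, as suggested above for a small input):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

struct Vec2    { float x, y; };
struct Segment { Vec2 a, b; };   // one wall edge

// Ray p + t*d against segment a->b; returns the smallest t >= 0, or a negative value on miss.
static float raySegment(Vec2 p, Vec2 d, const Segment& s)
{
    Vec2 e{ s.b.x - s.a.x, s.b.y - s.a.y };
    float denom = d.x * e.y - d.y * e.x;
    if (std::fabs(denom) < 1e-6f) return -1.0f;                        // parallel
    float t = ((s.a.x - p.x) * e.y - (s.a.y - p.y) * e.x) / denom;     // along the ray
    float u = ((s.a.x - p.x) * d.y - (s.a.y - p.y) * d.x) / denom;     // along the segment
    return (t >= 0.0f && u >= 0.0f && u <= 1.0f) ? t : -1.0f;
}

// Shadow/visibility polygon around player position p, built corner by corner.
std::vector<Vec2> visibilityPolygon(Vec2 p,
                                    const std::vector<Vec2>& corners,
                                    const std::vector<Segment>& walls)
{
    struct Hit { float angle; Vec2 pos; };
    std::vector<Hit> hits;
    const float offsets[] = { -0.0001f, 0.0f, 0.0001f };

    for (Vec2 c : corners)
    {
        float angle = std::atan2(c.y - p.y, c.x - p.x);        // polar angle of the corner
        // Cast slightly to either side of the corner so rays slip past it onto the wall behind.
        for (float da : offsets)
        {
            Vec2 d{ std::cos(angle + da), std::sin(angle + da) };
            float best = 1e30f;
            for (const Segment& w : walls)                     // brute force: every wall
            {
                float t = raySegment(p, d, w);
                if (t >= 0.0f && t < best) best = t;
            }
            if (best < 1e30f)
                hits.push_back({ angle + da, { p.x + d.x * best, p.y + d.y * best } });
        }
    }
    // Sweep: order the hit points counter-clockwise by angle to form the polygon.
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.angle < b.angle; });

    std::vector<Vec2> poly;
    for (const Hit& h : hits) poly.push_back(h.pos);
    return poly;
}
```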
It's probably going to look really cool with a moving character.

Rotating a cube so that the front facing face remains square

In OpenGL I have to rotate a cube (and translate it) so that it looks like in these two images.
Without any transformations only the front facing red face is visible. I just don't understand how I can rotate it (so that the top and right sides are visible like in the images) and keep the red face perfectly square.
I've thought about translating it to the bottom left, but that only moves the red square around, it doesn't make the other faces visible.
I'm using glFrustum(-20, 20, -20, 20, -1, -10);
If you are using a perspective projection (which you are) and the front face of your cube is parallel to the x-y plane, then you will only see the other two faces if the cube is entirely in one quadrant of the eye space; that is, if there were horizontal and vertical lines cutting the window in half, the cube would have to lie entirely within one of the four resulting rectangles.
Other options for making the other two faces show are
use an isometric projection
rotate the cube to bring the other faces into view.
To aid in visualising this, try playing Minecraft (say) and moving around in different ways to see how different sides of different blocks come into view.
That is not a rotation.
The second picture looks like an orthographic projection (glOrtho), but that may be a coincidence.
In either case, you can only get an image like that if the cube is translated away from the origin toward the bottom left, as you suggest.
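For what it's worth, a small sketch of that translate-away-from-the-origin idea, using the same legacy GL calls as the question; the numbers and the drawCube helper are illustrative, and note that glFrustum expects positive near/far values:

```cpp
#include <GL/gl.h>

void drawCube();   // hypothetical helper: draws the 2x2x2 cube centred on its local origin

void drawScene()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-20.0, 20.0, -20.0, 20.0, 1.0, 10.0);   // near and far must both be positive

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Move the cube entirely into the lower-left quadrant of eye space; with the eye at
    // the origin looking down -z, its +x (right) and +y (top) faces become visible with
    // no rotation, and the front face still projects to a perfect square.
    glTranslatef(-10.0f, -10.0f, -5.0f);
    drawCube();
}
```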