OpenGL, Culling objects that are outside the view - c++

In my case I want to render 50,000 or more cubes that are distributed randomly inside a large bounding box. I don't want to use instancing right now, so I have to render each cube individually, and I want to improve performance by culling the cubes that are outside the camera's view.
I have a camera class that holds two matrices, view and projection, and each cube has its own bounding box. I am planning to check each frame whether the camera's view volume contains the center of each cube: if yes, call its draw function; if not, skip it.
For the view matrix I have three vectors (eye, target, and up), and for the projection I have width, height, near, far, and FOV.
So I have two questions:
Is this the right approach, calculating the camera's view bounding volume each frame and then testing each cube against it?
How can I calculate the camera bounding box each frame?

I got an idea from how_to_check_if_vertex_is_visible_for_user that worked fine for me:
multiply any point in 3D space by the camera's projection-view matrix; after the perspective divide, the coordinates of visible points fall within [-1, 1].
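A minimal sketch of that check, assuming GLM for the matrix math (the function name and the projView parameter are illustrative):

    #include <glm/glm.hpp>

    // A point is potentially visible if its normalized device coordinates,
    // after the projection*view transform and the perspective divide,
    // all fall within [-1, 1].
    bool isVisible(const glm::vec3& point, const glm::mat4& projView)
    {
        glm::vec4 clip = projView * glm::vec4(point, 1.0f);
        if (clip.w <= 0.0f)
            return false;                         // behind the camera
        glm::vec3 ndc = glm::vec3(clip) / clip.w; // perspective divide
        return ndc.x >= -1.0f && ndc.x <= 1.0f &&
               ndc.y >= -1.0f && ndc.y <= 1.0f &&
               ndc.z >= -1.0f && ndc.z <= 1.0f;
    }

Each frame you would compute projection * view once and call this for every cube's center, drawing only the cubes that pass.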

Related

How to draw a rectangle overlay with fixed aspect ratio that represents a render region?

I have a small custom ray tracer that I am integrating into an application. There is a resizable OpenGL window that represents the camera's view into the scene. I have a perspective matrix that adjusts the overall aspect ratio when the window resizes (basic setup).
Now I would like to draw a transparent rectangle over the window representing the width x height of the render, so the user knows exactly what will be rendered. How could this be done, and how can I place the rectangle accurately? The user can enter different output resolutions for the ray tracer.
If I understand your problem correctly, your overlay represents the new "screen" in your perspective frustum.
Then redefine a perspective matrix for the render, in which the overlay's four corners define the "near" projection plane.
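For placing the overlay itself, here is a minimal sketch (all names illustrative): letterbox a rectangle with the render's aspect ratio inside the window, in normalized device coordinates, and draw it as a transparent quad with an identity projection.

    // Half-extents of a centered overlay rectangle in NDC that has the
    // render target's aspect ratio, fitted inside the window.
    struct Overlay { float halfW, halfH; };

    Overlay overlayRect(float windowAspect, float renderAspect)
    {
        Overlay o{ 1.0f, 1.0f };
        if (renderAspect > windowAspect)
            o.halfH = windowAspect / renderAspect; // render is wider: shrink height
        else
            o.halfW = renderAspect / windowAspect; // render is taller: shrink width
        return o;
    }

The quad then spans (-halfW, -halfH) to (halfW, halfH), and it stays accurate for any output resolution the user enters, because only the two aspect ratios matter.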

opengl 3d object picking - raycast

I have a program displaying planes of cubes, like levels in a house. The planes are displayed so that the viewing angle is consistent with the viewport's projection plane, and I would like to allow the user to select them.
First I draw them relative to each other, with the first square drawn at {0,0,0};
then I translate and rotate them. Each plane has its own rotation and translation.
Thanks to this page I have code that can cast a ray using the user's last touch. If you notice in the picture above, there is a green square and a blue square: this is a debug graphic displaying the ray intersecting the near and far planes of the projection matrix after clicking in the centre (drawn with z of zero in order to display them), so the ray appears to be working.
I can get a bounding box for a cube, but its coordinates still correspond to the original, untransformed position up in the left corner.
My question is: how do I use my ray to check intersections with the objects after they have been rotated and translated? I'm very confused, as I once had this working when I was translating and rotating the whole grid as one; now that each plane is moved separately, I can't work out how to do it.
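One common approach, sketched here with GLM (this is not from the original thread, and the names are illustrative): rather than transforming every bounding box into world space, transform the ray into each plane's local space with the inverse of that plane's model matrix, then intersect it against the box in its original, untransformed coordinates.

    #include <glm/glm.hpp>

    // Bring a world-space ray into a plane's local space so it can be
    // tested against the bounding box where it was originally built.
    void rayToLocal(const glm::vec3& origin, const glm::vec3& dir,
                    const glm::mat4& model, // this plane's translate * rotate
                    glm::vec3& localOrigin, glm::vec3& localDir)
    {
        glm::mat4 inv = glm::inverse(model);
        localOrigin = glm::vec3(inv * glm::vec4(origin, 1.0f));                // point: w = 1
        localDir    = glm::normalize(glm::vec3(inv * glm::vec4(dir, 0.0f)));   // direction: w = 0
    }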

Display voxels using shaders in openGL

I am working on voxelisation using the rendering pipeline, and I can now successfully voxelise the scene using vertex + geometry + fragment shaders. My voxels are stored in a 3D texture of size, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texture coordinates.
I implemented a simple ray marching for visualization, but it doesn't take camera movement into account (I can render only from very fixed positions, because my rays have to be generated in the different coordinate system of the 3D texture).
My question is: how can I generate rays at a point Po with direction D in the coordinates of my 3D model, yet have them intersect the voxels at the corresponding positions in texture coordinates, so that every movement of the camera in the 3D world is remapped into voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays in direction towards (63,63,3)
I think you should store your entire view transform matrix in your shader uniform params. Then, for each shader execution, you can use its screen coordinates and that view transform to compute the view-ray direction for your particular pixel.
Having the ray direction and camera position, you use them just as you do now.
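The per-pixel math, sketched on the CPU side with GLM (the names and the viewProj parameter are illustrative; in practice you would do the same in the fragment shader using the uniform):

    #include <glm/glm.hpp>

    // Unproject a pixel to a world-space ray direction using the inverse
    // of the combined view-projection matrix.
    glm::vec3 rayDirection(float px, float py, float screenW, float screenH,
                           const glm::mat4& viewProj, const glm::vec3& camPos)
    {
        // Pixel -> normalized device coordinates.
        glm::vec2 ndc(2.0f * px / screenW - 1.0f,
                      1.0f - 2.0f * py / screenH);
        // Unproject a point on the far plane back into world space.
        glm::vec4 world = glm::inverse(viewProj) * glm::vec4(ndc, 1.0f, 1.0f);
        return glm::normalize(glm::vec3(world) / world.w - camPos);
    }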
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now, every frame, you draw your cube's front faces to one texture and its back faces to a second texture.
In the final rendering you can use both textures to get enter and exit 3D vectors, already in the texture space, which makes your final shader much simpler.
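The two passes could look roughly like this (a sketch; fboFront, fboBack, and drawUnitCube are hypothetical helpers, and it assumes a shader is bound that writes the interpolated corner position as the fragment color):

    // Assumes a loaded OpenGL context (e.g. via GLEW) and a bound shader.
    void renderEntryExitTextures(GLuint fboFront, GLuint fboBack)
    {
        glEnable(GL_CULL_FACE);

        // Pass 1: front faces only -> ray entry points in texture space.
        glBindFramebuffer(GL_FRAMEBUFFER, fboFront);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glCullFace(GL_BACK);
        drawUnitCube();

        // Pass 2: back faces only -> ray exit points.
        glBindFramebuffer(GL_FRAMEBUFFER, fboBack);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glCullFace(GL_FRONT);
        drawUnitCube();

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

The ray for each pixel in the final pass is then simply exit - entry, sampled from the two textures.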
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/

Screen space bounding box computation in OpenGL

I'm trying to implement the tiled deferred rendering method and now I'm stuck. I'm computing the min/max depth for each tile (32x32) and storing it in a texture. Then I want to compute a screen-space bounding box (bounding square), represented by the bottom-left and top-right corners of a rectangle, for every point light (sphere) in my scene (see pic from my app). This, together with the min/max depth, will be used to check whether a light affects the actual tile.
The problem is I have no idea how to do this. Any idea, source code, or exact math?
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
Project the 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix
Find the bounding rectangle of these points (which is just the min/max X and Y coordinates of the points)
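In code, a minimal sketch with GLM (names illustrative; note it assumes all corners lie in front of the camera, since the perspective divide flips points behind it):

    #include <glm/glm.hpp>

    struct Rect { glm::vec2 min, max; };

    // Project the 8 corners of a world-space AABB and take the min/max
    // of the resulting normalized device coordinates.
    Rect screenSpaceBounds(const glm::vec3& bbMin, const glm::vec3& bbMax,
                           const glm::mat4& mvp)
    {
        Rect r{ glm::vec2(1e9f), glm::vec2(-1e9f) };
        for (int i = 0; i < 8; ++i) {
            glm::vec3 corner(i & 1 ? bbMax.x : bbMin.x,
                             i & 2 ? bbMax.y : bbMin.y,
                             i & 4 ? bbMax.z : bbMin.z);
            glm::vec4 clip = mvp * glm::vec4(corner, 1.0f);
            glm::vec2 ndc = glm::vec2(clip) / clip.w; // perspective divide
            r.min = glm::min(r.min, ndc);
            r.max = glm::max(r.max, ndc);
        }
        return r;
    }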
A more sophisticated way can be used to compute the screen-space bounding rect for a point light source: calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives four lines on the image plane, and these lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/

How to create views from a 360 degree panorama (like Street View)

Given a sphere like this one from Google Street View.
If I wanted to create four views (front, left, right, and back), how do I do the transformations needed to straighten the image out, as if I were viewing it in Google Street View? Notice the green line I drew in: in the raw image it is bent, but in Street View it is straight. How can I do this?
The Street View image is a spherical map. The way Street View and Google Earth work is by rendering the scene as if you were standing at the center of a giant sphere. This sphere is textured with an image like the one in your question. The longitude on the sphere corresponds to the x coordinate on the texture, and the latitude to the y coordinate.
One way to create the pictures you need would be to render the texture onto a sphere, like Google Earth does, and then take a screenshot of each of the sides.
A purely mathematical way is to envision yourself at the center of a cube and a sphere at the same time. The images you are looking for are the sides of the cube. To find out how a specific pixel in the cube map relates to a pixel in the spherical map, make a vector that points from the center of the cube to that pixel, and then see where that same vector points on the sphere (latitude and longitude).
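That vector-to-latitude/longitude step, as a small sketch for one face of the cube (the +Z face; the function name is illustrative):

    #include <cmath>

    // Map a pixel on the +Z cube face (u, v in [-1, 1] across the face)
    // to texture coordinates in the equirectangular spherical panorama.
    void cubeFaceToSpherical(float u, float v, float& texU, float& texV)
    {
        const float pi = 3.14159265f;

        // Direction from the cube's center through the pixel on the +Z face.
        float len = std::sqrt(u * u + v * v + 1.0f);
        float x = u / len, y = v / len, z = 1.0f / len;

        // Longitude and latitude of that direction...
        float lon = std::atan2(x, z); // [-pi, pi]
        float lat = std::asin(y);     // [-pi/2, pi/2]

        // ...mapped to [0, 1] texture coordinates: longitude -> x, latitude -> y.
        texU = (lon / pi + 1.0f) * 0.5f;
        texV = (lat / (pi * 0.5f) + 1.0f) * 0.5f;
    }

The other five faces work the same way, just with the direction vector built for that face.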
I'm sure if you search the web for spherical map cube map conversion you will be able to find more examples and implementations. Good luck!