Sampling texture to small area of a large mesh - opengl

I'm trying to implement water reflections on a large body of water. The water fills the entire horizontal extent of the camera frustum, and the player is above the water, but I want only the area close to the camera to show the reflection. Scaling up the plane (the mesh used for the water displacement) also scales the reflection texture.
I would like to apply the reflection texture only to the area close to the camera. Any ideas?
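One common approach is to keep the full-size water plane but fade the reflection out with distance to the camera in the fragment shader, blending towards the plain water colour. A minimal sketch of the falloff term (the `fadeStart`/`fadeEnd` names are made-up tuning parameters in world units; shown as C++, but the same math drops into GLSL):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Returns 1 near the camera, 0 beyond fadeEnd, with a linear falloff
// in between. fadeStart/fadeEnd are hypothetical tuning parameters.
float reflectionBlend(float distToCamera, float fadeStart, float fadeEnd) {
    float t = (distToCamera - fadeStart) / (fadeEnd - fadeStart);
    return 1.0f - std::clamp(t, 0.0f, 1.0f);
}
```

In the shader you would then do something like `mix(waterColor, reflectionColor, blend)`, so the reflection is only visible close to the camera regardless of how large the water plane is.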

Related

Mapping Bullet physics coordinates to OpenGL

I've been using the Bullet physics engine with OpenGL to visualise my simulations. I currently have a very simple simulation of a cube that has an initial horizontal and forward velocity that falls down from the sky and collides with the walls of a room that are all slanted at 45 degrees, with the bottom of the wall meeting the floor.
I use getOpenGLMatrix to get the orientation, position, etc. of the cube and map it to OpenGL by using that matrix as the Model matrix. However, when I run the simulation the cube behaves as expected (it rolls down the wall), but it does not "touch" the rendered OpenGL wall (by "touch" I of course mean the rendered cube does not appear to come near the rendered wall).
My Bullet cube is 2x2x2 (specified by btBoxShape(btVector3(1.0f,1.0f,1.0f))).
My OpenGL cube is also 2x2x2, with the origin at 0 and corners 1.0 away in each direction.
The only thing I can think of is that the coordinates in Bullet physics do not map directly to the coordinates of OpenGL (for example, a cube edge of length 1 in Bullet is X pixels, but a cube edge of length 1 in OpenGL is Y pixels). Is this the case? If not, can you think of why I might have this issue? (Obviously I don't expect you to magically know the answer, just wondering if there are any known issues like this.)
Thanks
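For what it's worth, Bullet and OpenGL share the same unit-less world coordinates, so a length-1 edge means the same thing in both; there is no unit-to-pixel conversion involved. What does trip people up is the matrix layout: `getOpenGLMatrix` fills a column-major `float[16]` with the translation in elements 12–14, which you can upload directly as the model matrix. An illustrative stand-in for that layout (not actual Bullet code):

```cpp
#include <cassert>

// Illustrative sketch of the layout btTransform::getOpenGLMatrix produces:
// a column-major 4x4 with the translation stored in elements 12..14.
void makeModelMatrix(float tx, float ty, float tz, float out[16]) {
    for (int i = 0; i < 16; ++i)
        out[i] = (i % 5 == 0) ? 1.0f : 0.0f;  // identity (diagonal at 0,5,10,15)
    out[12] = tx; out[13] = ty; out[14] = tz; // translation column
}
```

If your cube and wall still don't meet visually, the usual culprits are a row-major/column-major mix-up when uploading the matrix, or the render mesh not matching the collision shape's half-extents.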

View frustum culling for animated meshes

I implemented frustum culling in my system: it tests every object's bounding sphere against the frustum planes, and it works great. (I find the PlaneVsAabb check unneeded.)
However, the mesh's bounding sphere is fitted to its bind pose, so when the mesh starts moving (e.g. the player attacks with his sword) some vertices can go outside the sphere.
This often results in the mesh getting culled even though some of its vertices should still be rendered (e.g. the player's sword that went outside the sphere).
I could think of two possible solutions for this:
1. For every mesh in every frame, calculate its new bounding sphere based on the bone changes. (I have no idea how to start with this...) Could this be too inefficient?
2. Add a fixed offset to every sphere radius (based on the entire mesh size, maybe?), so there is no chance of the mesh getting culled even when animated.
(1) would indeed be inefficient in real time. However, you can do a mixture of both: compute the largest possible bounding sphere statically, i.e. when you load the mesh. Using that in (2) guarantees a better result than some arbitrary offset you make up.
(1) You can add locators to key elements (e.g. a dummy bone on the tip of the sword) and transform their origins while animating. You can do this on the CPU on each update and then calculate the bounding box or bounding sphere. Or you can precompute bounding volumes for each frame of the animation offline; Doom 3 uses the second approach.
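A sketch of the locator idea: after animating, gather the transformed locator positions and rebuild a sphere around their centroid each frame (the types and helper name are made up; in practice you would also add the static bind-pose radius as a safety margin, since the locators only sample a few key points):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

// Build a bounding sphere enclosing the given animated locator positions:
// center at the centroid, radius to the farthest point.
Sphere sphereFromPoints(const std::vector<Vec3>& pts) {
    Vec3 c{0.0f, 0.0f, 0.0f};
    for (const Vec3& p : pts) { c.x += p.x; c.y += p.y; c.z += p.z; }
    c.x /= pts.size(); c.y /= pts.size(); c.z /= pts.size();
    float r2 = 0.0f;
    for (const Vec3& p : pts) {
        float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        r2 = std::max(r2, dx * dx + dy * dy + dz * dz);
    }
    return {c, std::sqrt(r2)};
}
```

With only a handful of locators per mesh this is cheap enough to run every frame, and it adapts the sphere to poses like an outstretched sword.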

Screen space bounding box computation in OpenGL

I'm trying to implement the tiled deferred rendering method and now I'm stuck. I'm computing min/max depth for each tile (32x32) and storing it in a texture. Then I want to compute the screen-space bounding box (bounding square), represented by the lower-left and upper-right corners of a rectangle, for every point light (sphere) in my scene (see the pic from my app). This, together with the min/max depth, will be used to check whether a light affects a given tile.
Problem is, I have no idea how to do this. Any ideas, source code, or exact math?
Update
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
1. Project the 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix.
2. Find the bounding rectangle of these points (which is just the min/max X and Y coordinates of the points).
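The two steps above can be sketched as follows (column-major MVP, as OpenGL uses; note this naive version assumes every corner is in front of the camera, i.e. clip-space w > 0 — corners behind the near plane would need clipping first):

```cpp
#include <algorithm>
#include <cassert>

struct Rect { float minX, minY, maxX, maxY; };

// Project the 8 corners of a world-space AABB through a column-major
// ModelViewProjection matrix, perspective-divide, and take the min/max
// of the resulting NDC x/y coordinates.
Rect screenSpaceRect(const float mvp[16], const float bbMin[3], const float bbMax[3]) {
    Rect r{1e9f, 1e9f, -1e9f, -1e9f};
    for (int i = 0; i < 8; ++i) {
        // Enumerate the 8 corners via the bits of i.
        float p[4] = { (i & 1) ? bbMax[0] : bbMin[0],
                       (i & 2) ? bbMax[1] : bbMin[1],
                       (i & 4) ? bbMax[2] : bbMin[2], 1.0f };
        float clip[4] = {0.0f, 0.0f, 0.0f, 0.0f};
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                clip[row] += mvp[col * 4 + row] * p[col];
        float x = clip[0] / clip[3], y = clip[1] / clip[3]; // NDC (assumes w > 0)
        r.minX = std::min(r.minX, x); r.maxX = std::max(r.maxX, x);
        r.minY = std::min(r.minY, y); r.maxY = std::max(r.maxY, y);
    }
    return r;
}
```

The resulting NDC rectangle can then be remapped to your 32x32 tile grid to find which tiles the light touches.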
A more sophisticated way can be used to compute a screen-space bounding rect for a point light source: calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives four lines on the image plane. These lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/

Background image in OpenGL

I'm making a 3D asteroids game on Windows (using OpenGL and GLUT) where you move through space, dodge a bunch of obstacles, and survive. I'm looking for a way to set an image background instead of the boring background color options. I'm new to OpenGL and all I can think of is to texture-map a sphere and give it a ridiculously large radius. What is the standard way of setting an image background in a 3D game?
The standard method is to draw two texture-mapped triangles whose coordinates are x,y = ±1, z = 0, w = 1, with both the camera and perspective matrices set to the identity matrix.
Of course, in the context of a 'space' game where one might want the background to rotate, the natural choice is to render a cube with a cubemap (perhaps showing galaxies). As depth buffering is turned off during the background rendering, the cube doesn't even have to be "infinitely" large: a unit cube will do, as there is no way to tell how close the camera is to the object.
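A minimal sketch of that fullscreen quad's vertex data, as two clip-space triangles with texture coordinates (buffer/attribute setup and the trivial pass-through shader are omitted):

```cpp
#include <cassert>

// Two triangles covering the whole screen in clip space (x, y = ±1),
// drawn with identity matrices and depth writes disabled so the 3D
// scene always renders in front of the background.
// Layout per vertex: x, y, u, v
const float bgQuad[] = {
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
     1.0f,  1.0f, 1.0f, 1.0f,

    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
};
```

Draw this first each frame with `glDepthMask(GL_FALSE)` (re-enabling depth writes afterwards), and everything else in the scene will appear in front of it.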

Rendering fire in OpenGL

I want to render a fire effect in OpenGL based on a particle simulation. I have hundreds of particles which have a position and a temperature (and therefore a color) as well as with all their other properties. Simply rendering a solidSphere using glut doesn't look very realistic, as the particles are spread too wide. How can I draw the fire based on the particles information?
If you are just trying to create a realistic fire effect, I would use some kind of pre-existing library, as recommended in other answers. But it seems to me that you are after a display of the simulation itself.
A direct solution worth trying might be to replace your current spheres with billboards (i.e. images that always face the camera) which are solid white in the middle and fade to transparent towards the edges, positioning and colouring the images according to your particles.
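A sketch of the billboarding math behind that suggestion: the camera's right and up axes can be read straight out of a column-major view matrix, and each particle quad is then spanned by `center ± right*size ± up*size` so it always faces the camera (quad assembly and alpha blending omitted):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Extract the camera-space right and up axes from a column-major
// OpenGL view matrix: they are the first two rows of its rotation part.
void billboardAxes(const float view[16], Vec3& right, Vec3& up) {
    right = { view[0], view[4], view[8] };
    up    = { view[1], view[5], view[9] };
}
```

This works because the view matrix's rotation part is orthonormal, so its rows are the world-space directions of the camera axes; no per-particle trigonometry is needed.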
A better solution, I feel, is to approach the flame as a set of 2D grids on which you can control the transparency and colour of each vertex. You could do this in OpenGL by constructing a plane from quads and using your particle system to calculate (via interpolation from the nearest particles) the colour and transparency of each vertex. OpenGL will interpolate each pixel between vertices for you and give you a smooth-looking picture of the 'average particles in the area'.
You probably want to use a particle system to render the fire effect; here's a NeHe tutorial on how to do just that: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=19