I'm doing a 3D asteroids game on Windows (using OpenGL and GLUT) where you move through space, dodge a bunch of obstacles, and survive. I'm looking for a way to set an image background instead of the boring background color options. I'm new to OpenGL and all I can think of is to texture map a sphere and give it a ridiculously large radius. What is the standard way of setting an image background in a 3D game?
The standard method is to draw two texture-mapped triangles covering the viewport, whose vertex coordinates are x, y = ±1, z = 0, w = 1, and where both the modelview and projection matrices are set to the identity.
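As a minimal sketch in fixed-function OpenGL (the GLUT-era style this question uses), assuming backgroundTex is an already loaded GL_TEXTURE_2D object:

```cpp
#include <GL/gl.h>

// Draw the background as two textured triangles covering the whole viewport.
void drawBackground(GLuint backgroundTex)
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();                  // clip-space pass-through
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);          // the background must not occlude anything
    glDepthMask(GL_FALSE);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, backgroundTex);

    glBegin(GL_TRIANGLE_STRIP);        // two triangles spanning x, y in [-1, 1]
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glEnd();

    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);

    glPopMatrix();                     // restore modelview
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}
```

Call this at the start of every frame, before drawing the scene itself.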
Of course, in the context of a 'space' game where one might want the background to rotate with the view, the natural choice is to render a cube with a cubemap texture (perhaps showing galaxies). Since depth buffering is turned off during the background rendering, the cube doesn't even have to be "infinitely" large. A unit cube will do, as there is no way to tell how close the camera is to it.
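A minimal sketch of that cubemap variant, assuming skyboxTex is a loaded cubemap and applyCameraRotationOnly() stands in for whatever applies the camera's rotation (without translation) to the modelview matrix:

```cpp
#include <GL/gl.h>

void applyCameraRotationOnly();       // hypothetical helper: rotation only, no translation

void drawSkybox(GLuint skyboxTex)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    applyCameraRotationOnly();

    glDisable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);
    glDisable(GL_CULL_FACE);          // we are looking at the cube from the inside
    glEnable(GL_TEXTURE_CUBE_MAP);
    glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTex);

    // For a cubemap, the texture coordinate is simply the direction from the
    // centre of the cube, so the corner positions can be reused directly.
    static const float v[8][3] = {
        {-1,-1,-1},{ 1,-1,-1},{ 1, 1,-1},{-1, 1,-1},
        {-1,-1, 1},{ 1,-1, 1},{ 1, 1, 1},{-1, 1, 1}
    };
    static const int face[6][4] = {
        {0,1,2,3},{4,7,6,5},{0,4,5,1},{3,2,6,7},{0,3,7,4},{1,5,6,2}
    };
    glBegin(GL_QUADS);
    for (int f = 0; f < 6; ++f)
        for (int i = 0; i < 4; ++i) {
            const float* p = v[face[f][i]];
            glTexCoord3fv(p);
            glVertex3fv(p);
        }
    glEnd();

    glDisable(GL_TEXTURE_CUBE_MAP);
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    glPopMatrix();
}
```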
I am trying to create a lomo fisheye effect on an image using OpenGL.
Should I use cube mapping and a fisheye projection? Is there any open source code that I can refer to?
You can draw a single quad with the image textured onto it, and use a fragment shader to warp the texture coordinates per pixel as you desire. You'll have to do all the math yourself, but it looks like the previous post here might be a good starting point.
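For illustration, here is a minimal fragment shader sketch of that idea: a simple quadratic radial warp plus a vignette as a rough stand-in for the lomo look. The image and strength uniforms are assumptions, and the exact mapping is just one possible choice:

```glsl
#version 120
uniform sampler2D image;
uniform float strength;   // e.g. 0.5; negative values warp the other way

void main()
{
    vec2 uv = gl_TexCoord[0].xy;          // incoming texture coordinate, 0..1
    vec2 centered = uv * 2.0 - 1.0;       // move the origin to the image centre
    float r = length(centered);

    // quadratic radial distortion of the sample position
    vec2 warped = centered * (1.0 + strength * r * r);
    vec2 warpedUv = warped * 0.5 + 0.5;

    // darken towards the corners for the "lomo" vignette
    float vignette = 1.0 - 0.4 * r * r;

    gl_FragColor = texture2D(image, warpedUv) * vignette;
}
```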
Taking the question beyond an effect on a single image to producing "true" fisheye views, i.e. a usable field of view of 180+ degrees...
There are two slightly different methods for adapting an existing pipeline to a fisheye view (with "simple" OpenGL). Both require rendering the scene up to six times, once for each side of the "box" that is then projected onto the flat screen. Each side/surface must be square (or should be, depending on the method) and is likely smaller than the original full viewport.
The number of sides required depends on how wide a fisheye field of view is requested. In a typical FPS, three sides are enough for a FOV of 130; for a FOV of up to 220, five sides.
Method 1 - cubemap texture (GL_ARB_texture_cube_map)
init once: for a specific FOV, pre-calculate a translation table from 2D on-screen coordinates to 3D cubemap texture coordinates; a 16x16 grid over the whole screen should be enough
set up the viewport and position the camera accordingly to render the box sides, doing the usual rendering
bind the rendered sides to the GL_TEXTURE_CUBE_MAP_ARB texture
iterate over the screen, emitting (rectangular) GL_QUAD_STRIPs using the translation table and the cubemap (a sketch of this step follows below).
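A minimal sketch of that last step, assuming an equidistant fisheye mapping (angle proportional to screen radius). For brevity the table entries are computed inline rather than pre-calculated, and cubemapTex / halfFovRadians are illustrative names rather than part of any particular engine:

```cpp
#include <GL/gl.h>
#include <cmath>

void drawFisheyeGrid(GLuint cubemapTex, float halfFovRadians)
{
    const int N = 16;                       // 16x16 grid over the screen

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();                       // draw directly in clip space
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glEnable(GL_TEXTURE_CUBE_MAP_ARB);
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubemapTex);

    for (int j = 0; j < N; ++j) {
        glBegin(GL_QUAD_STRIP);
        for (int i = 0; i <= N; ++i) {
            for (int dj = 0; dj <= 1; ++dj) {
                float x = -1.0f + 2.0f * i / N;
                float y = -1.0f + 2.0f * (j + dj) / N;
                float r = std::sqrt(x * x + y * y) + 1e-6f;
                float theta = r * halfFovRadians;      // equidistant mapping
                // direction of the view ray through this screen point
                float dir[3] = { std::sin(theta) * x / r,
                                 std::sin(theta) * y / r,
                                -std::cos(theta) };
                glTexCoord3fv(dir);
                glVertex2f(x, y);
            }
        }
        glEnd();
    }

    glDisable(GL_TEXTURE_CUBE_MAP_ARB);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}
```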
Method 2 - 2D or rectangular textures (GL_NV_texture_rectangle)
init once: for a specific FOV, pre-calculate a "ray" table mapping texture number plus 2D texture coordinates to 2D screen coordinates
as in Method 1, set up the viewport and position the camera accordingly to render the box sides, doing the usual rendering
bind the rendered sides to GL_TEXTURE_RECTANGLE_NV or GL_TEXTURE_2D textures
iterate over the textures, emitting (trapezoidal) GL_QUAD_STRIPs on the screen using the "ray" table.
Method 1 is simpler and delivers better results.
Gotchas:
set cubemap texture wrapping to GL_CLAMP_TO_EDGE
in a typical FPS, the player view has not only pitch and yaw but also roll - calculate the camera orientation for each side with the proper rotations
if the render loop is combined with game progress / physics / AI, this repeated re-rendering of the scene may confuse existing internals.
All of this of course depends on the specifics of a particular engine. I'm not sure how well this applies to the OpenGL 3.3+ core profile, but the idea should be the same.
It is possible to draw the world with a fisheye projection in one pass by doing the fisheye transformation in a vertex shader - but this requires the original geometry to be sufficiently (pre-)tessellated. Alternatively, geometry and/or tessellation shaders could be employed to handle the tessellation / transform feedback on the GPU, although that should probably be built into the renderer from the ground up.
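As a rough illustration only, a minimal GLSL vertex shader sketch of a one-pass equidistant fisheye transform might look like this. The halfFov, zNear and zFar uniforms are assumptions, the depth mapping is simplified, and straight edges only stay acceptable on finely tessellated geometry:

```glsl
#version 120
uniform float halfFov;   // half of the fisheye FOV, in radians
uniform float zNear;
uniform float zFar;

void main()
{
    vec4 eye = gl_ModelViewMatrix * gl_Vertex;     // eye-space position
    float dist = length(eye.xyz);
    vec3 dir = eye.xyz / dist;

    float theta = acos(clamp(-dir.z, -1.0, 1.0));  // angle from the view axis
    float r = theta / halfFov;                     // equidistant mapping to radius
    float s = length(dir.xy);
    vec2 xy = (s > 0.0001) ? dir.xy / s * r : vec2(0.0);

    // crude linear depth so the depth test still works
    float z = (dist - zNear) / (zFar - zNear) * 2.0 - 1.0;

    gl_Position = vec4(xy, z, 1.0);
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
```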
For a well-isolated example using the Quake 1 engine, see Fisheye and Panorama OpenGL FPS and this diff specifically. Unfortunately, the vertex shader example is lost.
I need some help with selecting a surface area on a 3D model rendered in OpenGL by picking points with the mouse. I know how to get a point in world coordinates, but I can't find a way to select an area. Later I need to remesh that selected area and map an image over it, which I know how to do.
Well, OpenGL by itself can't help you there. OpenGL is a drawing API. You draw things, but once the drawing commands have been executed all that's left are pixels in a framebuffer, and OpenGL has no recollection of the geometry whatsoever.
You can use OpenGL to implement image-based area selection algorithms, for example by drawing each face with a unique index color into an off-screen framebuffer. By reading back which values appear in the selected region, you know which faces are present in that area.
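A minimal sketch of that index-color approach, assuming the off-screen framebuffer is already bound and the faces are available as plain triangles (the Face structure and the function name are illustrative):

```cpp
#include <GL/gl.h>
#include <set>
#include <vector>

struct Face { float v[3][3]; };               // one triangle, 3 vertices

// Returns the indices of all faces visible inside the (x, y, w, h) rectangle.
std::set<unsigned> pickFaces(const std::vector<Face>& faces,
                             int x, int y, int w, int h)
{
    // Plain, exact colours only - no lighting, texturing, blending or dithering.
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);
    glDisable(GL_DITHER);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);
    for (unsigned i = 0; i < faces.size(); ++i) {
        unsigned id = i + 1;                  // 0 is reserved for "background"
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
        for (int k = 0; k < 3; ++k)
            glVertex3fv(faces[i].v[k]);
    }
    glEnd();

    // Read the selection rectangle back and decode the face indices.
    std::vector<unsigned char> pixels(size_t(w) * h * 3);
    glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::set<unsigned> selected;
    for (size_t p = 0; p < pixels.size(); p += 3) {
        unsigned id = pixels[p] | (pixels[p + 1] << 8) | (pixels[p + 2] << 16);
        if (id != 0)
            selected.insert(id - 1);
    }
    return selected;
}
```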
Later I need to remesh
This is called topology modification and is completely outside the scope of OpenGL.
that selected area and map an image over it which I know
You can use an image-based approach for this again, but you must first decide how you want to map images to faces. If you want to unwrap the mesh, OpenGL is of no help. However, if you want the user to be able to "directly draw" onto the mesh, this can be done by rendering the texture coordinates into another off-screen framebuffer, thereby reverse-mapping screen coordinates to texture coordinates.
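For the "directly draw" case, that second pass only needs a trivial fragment shader that writes the interpolated texture coordinates out as colours, for example (assuming a floating-point colour attachment so the coordinates are not quantised to 8 bits):

```glsl
#version 120
void main()
{
    // Encode the interpolated texture coordinate of this surface point as a colour;
    // reading the pixel under the cursor then yields (u, v) directly.
    gl_FragColor = vec4(gl_TexCoord[0].xy, 0.0, 1.0);
}
```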
I'm developing a render-to-texture process that involves using several cameras which render an entire scene of geometry. The output of these cameras is then combined and mapped directly to the screen by converting each geometry's vertex coordinates to screen coordinates in a vertex shader (I'm using GLSL, here).
The process works fine, but I've realized a small problem: every RTT camera I create produces a texture with the same dimensions as the screen output. That is, if my viewport is sized to 1024x1024, then even if the geometry occupies only a 256x256 section of the screen, each RTT camera will render that 256x256 geometry into a 1024x1024 texture.
The solution seems reasonably simple - adjust the RTT camera texture sizes to match the actual screen area the geometry occupies, but I'm not sure how to do that. That is, how can I (for example) determine that a geometry occupies a 256x256 area of a screen so that I can correspondingly set the RTT camera's output texture to 256x256 pixels?
The API I use (OpenSceneGraph) uses axis-aligned bounding boxes, so I'm out of luck there...
Thoughts?
Why out of luck? Can't you use the axis-aligned bounding box to compute the area?
My idea:
take the 8 corner points of the bounding box and project them onto the image plane of the camera.
for the resulting 2D points on the image plane, determine an axis-aligned 2D bounding box again
This gives a conservative upper bound on the screen area the geometry can occupy (see the sketch below).
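A minimal sketch of that, using gluProject and the current GL matrices; in OpenSceneGraph you would take the matrices and viewport from the camera instead (this is illustrative, not OSG API). Corners that end up behind the camera are not handled here:

```cpp
#include <GL/glu.h>
#include <algorithm>

// Returns the on-screen width/height (in pixels) covered by an axis-aligned box.
void screenSizeOfBox(const double boxMin[3], const double boxMax[3],
                     double* outWidth, double* outHeight)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    double minX = 1e30, minY = 1e30, maxX = -1e30, maxY = -1e30;
    for (int i = 0; i < 8; ++i) {
        // Each combination of min/max per axis gives one of the 8 corners.
        double corner[3] = { (i & 1) ? boxMax[0] : boxMin[0],
                             (i & 2) ? boxMax[1] : boxMin[1],
                             (i & 4) ? boxMax[2] : boxMin[2] };
        GLdouble sx, sy, sz;
        gluProject(corner[0], corner[1], corner[2],
                   model, proj, viewport, &sx, &sy, &sz);
        minX = std::min(minX, (double)sx);  maxX = std::max(maxX, (double)sx);
        minY = std::min(minY, (double)sy);  maxY = std::max(maxY, (double)sy);
    }
    *outWidth  = maxX - minX;
    *outHeight = maxY - minY;
}
```

The resulting width/height, rounded up (e.g. to the next power of two if needed), is a reasonable size for the RTT camera's texture.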
I want to render a fire effect in OpenGL based on a particle simulation. I have hundreds of particles, each with a position and a temperature (and therefore a color), as well as all their other properties. Simply rendering a glutSolidSphere for each particle doesn't look very realistic, as the particles are spread too wide. How can I draw the fire based on the particles' information?
If you are just trying to create a realistic fire effect, I would use some kind of pre-existing library, as recommended in other answers. But it seems to me that you are after a display of the simulation itself.
A direct solution worth trying might be to replace your current spheres with billboards (i.e. textured quads that always face the camera) that are solid white in the middle and fade to transparent towards the edges, positioning and colouring the images according to your particles.
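A minimal sketch of such billboards in fixed-function OpenGL, assuming the soft white-to-transparent texture is already bound and with an illustrative Particle structure:

```cpp
#include <GL/gl.h>
#include <vector>

struct Particle { float pos[3]; float r, g, b, a; };

void drawParticleBillboards(const std::vector<Particle>& particles, float size)
{
    float m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    // Rows of the upper 3x3 rotation part give the camera right/up axes in
    // world space (column-major storage, hence the strides of 4).
    float right[3] = { m[0], m[4], m[8] };
    float up[3]    = { m[1], m[5], m[9] };

    glDepthMask(GL_FALSE);                    // don't write depth for the glow
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);        // additive blending suits fire
    glEnable(GL_TEXTURE_2D);

    glBegin(GL_QUADS);
    for (size_t i = 0; i < particles.size(); ++i) {
        const Particle& p = particles[i];
        glColor4f(p.r, p.g, p.b, p.a);
        for (int c = 0; c < 4; ++c) {
            float sx = (c == 0 || c == 3) ? -size : size;   // quad corner offsets
            float sy = (c < 2) ? -size : size;
            glTexCoord2f(sx > 0.0f ? 1.0f : 0.0f, sy > 0.0f ? 1.0f : 0.0f);
            glVertex3f(p.pos[0] + right[0] * sx + up[0] * sy,
                       p.pos[1] + right[1] * sx + up[1] * sy,
                       p.pos[2] + right[2] * sx + up[2] * sy);
        }
    }
    glEnd();

    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
}
```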
A better solution, I feel, is to approach the flame as a set of 2D grids on which you control the transparency and colour of each vertex. One could do this in OpenGL by constructing a plane from quads and using your particle system to calculate (via interpolation from the nearest particles) the colour and transparency of each vertex. OpenGL will interpolate each pixel between vertices for you and give you a smooth-looking picture of the 'average particles in the area'.
You probably want to use a particle system to render a fire effect; here's a NeHe tutorial on how to do just that: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=19