Does anyone know a good place to learn how to implement shadows in OpenGL? Or does anyone know how to do this? I know it isn't built into OpenGL by default, but I can't seem to find any good examples. I've created a cube sitting on top of a plane on which I'm going to test this. I've created the cube and plane using:
glBegin(GL_QUADS);
to create a flat plane as well as a six-sided cube that sits on that plane.
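For illustration, the plane drawn this way might look something like the following (the extents and the y = 0 height are arbitrary placeholders, not the exact code):

    // Immediate-mode flat plane at y = 0 (GL_QUADS), counter-clockwise as seen from above.
    glBegin(GL_QUADS);
        glNormal3f(0.0f, 1.0f, 0.0f);
        glVertex3f(-10.0f, 0.0f, -10.0f);
        glVertex3f(-10.0f, 0.0f,  10.0f);
        glVertex3f( 10.0f, 0.0f,  10.0f);
        glVertex3f( 10.0f, 0.0f, -10.0f);
    glEnd();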
Creating shadows is an advanced technique (there are at least three methods applicable in your case). Before even attempting it, you need a firm grasp of OpenGL and its concepts.
OpenGL is not a scene graph, it's a drawing API, so the idea is to combine drawing operations in a way that ends up looking like correct shadows.
You may want to look up the topics:
Shadow Buffers
Stencil Volume Shadows
Planar Projection Shadows
Each finds tons of Google results. Add OpenGL to the search term and you'll get numerous tutorials for each.
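Of the three, planar projection shadows are the quickest to try for a cube resting on a flat plane: build a matrix that squashes geometry onto the plane as seen from the light, then re-draw the cube in a dark color with that matrix applied. A minimal sketch, assuming a ground plane y = 0 and a positional light (the plane equation, light position, and the drawPlane/drawCube helpers are placeholders for your own code):

    // Planar projection shadow matrix: M = (plane . light) * I - light * plane^T,
    // stored column-major so it can be fed straight to glMultMatrixf.
    void buildShadowMatrix(GLfloat m[16], const GLfloat plane[4], const GLfloat light[4])
    {
        GLfloat dot = plane[0] * light[0] + plane[1] * light[1] +
                      plane[2] * light[2] + plane[3] * light[3];

        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                m[col * 4 + row] = (row == col ? dot : 0.0f) - light[row] * plane[col];
    }

    // Usage each frame:
    GLfloat plane[4] = { 0.0f, 1.0f, 0.0f, 0.0f };   // the ground plane y = 0
    GLfloat light[4] = { 2.0f, 5.0f, 3.0f, 1.0f };   // positional light (w = 1)
    GLfloat shadow[16];
    buildShadowMatrix(shadow, plane, light);

    drawPlane();                     // your lit/textured ground plane
    glPushMatrix();
    glMultMatrixf(shadow);
    glDisable(GL_LIGHTING);
    glColor3f(0.1f, 0.1f, 0.1f);     // flat dark "shadow"
    drawCube();                      // the cube, flattened onto the plane
    glEnable(GL_LIGHTING);
    glPopMatrix();
    drawCube();                      // the real cube

In practice you'd also offset the flattened geometry slightly above the plane (or use the stencil buffer) to avoid z-fighting, but this is the core of the technique.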
I am unsure of how to describe what I'm after, so I drew a picture to help:
My question: is it possible within OpenGL to create the illusion of those pixel-looking bumps on a single polygon, without having to resort to using many polygons? And if it is, what's the method?
I think what you're looking for is actually parallax mapping (or parallax occlusion mapping).
Demos:
http://www.youtube.com/watch?v=01owTezYC-w
http://www.youtube.com/watch?v=gcAsJdo7dME&NR=1
http://www.youtube.com/watch?v=njKdLvmBl88
Parallax mapping basically works by using the height map to alter the texture UV coordinates being used.
The main disadvantage of parallax mapping is that anything that appears to be 'outside' the polygon will be clipped (think of looking at an image on a 3D TV), so it's best for things indented into a surface rather than sticking out of it (although you can reduce this by making the polygon larger than the visible texture area). It's also fairly complex and needs to be combined with other shader techniques for a good effect.
Bump mapping works by using a texture of normals; this makes the lighting appear 3D, but it only changes the shading, not the geometry, regardless of the viewer's position. Bump mapping would also be fairly useless for the OP's sample image, since the surface is all at the same angle, just at different heights; bump mapping relies on changes in the surface's angle. You would have to slope the edges like this.
Displacement mapping/tessellation uses a texture to generate more polygons, rather than leaving it as a single polygon.
There's a video comparing all 3 here
EDIT: There is also relief mapping, which is similar to parallax. See demo. There's a comparison video too (it's a bit low quality, but relief looks like it gives better depth).
I think what you're after is bump mapping. The link goes to a simple tutorial.
You may also be thinking of displacement mapping.
Of the techniques mentioned in other people's answers:
Bump mapping is the easiest to achieve, but doesn't do any occlusion.
Parallax mapping is probably the most complex to achieve, and doesn't work well in all cases.
Displacement mapping requires high-end hardware and drivers, and creates additional geometry.
Actually modeling the polygons is always an option.
It really depends on how close you expect the viewer to be and how prominent the bumps are. If you're flying down the Death Star trench, you'll need to model the bumps or use displacement mapping. If you're a few hundred meters up, bump mapping should suffice.
If you have DX11 class hardware then you could tessellate the polygon and then apply displacement mapping. See http://developer.nvidia.com/node/24. But then it gets a little complicated to get it running and develop something on top of it.
I have been asked to make a 3D sphere and add textures to it so that it looks like the different planets in the Solar System. However, 3ds Max was not mentioned as mandatory.
So, how can I make 3D spheres using OpenGL and add textures to them? Using glutSolidSphere, or am I supposed to use some other method, and how do I apply the textures?
The obvious route would be gluSphere (note, it's glu, not glut) with gluQuadricTexture to get the texturing done.
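Roughly like this (planetTexture here is assumed to be a texture object you have already created and filled with image data):

    // Textured sphere via GLU quadrics.
    GLUquadric *quad = gluNewQuadric();
    gluQuadricNormals(quad, GLU_SMOOTH);   // per-vertex normals for lighting
    gluQuadricTexture(quad, GL_TRUE);      // generate texture coordinates

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, planetTexture);
    gluSphere(quad, 1.0, 48, 48);          // radius, slices, stacks

    gluDeleteQuadric(quad);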
I am not sure whether glutSolidSphere has texture coordinates (as far as I can remember they were either not correct or nonexistent). I remember that this was a great resource to get me started on the subject, though:
http://paulbourke.net/texture_colour/texturemap/
EDIT:
I just remembered that subdividing an icosahedron gives a better sphere, and texture coordinates are easier to implement that way. See here:
http://www.gamedev.net/topic/116312-request-for-help-texture-mapping-a-subdivided-icosahedron/
and
http://www.sulaco.co.za/drawing_icosahedron_tutorial.htm
and
http://student.ulb.ac.be/~claugero/sphere/
I just started learning OpenGL and writing a first-person shooter, but I'm getting horrible framerates when I draw 5000 cubes. So now I'm attempting to perform occlusion and culling using an octree. What I'm confused about is where to cast the rays from. Do I only cast them from the frustum's near plane? It seems like I would miss the part of the frustum that expands. Any help is appreciated.
If 5000 cubes already gives bad framerates, you should consider changing the way you render your cubes.
It's very unclear to us what you are drawing the cubes for. If they are static (i.e. they don't move), then it's best to pack them all into a single vertex buffer. If the cubes are supposed to move, then you should go for instancing. If you're going for a landscape made of cubes like Minecraft, then you should create vertex buffers but only put in the faces of cubes that are actually visible.
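For the static case, the single-buffer approach looks roughly like this (CubeVertex, buildCubeVertices, and cubePositions are hypothetical names: buildCubeVertices would append the 36 triangle vertices of a cube at the given position, and cubePositions is whatever container holds your 5000 cube positions):

    // Pack all static cubes into one VBO once, then draw them with one call.
    struct CubeVertex { float x, y, z, nx, ny, nz; };

    std::vector<CubeVertex> vertices;
    for (const auto &pos : cubePositions)
        buildCubeVertices(vertices, pos);   // hypothetical helper

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 vertices.size() * sizeof(CubeVertex),
                 vertices.data(), GL_STATIC_DRAW);

    // Per frame: one draw call instead of 5000.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexPointer(3, GL_FLOAT, sizeof(CubeVertex), (void *)0);
    glNormalPointer(GL_FLOAT, sizeof(CubeVertex), (void *)(3 * sizeof(float)));
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)vertices.size());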
I'd like to help more, but I'm unsure what you're doing.
I would like to draw voxels using OpenGL, but it doesn't seem to be supported directly. I made a cube-drawing function that uses 24 vertices (4 vertices per face), but the frame rate drops when I draw 2500 cubes. I was hoping there was a better way. Ideally I would just like to send a position, an edge size, and a color to the graphics card. I'm not sure if I can do this by using GLSL to compile instructions as part of the fragment shader or vertex shader.
I searched Google and found out about point sprites and billboard sprites (same thing?). Could those be used as an alternative to drawing a cube more quickly? If I use 6, one for each face, it seems like that would send much less information to the graphics card and hopefully gain me a better frame rate.
Another thought: maybe I can draw multiple cubes using one glDrawElements call?
Maybe there is a better method altogether that I don't know about? Any help is appreciated.
Drawing voxels with cubes is almost always the wrong way to go (the exceptional case is ray-tracing). What you usually want to do is put the data into a 3D texture and render slices depending on camera position. See this page: https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch39.html and you can find other techniques by searching for "volume rendering gpu".
EDIT: When writing the above answer I didn't realize that the OP was most likely interested in how Minecraft does this. For techniques to speed up Minecraft-style rasterization, check out Culling techniques for rendering lots of cubes. Though with recent advances in graphics hardware, rendering Minecraft through raytracing may become a reality.
What you're looking for is called instancing. You could take a look at glDrawElementsInstanced and glDrawArraysInstanced for a couple of possibilities. Note that these were only added as core operations relatively recently (OGL 3.1), but have been available as extensions quite a while longer.
nVidia's OpenGL SDK has an example of instanced drawing in OpenGL.
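Roughly, the per-instance setup looks like this (cubeVAO is assumed to already hold the cube's vertex and index buffers; offsets is a hypothetical flat array of per-cube xyz positions; your vertex shader would add the instanced offset, attribute location 1 here, to each vertex):

    // One cube's geometry, instanced once per voxel via a per-instance offset.
    GLuint instanceVBO;
    glGenBuffers(1, &instanceVBO);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
    glBufferData(GL_ARRAY_BUFFER, offsets.size() * sizeof(float),
                 offsets.data(), GL_STATIC_DRAW);

    glBindVertexArray(cubeVAO);
    glEnableVertexAttribArray(1);                       // location 1 = instance offset
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);
    glVertexAttribDivisor(1, 1);                        // advance once per instance

    // One call draws every cube; the vertex shader adds the offset attribute.
    glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, nullptr,
                            (GLsizei)(offsets.size() / 3));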
First, you really should be looking at OpenGL 3+ with GLSL; this has been the standard for quite some time. Second, most Minecraft-esque implementations generate the mesh on the CPU side. This technique involves looking at all of the block positions and creating a vertex buffer object that holds the triangles of only the exposed faces. The VBO is only regenerated when the voxels change and is persisted between frames. An ideal implementation would also combine coplanar faces that share a texture into larger faces.
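The face-culling part of that meshing step boils down to something like this (SIZE_X/Y/Z are your grid dimensions; isSolid and addFace are hypothetical helpers over your voxel grid and CPU-side vertex array; POS_X etc. are hypothetical face identifiers):

    // Emit geometry only for cube faces whose neighbor is empty.
    for (int x = 0; x < SIZE_X; ++x)
      for (int y = 0; y < SIZE_Y; ++y)
        for (int z = 0; z < SIZE_Z; ++z) {
          if (!isSolid(x, y, z))
            continue;
          if (!isSolid(x + 1, y, z)) addFace(mesh, x, y, z, POS_X);
          if (!isSolid(x - 1, y, z)) addFace(mesh, x, y, z, NEG_X);
          if (!isSolid(x, y + 1, z)) addFace(mesh, x, y, z, POS_Y);
          if (!isSolid(x, y - 1, z)) addFace(mesh, x, y, z, NEG_Y);
          if (!isSolid(x, y, z + 1)) addFace(mesh, x, y, z, POS_Z);
          if (!isSolid(x, y, z - 1)) addFace(mesh, x, y, z, NEG_Z);
        }
    // Upload "mesh" into a VBO once, and only rebuild it when a voxel changes.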
Using stereovision, I am producing depthmaps representing the 3D environment as viewed from a camera. There is one depthmap per "keyframe", associated with a camera position. The goal is to translate those 2D depthmaps into 3D space (and later merge them to reconstruct the whole environment).
What would be the best (most efficient) way to translate those depthmaps into 3D? Each depthmap is 752x480, so the number of triangles can grow quite fast. I would like an automatic system to manage the level of detail of the objects.
My team uses Ogre3D, so it would be great to find a solution with it. What I am looking for is very similar to what Terrain does, except that I want to be able to put the resulting objects wherever I want (translation, rotation), and I think Terrain can't do that.
I am quite new to Ogre3d so please forgive me if there is a straightforward solution I should know. If another tool than Ogre3d is more appropriate to my problem, I'd be happy to learn about it!
It's not clear what you want to do: "merge the depthmaps with the environment"?
Anyway, in your case you seem to be stuck on making them 3D using terrain-heightmap techniques.
In your case, since the depthmap is screen-aligned, you can use a simple screen-space raycasting technique. So you would set up a compositor in Ogre3D that takes that depth map and transforms it per pixel as you want.
Translation and rotation from a depth map may be limited to x/y on screen: as with terrain heightmaps (you cannot have caves using heightmaps), you are missing a dimension.
Not directly related, but it might help: in pure screen space there is a technique called "position reconstruction" that recovers world-space positions for objects, but only if you have a lot of information about the camera used to generate the depthmap you're using, for example: http://www.gamerendering.com/2009/12/07/position-reconstruction/
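If you do have the intrinsics of the camera that produced each 752x480 depthmap, the unprojection itself is straightforward. A rough sketch (fx, fy, cx, cy are the intrinsics, depth is the depthmap in meters, and keyframePose is the keyframe's camera-to-world transform as an Ogre::Matrix4; all of these names are placeholders):

    // Unproject one depthmap into world-space points with the pinhole model.
    std::vector<Ogre::Vector3> points;
    points.reserve(752 * 480);

    for (int v = 0; v < 480; ++v) {
        for (int u = 0; u < 752; ++u) {
            float z = depth[v * 752 + u];                 // depth in meters, <= 0 means invalid
            if (z <= 0.0f)
                continue;
            float x = (u - cx) * z / fx;                  // camera-space X
            float y = (v - cy) * z / fy;                  // camera-space Y
            points.push_back(keyframePose * Ogre::Vector3(x, y, z)); // into world space
        }
    }

    // "points" can then be meshed (e.g. one quad per 2x2 pixel neighborhood)
    // or fed into an Ogre::ManualObject as a point cloud.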