I recently wrote some landscape-rendering code and added diffuse lighting to the scene; to my disappointment, however, there are no shadows. I spent hours looking around the web for ways to get shadows in OpenGL, but every approach I found seemed terribly complicated and very specific to its own demo program.
Are there any simple ways to make shadows?
No. Rasterization is very bad at this (even recent AAA games have noticeable shadow artefacts), but everybody lives with it.
Solutions include (roughly ordered from easiest/poorest to hardest/best):
No shadows at all. Simply account for occlusion by baking darker colors into your vertices or textures (ambient occlusion); tools like xNormal or Blender can compute this for you.
If you want an approximate shadow for a character, a simple flat polygon on the ground with a transparent, blurry texture will do (see the sketch right after this list). See Zelda screenshots, for instance. Even some recent games still use this.
Lightmaps. Static geometry only, but perfect (precomputed) lighting. Reasonably simple to implement, and lots of tools exist.
Shadow volumes, popularised by Carmack. Pixel-perfect, reasonably simple to implement, quite slow. Good for a few objects. No soft shadows.
Shadow maps. A little hard to implement if you have never done any OpenGL before. Hard to get right. Pixelated shadows. Deals with lots of polygons, but doesn't deal with big worlds.
Myriad shadow map variants. Lots of research in recent years. The current best is cascaded shadow maps: difficult, and still hard to make look good, but fast, and it deals with loads of polygons and huge worlds.
Raytraced shadows: this may be the next generation. Nobody really uses it yet except for some research papers. Very complicated; it doesn't do well with dynamic worlds (yet), but huge static scenes are fine. Pixel-perfect or soft shadows, depending on how much spare GPU you have. There are several variants; as of 2014 this still hadn't made it into any game for performance reasons.
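As an aside, here is roughly how little code the "blurry blob" option takes in legacy OpenGL; blobTexture, the quad size, and the ground height are assumptions for illustration:

    #include <GL/gl.h>

    // Draws a dark, blurry quad under a character as a fake shadow.
    // blobTexture is assumed to be a radial dark blob fading to transparent.
    void drawBlobShadow(GLuint blobTexture, float x, float z, float groundY) {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);             // decal: test depth, don't write it
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, blobTexture);
        glColor4f(0.0f, 0.0f, 0.0f, 0.5f); // half-transparent dark blob
        float y = groundY + 0.01f;         // lift slightly to avoid z-fighting
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(x - 0.5f, y, z - 0.5f);
        glTexCoord2f(1, 0); glVertex3f(x + 0.5f, y, z - 0.5f);
        glTexCoord2f(1, 1); glVertex3f(x + 0.5f, y, z + 0.5f);
        glTexCoord2f(0, 1); glVertex3f(x - 0.5f, y, z + 0.5f);
        glEnd();
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }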
So the usual trick is to mix beautiful-but-static-only approaches with dynamic-but-not-that-good approaches. For instance, see my tutorials on lightmapping and shadowmapping.
No.
The easiest way I know of involves using a pregenerated shadow texture that is overlaid onto the terrain using multitexturing. The complicated part is generating this texture, but if you don't use directional lighting, a simple "big blurry dot" is usually better than nothing.
I'm trying to find the optimal shadow mapping technique(s) for use in my game engine. So far I've implemented standard shadow maps with PCF, cascaded shadow maps, and variance shadow maps. However, none of them seem to be providing satisfactory results.
I'm trying to find the optimal shadow mapping method for all situations. I require geometry with correct backfaces, so rendering backfaces into the shadow map is an option. However, I also have a fair bit of low-poly geometry with smooth normals, which results in some really ugly acne even when drawing backfaces.
What are some other techniques that can be used to get nice shadow maps, without severe acne, Peter Panning, or light bleeding, while also not placing any major constraints on the geometry (beyond correct backfaces)?
Unfortunately, there is no general purpose approach that results in artifact-free shadows but I might be able to give you some hints.
Percentage-closer filtering (PCF) is not only a good starting point but is also the basis for contact-hardening shadows (if you are interested, I can give additional information about it). PCF generally yields better results than statistical algorithms like variance shadow mapping (VSM) or exponential shadow mapping (ESM), but it is slower, because its filtering step costs O(n²) per pixel whereas VSM and ESM can use a separable filter in O(n). On the other hand, their light bleeding is a pain and cannot be removed completely.
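For reference, a 3x3 PCF kernel is only a few lines of GLSL; this is a minimal sketch (embedded as a C++ string), assuming shadowMap and shadowCoord come from a standard shadow-map pass:

    // 3x3 PCF: average nine hardware depth comparisons around the sample.
    // This is a fragment of a full shader, not a complete program.
    const char* pcfFragmentSnippet = R"(
        uniform sampler2DShadow shadowMap;
        in vec4 shadowCoord; // position projected into light space

        float pcf3x3() {
            vec2 texel = 1.0 / vec2(textureSize(shadowMap, 0));
            float lit = 0.0;
            for (int x = -1; x <= 1; ++x)
                for (int y = -1; y <= 1; ++y)
                    lit += textureProj(shadowMap, shadowCoord
                         + vec4(vec2(x, y) * texel * shadowCoord.w, 0.0, 0.0));
            return lit / 9.0; // fraction of samples that are lit
        }
    )";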
The best approach for reducing or even removing shadow acne is dual depth layer shadow maps, which are an improvement over mid-point shadow maps. Both algorithms are explained in this paper:
Weiskopf D., Ertl T. (2003): "Shadow Mapping Based on Dual Depth Layers".
The technique requires an additional rendering pass of the scene for each shadow map (which can be quite expensive when you are using cascaded shadow maps) but yields extremely good results in almost all scenarios. The depth peeling might be implemented in a faster way on modern graphics cards using geometry shaders, which enable rendering two depth layers in one pass. Unfortunately, this does not come for free, and I have no experience with this technique yet. I have not found any technique that removes Peter Panning and shadow acne as well as dual depth layer shadow mapping does.
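To make the midpoint idea concrete, here is a rough sketch of the combination pass, using front-face-culled and back-face-culled depth renders as a cheap stand-in for proper depth peeling (texture names are assumptions, not code from the paper):

    // Write the average of the first two depth layers; receivers are then
    // compared against this midpoint surface instead of the front depth.
    const char* midpointFragmentSnippet = R"(
        #version 330 core
        uniform sampler2D frontDepth; // pass 1: glCullFace(GL_BACK)
        uniform sampler2D backDepth;  // pass 2: glCullFace(GL_FRONT)
        in vec2 uv;
        out float midpointZ;          // render target, e.g. GL_R32F

        void main() {
            float zf = texture(frontDepth, uv).r;
            float zb = texture(backDepth, uv).r;
            midpointZ = 0.5 * (zf + zb);
        }
    )";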
If your penumbrae are not that big, you might be able to achieve great results with "normal offset biases", which are explained on a GDC poster:
Holbert Daniel (2011): "Saying goodbye to shadow acne".
The implementation can be a little tricky but removes worst case scenarios with steep slopes. Note that this technique also introduces new artifacts because it "shifts" the shadows a little bit.
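The core of the normal offset trick fits in a couple of vertex shader lines; a minimal sketch (GLSL as a C++ string), where normalOffset is a per-scene tuning value I made up for illustration:

    // Offset the shadow lookup position along the surface normal before
    // projecting into light space; this trades acne for a slight shift.
    const char* normalOffsetVertexSnippet = R"(
        #version 330 core
        uniform mat4 modelViewProj;
        uniform mat4 lightViewProj;
        uniform float normalOffset; // tune per scene / per cascade
        in vec3 position;
        in vec3 normal;
        out vec4 shadowCoord;

        void main() {
            vec3 offsetPos = position + normal * normalOffset;
            shadowCoord = lightViewProj * vec4(offsetPos, 1.0);
            gl_Position = modelViewProj * vec4(position, 1.0);
        }
    )";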
In summary: I vote for PCF with dual depth layer shadow mapping, plus contact hardening implemented via a blocker search and a filtering step. Other techniques like exponential variance shadow maps (EVSM), summed-area variance shadow maps (SAVSM), or screen-space techniques (mip-mapped screen-space shadow maps) all come with a more complex implementation, light bleeding, or edge cases where they simply fail and require a fall-back approach.
For extremely high quality and physically accurate shadows you might consider multi-view shadow mapping (MVSS) which basically renders multiple shadow maps and accumulates their light contribution. They are explained here:
Bavoil, Louis (2011): "Multi-View Soft Shadows".
MVSS is expensive and is only usable in real-time for the main character or the most important object in your scene. It is available in "Batman: Arkham City" for Batman himself.
Good luck! :-)
I'm working with an environmental reflection in OpenGL+GLSL.
I want to reflect the environment around an object in the most accurate way possible.
I found basically two ways to do this: one is called spherical mapping and the other is cube mapping.
They differ in the shader code, but I really don't understand what the difference between them is.
Obviously, for the cubemapping shader I have six images mapped onto a cube that the fragment shader uses to look up the right texel, while for my spherical mapping shader I have a single image, either distorted with photo-retouching software or obtained by photographing a specular reflective sphere.
The drawbacks of spherical mapping seem to be that the camera (and the person holding it) always shows up in the image, and that the sampling is non-uniform. What is meant by this last statement? And what is meant by the "black-hole" effect in spherical mapping?
I would like to find an interactive demonstration of the differences and drawbacks of these two approaches; it seems like cubemapping is the best, but I don't know why.
Which of the two is best, especially for a real-time simulation with head tracking, in your opinion?
Spheremaps are usually for small, low-quality stuff.
The drawbacks of spherical mapping seems to be that the camera (and the person which holds it) is always showed in the image
We're talking about computer graphics here; there is no real camera and no real person. Try image-googling "spheremap": you won't see anybody in the pictures.
the sampling is non-uniform
This means that the center of the spheremap has many pixels for a relatively small area, while near the border, you have few pixels for a relatively large area.
Cubemaps are almost always better: you can generate them at runtime easily, they are faster for the hardware to sample, and even though you have six textures instead of one, you can use a lower resolution and still get the same quality.
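For comparison, the whole cubemap lookup in the fragment shader boils down to one reflect() and one texture() call; a minimal sketch, with uniform and varying names being assumptions:

    // Reflect the view ray about the normal, then sample the cube directly;
    // no projection math is needed, unlike a spheremap lookup.
    const char* cubeReflectFragment = R"(
        #version 330 core
        uniform samplerCube envMap;
        uniform vec3 cameraPos;
        in vec3 worldPos;
        in vec3 worldNormal;
        out vec4 color;

        void main() {
            vec3 I = normalize(worldPos - cameraPos);    // incident view ray
            vec3 R = reflect(I, normalize(worldNormal)); // reflected direction
            color = texture(envMap, R);
        }
    )";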
I'm trying, in JOGL, to pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen the post "OpenGL GL_SELECT or manual collision detection?" and found it helpful. However, it can take my program up to several minutes to render the full set, so I don't think drawing everything twice (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batched rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking feasible?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth using anyway?
I've also investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be defined easily. I don't like color picking or similar techniques very much, because they seem too impractical for most situations. I never understood why so many tutorials aimed at people new to OpenGL (or even to programming) focus on picking techniques that are useless for nearly everything. For example: try to get the point you clicked on in a heightmap: not possible. Try to locate the exact mesh you clicked on within a model: impractical.
If you have a large number of quads, you will probably need good spatial partitioning, or at least (better: also) a scene graph. OK, you don't strictly need this, but it helps A LOT. Look at some tutorials on scene graphs for further information; it's a good thing to know when you start with 3D programming, because you learn a lot of concepts and not just OpenGL code.
So what do you do now to start with some picking? Unproject the position of your mouse cursor (IIRC with gluUnProject(...)), which applies the inverse of your modelview and projection matrices. With the orientation of your camera you can then cast a ray into your spatial structure (or into the scene graph that holds it) and check for collisions with your quads. I currently have no link, but if you search for "inverse modelview matrix" you should find some pages that explain this better and in more detail than would be practical here.
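A minimal sketch of that unprojection step with GLU; everything except gluUnProject itself (names, parameters) is my own invention for illustration:

    #include <GL/glu.h>
    #include <cmath>

    // Builds a world-space picking ray from a mouse position by unprojecting
    // the cursor onto the near and far planes. Note GL flips the y axis
    // relative to window coordinates.
    void mouseRay(double mx, double my,
                  const double model[16], const double proj[16],
                  const int viewport[4],
                  double origin[3], double dir[3]) {
        double y = viewport[3] - my;
        double nx, ny, nz, fx, fy, fz;
        gluUnProject(mx, y, 0.0, model, proj, viewport, &nx, &ny, &nz);
        gluUnProject(mx, y, 1.0, model, proj, viewport, &fx, &fy, &fz);
        origin[0] = nx; origin[1] = ny; origin[2] = nz;
        double dx = fx - nx, dy = fy - ny, dz = fz - nz;
        double len = std::sqrt(dx * dx + dy * dy + dz * dz);
        dir[0] = dx / len; dir[1] = dy / len; dir[2] = dz / len;
    }

Intersect this ray with your spatial structure and you have your pick.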
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific), you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
I've never faced this problem myself, but in my opinion CPU-based picking is the best way to try.
If you have a large set of quads, maybe you can group the quads spatially to avoid testing all of them. For example, you can group the quads into two boxes and first test which box you hit; then you only need to test the quads inside that box.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can be bad because it breaks the asynchronous pipelining between the GPU and the CPU).
Another possibility seems to be using transform feedback and a geometry shader that does the scissor test. The geometry shader can discard all faces that do not contain the mouse position, so the transform feedback buffer then contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance ID to the buffer).
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
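For what it's worth, here is a rough, untested sketch of what such a picking geometry shader could look like; mouseNdc and primId are assumptions, and the host side still has to register pickId/pickDepth with glTransformFeedbackVaryings:

    // Emits one point per triangle that contains the cursor; everything
    // else is discarded, so the feedback buffer holds only hovered faces.
    const char* pickGeometrySnippet = R"(
        #version 330 core
        layout(triangles) in;
        layout(points, max_vertices = 1) out;
        uniform vec2 mouseNdc; // cursor in normalized device coordinates
        flat in int primId[];  // id passed through from the vertex shader
        out float pickId;
        out float pickDepth;

        float edge(vec2 a, vec2 b, vec2 p) {
            return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
        }

        void main() {
            vec2 p0 = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
            vec2 p1 = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;
            vec2 p2 = gl_in[2].gl_Position.xy / gl_in[2].gl_Position.w;
            float e0 = edge(p0, p1, mouseNdc);
            float e1 = edge(p1, p2, mouseNdc);
            float e2 = edge(p2, p0, mouseNdc);
            // inside test that works for either winding order
            if ((e0 >= 0.0 && e1 >= 0.0 && e2 >= 0.0) ||
                (e0 <= 0.0 && e1 <= 0.0 && e2 <= 0.0)) {
                pickId = float(primId[0]);
                pickDepth = gl_in[0].gl_Position.z / gl_in[0].gl_Position.w;
                EmitVertex();
                EndPrimitive();
            }
        }
    )";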
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example there of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.
Just learning the basics of OpenGL for a class and was looking for something challenging and interesting to try and draw. Any suggestions?
Aiming for photorealism (just plain models, lights, materials, textures, etc.) is one thing, but what is even more interesting in my opinion is the demoscene and all kinds of non-photorealistic effects. The idea of a demo is to program some nice animated graphics that automatically change from one effect to another or tell some sort of a story, with background music. Here you can find some videos. Just take a look at what others have done and use your imagination. That's the most fun part of 3D programming, in my opinion. Of course, what you program at first will be extremely simple compared to those videos on YouTube, but everyone has to start somewhere. Simple also doesn't have to mean ugly. Some random suggestions:
mathematical shapes with sin(), cos(), etc. (see the small sketch after this list)
alpha blending, especially additive blending (glBlendFunc(GL_ONE, GL_ONE);)
terrain rendering
read 3d model data from a file. (Wavefront .OBJ is a relatively simple one)
feedback effects with glCopyTexImage2D, which copies pixels from screen to a texture (in real life you shouldn't use this because it's too slow, but when learning the basics it's ok)
etc...
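As a small taste of the first suggestion, here is a tiny immediate-mode sketch that draws an animated flower-like curve from nothing but sin() and cos():

    #include <GL/gl.h>
    #include <cmath>

    // Draws a five-petal "rose" curve whose radius wobbles over time.
    void drawRose(float time) {
        glBegin(GL_LINE_LOOP);
        for (int i = 0; i < 360; ++i) {
            float a = i * 3.14159265f / 180.0f;
            float r = 0.5f + 0.3f * std::sin(5.0f * a + time);
            glVertex2f(r * std::cos(a), r * std::sin(a));
        }
        glEnd();
    }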
You might consider building an OBJ viewer. You will get the experience you're looking for, and it's a pretty good project for a beginning 3D graphics programmer, in terms of difficulty.
I believe GLUT has built-in shapes, such as a teapot, that you can call and have drawn for you. For starters, I'd stick with easy shapes like squares, circles, and cones. Try drawing a wireframe model first, since that's the easiest, using quad strips, triangles, or just polylines. After you've got that down, learn to set up lighting and materials so you can draw a solid model.
At school we had a very interesting assignment to get started with OpenGL, which I will share. The long-term goal was to model a living room, so you basically have to draw:
A table.
Two chairs.
A carpet.
A sofa
Some stuff that you might find interesting to add on the table, for instance a TV!
When you have all the things done, try to polish the scene a little bit by adding some lighting effects!
Hint: for all the objects you simply need to start with a basic rectangle. Then you can construct your scene step by step using translations/rotations.
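To illustrate the hint, here is one way a table could be assembled from a single scaled cube; glutSolidCube is an assumption, any box-drawing helper works:

    #include <GL/gl.h>
    #include <GL/glut.h>

    // Every part is the same unit cube, just translated and scaled.
    void drawBox(float x, float y, float z, float sx, float sy, float sz) {
        glPushMatrix();
        glTranslatef(x, y, z);
        glScalef(sx, sy, sz);
        glutSolidCube(1.0);
        glPopMatrix();
    }

    void drawTable() {
        drawBox(0.0f, 1.0f, 0.0f, 2.0f, 0.1f, 1.0f); // table top
        for (int i = 0; i < 4; ++i) {                // four legs
            float lx = (i % 2 != 0) ? 0.9f : -0.9f;
            float lz = (i / 2 != 0) ? 0.4f : -0.4f;
            drawBox(lx, 0.5f, lz, 0.1f, 1.0f, 0.1f);
        }
    }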
Say I have some mesh (for example a sphere) in the center of a room full of cubes, plus one light source. How can I do fast and easy shadow casting in OpenGL, using "standard" (fixed-function) features only? Note: the result must contain both the cube and the sphere shadows.
If you can generate a silhouette of the sphere, then you could use shadow volumes. nVidia hardware has also supported fixed-function shadow mapping for a fair while.
Shadow volumes have the disadvantage of very high fill-rate requirements. Shadow maps can be better, but they require an extra pass.
If you only need shadows on a single plane, it may well be easier to just project the object onto that plane.
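A sketch of that single-plane projection using the classic planar shadow matrix (entirely fixed-function friendly; the formula is the standard dot*I - light*plane construction from old projected-shadow demos, the names are mine):

    #include <GL/gl.h>

    // Builds a matrix that flattens geometry onto the plane
    // n.x*x + n.y*y + n.z*z + d = 0 as seen from light (w=1 for a point
    // light, w=0 for directional). Stored column-major for glMultMatrixf.
    void shadowMatrix(float m[16], const float plane[4], const float light[4]) {
        float dot = plane[0] * light[0] + plane[1] * light[1]
                  + plane[2] * light[2] + plane[3] * light[3];
        for (int col = 0; col < 4; ++col)
            for (int row = 0; row < 4; ++row)
                m[col * 4 + row] = ((row == col) ? dot : 0.0f)
                                 - light[row] * plane[col];
    }

    // Usage: glPushMatrix(); glMultMatrixf(m); redraw the object in a dark,
    // blended color; glPopMatrix();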
There is no fast and easy way. There are lots of different techniques that each have their own pros and cons. You can look at a project I host on GitHub that uses very simple code to create a shadow using the shadow volume technique (http://iuiz.github.com/VolumeShadow/). It is written in Java, but it should not be hard to port it to any other language.
The most important ways to create shadows are the so-called "shadow mapping" method, where you render your scene (with the camera at the light source, directed at each shadow-casting object) to a texture, and the shadow volume method (made famous by Doom 3).
I've found one way using stencil buffers. After being a little confused for a while, I finally got the idea: with this, the hardest part is looping through each light source and projecting all the scene objects. It looks prettier than texture shadowing and runs faster than volumetric shadows. Here and here are some resources which helped me understand the matrix multiplication step (it confused me a bit when I was looking through the dino demo). For me, this method is the easiest to understand and use. The only question left to solve is how to calculate the projection matrix.
This method can also be tweaked a bit using textures, as shown here.
Thanks everybody! =)