In a 2D platform game, how can I create a flashlight effect (like in this video at around 0:41: http://www.youtube.com/v/DHbjped9gM8&hl=en_US&start=42)?
I'm using OpenGL for my lighting.
PS: I've seen effects like this a few times, but I really don't know how to create them. I know that I can create new light sources with glEnable, but they always shine in a circular pattern at a 90° angle onto my stage, so that's quite different from what I am looking for.
You have to tell OpenGL that you want a spot light, and what kind of cone you want. Let's guess that a typical flash-light covers around a 30 degree angle. For that you'd use:
glLightf(GL_LIGHTn, GL_SPOT_CUTOFF, 15.0f);
[where GL_LIGHTn would be GL_LIGHT1 for light 1, GL_LIGHT2 for light 2, and so on]
You'll also need to use glLightfv with GL_SPOT_DIRECTION to specify the direction the flashlight is pointing. You may also want to use GL_SPOT_EXPONENT to control how the light falls off toward the edges of the cone. Oh, and you may want one of the GL_*_ATTENUATION parameters (GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, GL_QUADRATIC_ATTENUATION) as well, though a lot of the time that's unnecessary.
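Putting those pieces together, a fixed-function spot-light setup might look like the sketch below. Note that GL_LIGHT1 and the position/direction values are arbitrary assumptions for illustration; this has to run inside a valid GL context, after your modelview matrix is set up:

```c
/* Sketch of a fixed-function spot light; GL_LIGHT1 and the values
   below are illustrative assumptions, not required choices. */
GLfloat pos[] = { 1.0f, 2.0f, 0.0f, 1.0f };  /* w = 1: positional light */
GLfloat dir[] = { 0.0f, -1.0f, 0.0f };       /* pointing straight down */

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT1);
glLightfv(GL_LIGHT1, GL_POSITION, pos);
glLightfv(GL_LIGHT1, GL_SPOT_DIRECTION, dir);
glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 15.0f);   /* half-angle: 30-degree cone */
glLightf(GL_LIGHT1, GL_SPOT_EXPONENT, 8.0f);  /* soften toward the cone edge */
glLightf(GL_LIGHT1, GL_LINEAR_ATTENUATION, 0.05f); /* optional distance falloff */
```

Remember that GL_POSITION and GL_SPOT_DIRECTION are transformed by the current modelview matrix at the moment of the call, so set them after your camera transform.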
If you want to support shadows being cast, that's another, much more complex, subject of its own (probably too much to try to cover in an answer here).
What platform (as in hardware/operating system) are you developing for? As the previous post mentioned, it sounds like you're using fixed-function OpenGL, which is considered "deprecated" today. You might want to look into OpenGL 3.2 and take a fully shader-based approach. That means handling all the light sources yourself, but it also lets you create real-time shadows and other nice effects!
I have been assigned to implement shadows in the project I am working on now. Since we have one light source and our embedded hardware is very old (it doesn't even have a GPU), we thought a stencil-buffer implementation of shadow volumes would fit our app best.
As a first step I want to implement silhouette detection, as described in the link. The link is very good, but it uses a geometry shader to compute the dot products of the normals of the faces adjacent to each edge with the light direction. Since we still use the old fixed pipeline, I won't be able to use that part of the example.
I wanted to ask: is the best way for me to do all these dot products myself, or is there an old OpenGL trick or function call that might help?
I have the following drawing code:
glBegin( GL_QUADS );
glColor3f(0.0f,0.7f,0.7f);
glVertex2f(x1,y1);
glVertex2f(x2,y2);
glVertex2f(x3,y3);
glVertex2f(x4,y4);
glEnd();
The question is: if I apply a rotation of, let's say, 20 degrees, how can I know where these vertices end up?
Because later I need to be able to click on the square and determine whether the point I clicked is actually inside the square or not.
While I hope that nobody has used it in this millennium, there actually was a mechanism for getting transformed vertices in legacy OpenGL. It's called "feedback mode". Explaining it in detail is beyond the scope of an answer. But if you want to see how it worked, you can read up on it in the freely available online version of the Red Book.
The "click and identify" you talk about in your question is often called "picking" or "selection". There are numerous approaches to implement it, and the one to choose depends somewhat on your application. To give you a quick overview of some common approaches:
Selection mode. This is almost as obsolete as feedback mode. It's just as old, but I have the feeling it was much more commonly used, so it might have better support. Still, I wouldn't recommend using it in new code. Again, if you want to learn about it anyway, the explanation can be found in the Red Book.
Modern OpenGL has a feature called Transform Feedback. While its primary purpose is different, it can be used to read back transformed vertices similar to legacy Feedback Mode.
Draw the scene to an off screen buffer, with each object rendered in a different color. Then read back the color at the selection position, and map it to an object. This is a fairly elegant and efficient approach, and can be recommended if it works for your requirements.
Perform the calculations in your own code on the CPU. Instead of transforming all objects, the much more efficient approach is normally to apply the inverse transformation to your pick point (which actually becomes a ray), and intersect it with the geometry.
I am learning OpenGL ES and am planning to make a program with a shape that can be cut down into a smaller shape by dynamically removing parts of it. The constraint is that I must be able to tell whether an object is inside or outside the cut shape.
The options I thought of are:
1) Use a stencil buffer made up of just a black-and-white mask. This way I can also use the same mask for collision detection.
2) The other option is to dynamically change the rendered primitive and then tessellate it. This sounds more complex and is currently my least favorite option. It would also make collision detection more difficult.
PS
I would like the removed part of the shape to fall off in an animation; I am not sure how the choice of method will affect how easy that is. Please share your opinion.
What are your thoughts on this?
Keep in mind that I am new to OpenGL and might be making mistakes without realizing it.
Thanks, Jason
It is generally considered a good idea to issue only write commands to the graphics card. Basically: don't use glGet* commands at all, because the latency of those commands can be quite high.
That said, option 1 is great if you just want to mask out stuff. But since you want the cut part to fall off, it's really not an option on its own, as you have to retrieve/reconstruct the vertices of that part.
I don't quite get the "tessellation" part of your second option, but if your primitive is a polygon and your cuts are straight lines, it is easy to calculate the two polygons after the cut. In fact, the viewport clipping routine in OpenGL does this all the time, and there is a lot of literature, for example http://en.wikipedia.org/wiki/Sutherland-Hodgman
In the long term it is often much better to first build a (non-visual) model of what is going on in the application, and only then visualize it.
A lot of sites/articles say 'batch! batch! batch!'. Can someone explain what 'batching' means with respect to shaders?
Namely, does
changing textures
changing arbitrary shader variables
mean something can't be 'batched'?
The easiest way to summarize it is to try to make as few API calls as you can to draw what you need to draw. Using vertex arrays or VBOs (not even optional in modern APIs), texture atlases and avoiding the need for state changes all contribute to that. It's really amazing how many triangles a modern GPU can draw in the time it takes you to turn around and set up the next drawing call.
There is some good info around and about. From Tom Forsyth:
http://home.comcast.net/~tom_forsyth/blog.wiki.html#%5B%5BRenderstate%20change%20costs%5D%5D
Shawn Hargreaves (on sprite batching):
http://blogs.msdn.com/b/shawnhar/archive/2006/12/14/spritebatch-sorting-part-2.aspx
Christer Ericson:
http://realtimecollisiondetection.net/blog/?p=86
Say I have some mesh (for example, a sphere) in the center of a room full of cubes, with one light source. How can I do fast and easy shadow casting in OpenGL, using only the "standard" (fixed-function) pipeline? Note: the result must include the shadows of both the cubes and the sphere.
If you can generate a silhouette of the sphere, you could use shadow volumes. nVidia hardware has also supported fixed-function shadow mapping for quite a while.
Shadow volumes have the disadvantage of very high fill rate requirements. Shadow maps can be better but require an extra pass.
If you are projecting on to a single plane it may well be easier to just project the object on to a plane.
There is no fast and easy way. There are lots of different techniques, each with its own pros and cons. You can look at a project I host on GitHub that uses very simple code to create a shadow with the shadow volume technique (http://iuiz.github.com/VolumeShadow/). It is written in Java, but it should not be hard to port to any other language.
The most important ways to create shadows are the so-called "shadow mapping" method, where you render your scene to a texture with the camera placed at the light source and pointed toward the shadow-casting objects, and the shadow volume method (made famous by Doom 3).
I've found one way using stencil buffers. After being a little confused for a while, I finally got the idea: the hardest part is looping through each light source and projecting all scene objects. This looks prettier than texture-based shadows and runs faster than shadow volumes. Here and here are some resources that helped me understand the matrix multiplication step (it confused me a bit when I was looking through the dino demo). For me, this method is the easiest to understand and use. The only question left to solve is how to calculate the multiplication matrix.
This method can also be varied a bit by using textures, as shown here.
Thanks everybody! =)