OpenGL simple 2D light

I am making a simple pixel-art top-down game, and I want to add some simple lights, but I don't know the best way to do that. This image is an example of the kind of light I want to achieve:
http://imgur.com/a/PpYiR
When I googled this task, I only found solutions for this kind of light:
https://www.youtube.com/watch?v=mVlYsGOkkyM
But I need to increase the brightness of part of a texture when the light source is near. How can I do this if I am drawing my textures with GL_QUADS, without UV coordinates?

OK, my response may not totally answer your question, but it will lead you down the right path.
It appears you are using immediate mode; this is now deprecated, and switching to VBOs (vertex buffer objects) will make your life easier.
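For reference, here is a minimal sketch of that switch: a VBO holding one textured quad (positions plus the UV coordinates you will need later). The function name and vertex layout are just illustrative, and a GL context plus an extension loader such as GLEW are assumed to be set up already.

/* Build a VBO for one textured quad (x, y, u, v per vertex),
 * replacing immediate-mode glBegin/glEnd. */
#include <GL/glew.h>

GLuint createQuadVBO(void)
{
    const GLfloat quad[] = {
        /*  x     y     u     v */
        0.0f, 0.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 1.0f, 0.0f,
        1.0f, 1.0f, 1.0f, 1.0f,
        0.0f, 1.0f, 0.0f, 1.0f,
    };
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    return vbo;
}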
The lighting in the picture appears to be hand drawn. You cannot create that style of lighting exactly with even the best algorithm.
You really have two options to solve your problem, and both of them will require texture coordinates and shaders.
You could go with lightmaps, which use a pre-generated texture multiplied over the texture of a quad. This is extremely fast, but requires some sort of tool to generate the lightmaps, which might be a bit over your head at the moment.
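For illustration, the lightmap approach boils down to a single multiply in a fragment shader; here is a minimal sketch, with the sampler and varying names assumed:

#version 120
uniform sampler2D u_diffuse;   // the quad's own texture
uniform sampler2D u_lightmap;  // the pre-generated lightmap
varying vec2 v_texCoord;

void main() {
    vec4 base  = texture2D(u_diffuse, v_texCoord);
    vec3 light = texture2D(u_lightmap, v_texCoord).rgb;
    gl_FragColor = vec4(base.rgb * light, base.a);  // modulate by the lightmap
}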
Instead, learn shader-based lighting. Many tutorials exist for 3D lighting, but the principles remain the same for 2D.
Some Googling will get you the resources you need to implement shaders.
A basic distance-based lighting algorithm will look like this:
gl_FragColor = textureColor * (1.0 / distance(light_position, world_position));
It scales the color of the texel by the inverse of its distance from the light position. There are tutorials that go into more depth on this.
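Fleshed out into a complete fragment shader, it might look like this sketch (the uniform and varying names are assumptions, and the vertex shader is expected to pass through the UV and world position):

#version 120
uniform sampler2D u_texture;  // the quad's texture
uniform vec2 u_lightPos;      // light position in world space
varying vec2 v_texCoord;      // UV from the vertex shader
varying vec2 v_worldPos;      // world-space position from the vertex shader

void main() {
    vec4 texel = texture2D(u_texture, v_texCoord);
    float d = distance(u_lightPos, v_worldPos);
    float attenuation = 1.0 / (1.0 + 0.1 * d);  // +1.0 keeps brightness finite at the light
    gl_FragColor = vec4(texel.rgb * attenuation, texel.a);
}

The extra constant in the denominator is one common way to stop the result from blowing up when a fragment sits right on top of the light.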
If you want to make the lighting look "retro", like in the first image, you can downsample the colors in a post-processing step.
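That post-processing step can be as simple as quantizing each color channel; a sketch, assuming the scene has been rendered to a texture that is then drawn fullscreen:

#version 120
uniform sampler2D u_frame;  // the rendered frame
varying vec2 v_texCoord;

void main() {
    vec3 color = texture2D(u_frame, v_texCoord).rgb;
    float levels = 8.0;  // fewer levels = chunkier, more retro palette
    gl_FragColor = vec4(floor(color * levels) / levels, 1.0);
}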

Related

How to apply displacement mapping with tessellation?

I'm feeling like I'm grasping at straws right now researching this!
My goal is to write a simple water shader. The plan is to use tessellation to implement dynamic LODs and to apply a height map based on fractal noise (ref this paper). Where I am stumbling is where the height map is supposed to be applied. It seems like it should be applied in the vertex shader, but the vertex shader precedes the tessellation shaders.
So I am looking to displace the vertices in the tessellation evaluation shader (OpenGL) using noise; is that the best way to go?
For the noise, I am planning on feeding the vertex positions into the noise function.
It is confusing to me because so far I have not found any examples on the web on the matter. I see people sampling textures in the tessellation shader, but I don't have a texture, only noise. I've also seen someone mention using a geometry shader to displace vertices. What's the widely accepted procedure here?
I'm also wondering about the performance impact, and whether I should instead consider generating a noise texture and interpolating from that.
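For what it's worth, displacing in the tessellation evaluation shader only takes a few lines. Here is a minimal sketch, with a cheap hash standing in for a real fractal-noise function (the GLSL built-in noise functions are unimplemented on most drivers):

#version 400
layout(triangles, equal_spacing, ccw) in;
uniform mat4 u_mvp;

// Cheap hash-based noise, a placeholder for a proper fractal-noise function.
float cheapNoise(vec3 p) {
    return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453);
}

void main() {
    // Interpolate the position from the barycentric tessellation coordinates.
    vec4 pos = gl_TessCoord.x * gl_in[0].gl_Position
             + gl_TessCoord.y * gl_in[1].gl_Position
             + gl_TessCoord.z * gl_in[2].gl_Position;
    // Displace along the up axis, feeding the position itself to the noise.
    pos.y += cheapNoise(pos.xyz);
    gl_Position = u_mvp * pos;
}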

Having trouble finding lighting normals per pixel rather than per face

I'm working on a game with fairly low-polygon models, so when I try to implement directional lighting, I end up getting unsightly edges like the per-face example in this picture (not my picture), where the model's mesh is clearly visible.
I have tried many tutorials on fragment lighting in the hope that I can smooth it out, but I can't figure out how they calculate a normal for each fragment. I just want to know how to calculate a normal for each pixel; I know how to take care of all the other aspects of lighting I need.
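The usual trick is that the per-fragment normal is not computed from scratch: you store smoothed per-vertex normals (each one the average of its adjacent face normals), pass them to the fragment shader as a varying, and renormalize after interpolation. A minimal sketch of the fragment side, with assumed names:

#version 120
varying vec3 v_normal;    // smoothed vertex normal, interpolated per fragment
uniform vec3 u_lightDir;  // normalized direction the light travels

void main() {
    // Interpolation shortens the normal, so renormalize per fragment.
    vec3 N = normalize(v_normal);
    float diffuse = max(dot(N, -u_lightDir), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}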

OpenGL - Lights don't affect (only) glutSolidCone

As seen in the following image, I have a nice rendering with OpenGL using a mesh and OpenGL lights.
However, when I try to depict just the underlying skeleton of the hand, the ball joints are depicted nicely, but the OpenGL lights seem to have no effect on the cone bones, which ruins the 3D perception of them.
Both the spheres and the cones are drawn at the same point in the code (with nothing in between that could cause harm), using GLUT:
glutSolidSphere
glutSolidCone
The exact call to glutSolidCone (please ignore the variables that set the length, etc.) is:
glutSolidCone(2.2, boneLength - 2 * _screenshotWidth_Points, 4, 100 * boneLength);
This has been pending for quite some time now; whenever I have some free time I look into it, but no luck so far. Any hint?
The problem you're running into is that in fixed-function OpenGL (which glutSolidCone uses), illumination is calculated only at the vertices, and the resulting colors are then interpolated across the face. This of course looks bad if there are not enough vertices to sample the light falloff or specular highlights.
The most straightforward solution is to drop in a per-fragment illumination shader program in compatibility-profile mode that uses the built-in variables instead of user-supplied uniforms.
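A minimal sketch of such a shader pair, assuming a single positional light in gl_LightSource[0] and using only the compatibility-profile built-ins:

// Vertex shader: pass eye-space position and normal to the fragment stage.
#version 120
varying vec3 v_normal;
varying vec3 v_eyePos;

void main() {
    v_normal = gl_NormalMatrix * gl_Normal;
    v_eyePos = vec3(gl_ModelViewMatrix * gl_Vertex);
    gl_Position = ftransform();
}

// Fragment shader: evaluate the fixed-function lighting model per fragment.
#version 120
varying vec3 v_normal;
varying vec3 v_eyePos;

void main() {
    vec3 N = normalize(v_normal);
    vec3 L = normalize(vec3(gl_LightSource[0].position) - v_eyePos);
    vec3 E = normalize(-v_eyePos);
    vec3 R = reflect(-L, N);

    vec4 ambient  = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
    vec4 diffuse  = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse
                  * max(dot(N, L), 0.0);
    vec4 specular = gl_FrontMaterial.specular * gl_LightSource[0].specular
                  * pow(max(dot(R, E), 0.0), gl_FrontMaterial.shininess);
    gl_FragColor = ambient + diffuse + specular;
}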

Point rendering in OpenGL and GLSL

Question: How do I render points in OpenGL using GLSL?
Info: a while back I made a gravity simulation in Python and used Blender to do the rendering. It looked something like this. As an exercise, I'm porting it over to OpenGL and OpenCL. I actually already have it working in OpenCL, I think. It wasn't until I had spent a fair bit of time working in OpenCL that I realized it is hard to know whether it is right without being able to see the result. So I started playing around with OpenGL. I followed the OpenGL GLSL tutorial on Wikibooks, which was very informative, but it didn't cover points or particles.
I'm at a loss for where to start. Most tutorials I find are for the default OpenGL pipeline, but I want to do it using GLSL. I'm still very new to all this, so forgive my potential idiocy if the answer is right beneath my nose. What I'm looking for is how to make halos around the points that blend into each other. I have a rough idea of how to do this in the fragment shader, but as far as I'm aware, I can only reach the pixels that are enclosed by polygons created by my points. I'm sure there is a way around this (it would be crazy for there not to be), but in my newbishness I'm clueless. Can someone give me some direction here? Thanks.
I think what you want is to render the particles as GL_POINTS with GL_POINT_SPRITE enabled, then use your fragment shader to either map a texture in the usual way, or generate the halo gradient procedurally.
When you are rendering in GL_POINTS mode, set gl_PointSize in your vertex shader to set the size of the particle. The vec2 variable gl_PointCoord will give you the coordinates of your fragment in the fragment shader.
EDIT: Setting gl_PointSize will only take effect if GL_PROGRAM_POINT_SIZE has been enabled. Alternatively, just use glPointSize to set the same size for all points. Also, as of OpenGL 3.2 (core), the GL_POINT_SPRITE flag has been removed and is effectively always on.
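Putting those pieces together, a sketch of such a shader pair (core profile; the attribute and uniform names are illustrative):

// Vertex shader: position the particle and set its on-screen size.
#version 150
in vec3 a_position;
uniform mat4 u_mvp;

void main() {
    gl_Position = u_mvp * vec4(a_position, 1.0);
    gl_PointSize = 32.0;  // takes effect once GL_PROGRAM_POINT_SIZE is enabled
}

// Fragment shader: a procedural radial halo from gl_PointCoord.
#version 150
out vec4 fragColor;

void main() {
    // gl_PointCoord runs from (0,0) to (1,1) across the point sprite.
    float d = length(gl_PointCoord - vec2(0.5));
    float halo = 1.0 - smoothstep(0.0, 0.5, d);  // bright center, fading edge
    fragColor = vec4(vec3(halo), halo);
}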
Simply draw point sprites (using GL_POINT_SPRITE) and use the blending functions GL_SRC_ALPHA and GL_ONE; the "halos" should then be visible. Blending is what produces the halos, so look for more info on that topic.
You also have to disable depth writes.
Here is a link about that: http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:Particles
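The state setup for that, as a sketch (drawParticles() is a hypothetical stand-in for whatever renders your GL_POINTS batch):

#include <GL/gl.h>

extern void drawParticles(void);  /* hypothetical: renders the GL_POINTS batch */

void drawGlowingParticles(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);  /* additive: overlapping halos brighten */
    glDepthMask(GL_FALSE);              /* no depth writes, so halos don't occlude each other */
    drawParticles();
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}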

OpenGL, applying texture from image to isosurface

I have a program in which I need to apply a 2-dimensional texture (simple image) to a surface generated using the marching-cubes algorithm. I have access to the geometry and can add texture coordinates with relative ease, but the best way to generate the coordinates is eluding me.
Each point in the volume represents a single unit of data, and each unit of data may have different properties. To simplify things, I'm looking at sorting them into "types" and assigning each type a texture (or portion of a single large texture atlas).
My problem is that I have no idea how to generate the appropriate coordinates. I can store the location of the type's texture in the type class and use that, but then seams will be horribly stretched (if two neighboring points use different parts of the atlas). If possible, I'd like to blend the textures at the seams, but I'm not sure of the best way to do that. Blending is optional, but I need to texture the vertices in some fashion. It's possible, but undesirable, to split the geometry into parts for each type, or to duplicate vertices for texturing purposes.
I'd like to avoid using shaders if possible, but if necessary I can use a vertex and/or fragment shader to do the texture blending. If I do use shaders, what would be the most efficient way of telling them which texture or portion to sample? It seems like passing the type through a parameter would be the simplest way, but possibly slow.
My volumes are relatively small, 8-16 points in each dimension (I'm keeping them smaller to speed up generation, but there are many on-screen at a given time). I briefly considered making the isosurface twice the resolution of the volume, so each point has more vertices (8, in theory), which may simplify texturing. It doesn't seem like that would make blending any easier, though.
To build the surfaces, I'm using the Visualization Library for OpenGL and its marching cubes and volume system. I have the geometry generated fine, just need to figure out how to texture it.
Is there a way to do this efficiently, and if so what? If not, does anyone have an idea of a better way to handle texturing a volume?
Edit: Just to note, the texture isn't simply a gradient of colors. It's actually a texture, usually with patterns. Hence the difficulty in mapping it, a gradient would've been trivial.
Edit 2: To help clarify the problem, I'm going to add some examples. They may just confuse things, so consider everything above definite fact and these just as help if they can.
My geometry is in cubes, always (loaded, generated and saved in cubes). If shape influences possible solutions, that's it.
I need to apply textures, consisting of patterns and/or colors (unique ones depending on the point's "type") to the geometry, in a technique similar to the splatting done for terrain (this isn't terrain, however, so I don't know if the same techniques could be used).
Shaders are a quick and easy solution, although I'd like to avoid them if possible, as I mentioned before. Something usable in a fixed-function pipeline is preferable, mostly for the minor increase in compatibility and development time. Since it's only a minor increase, I will go with shaders and multipass rendering if necessary.
Not sure if any other clarification is necessary, but I'll update the question as needed.
On the texture combination part of the question:
Have you looked into 3D textures? As we're talking marching cubes, I should probably immediately say that I'm explicitly not talking about volumetric textures. Instead, you stack all your 2D textures into a 3D texture. You then encode each texture coordinate as the usual 2D position, with the texture it references as the third coordinate. It works best if your textures are generally of the type where, logically, to transition from one pattern to another you have to go through the intermediaries.
An obvious use example is texture mapping to a simple height map — you might have a snow texture on top, a rocky texture below that, a grassy texture below that and a water texture at the bottom. If a vertex that references the water is next to one that references the snow then it is acceptable for the geometry fill to transition through the rock and grass texture.
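Sampling-wise this is cheap: the fragment shader is essentially a one-liner, sketched here with assumed names, where v_texCoord.z selects (and, with linear filtering, blends between) the stacked layers:

#version 120
uniform sampler3D u_textureStack;  // the stacked 2D textures
varying vec3 v_texCoord;           // .xy = usual UV, .z = layer/type

void main() {
    gl_FragColor = texture3D(u_textureStack, v_texCoord);
}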
An alternative is to do it in multiple passes using additive blending. For each texture, draw every face that uses that texture and draw a fade to transparent extending across any faces that switch from one texture to another.
You'll probably want to prep the depth buffer with a complete draw (with the colour masks all set to reject changes to the colour buffer) then switch to a GL_EQUAL depth test and draw again with writing to the depth buffer disabled. Drawing exactly the same geometry through exactly the same transformation should produce exactly the same depth values irrespective of issues of accuracy and precision. Use glPolygonOffset if you have issues.
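As a sketch, the pass structure might look like this (drawGeometry() and the texture list are hypothetical placeholders):

#include <GL/gl.h>

extern void drawGeometry(void);  /* hypothetical: draws the full isosurface */

void renderMultipass(const GLuint *textures, int numTextures)
{
    /* Pass 0: depth only; reject all color writes. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawGeometry();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    /* Later passes: identical geometry and transform, so depths match exactly. */
    glDepthFunc(GL_EQUAL);
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);  /* additive */

    for (int i = 0; i < numTextures; ++i) {
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        drawGeometry();  /* in practice: only the faces using texture i, plus fades */
    }

    /* Restore default state. */
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}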
On the coordinates part:
Popular and easy mappings are cylindrical, box and spherical. Conceptualise that your shape is bounded by a cylinder, box or sphere with a well defined mapping from surface points to texture locations. Then for each vertex in your shape, start at it and follow the normal out until you strike the bounding geometry. Then grab the texture location that would be at that position on the bounding geometry.
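For the spherical case, once the ray from the vertex along its normal has been intersected with the bounding sphere, converting the unit hit direction into latitude/longitude texture coordinates is standard; a sketch:

#include <math.h>

/* Map a unit direction d[3] on the bounding sphere to (u, v) in [0, 1]. */
void dirToUV(const float d[3], float *u, float *v)
{
    *u = 0.5f + atan2f(d[2], d[0]) / (2.0f * (float)M_PI);  /* longitude */
    *v = 0.5f - asinf(d[1]) / (float)M_PI;                  /* latitude  */
}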
I guess there's a potential problem that normals tend not to be brilliant after marching cubes, but I'll wager you know more about that problem than I do.
This is a hard and interesting problem.
The simplest way is to avoid the issue completely by using 3D texture maps, especially if you just want to add some random surface detail to your isosurface geometry. Perlin noise based procedural textures implemented in a shader work very well for this.
The difficult way is to look into various algorithms for conformal texture mapping (also known as conformal surface parametrization), which aim to produce a mapping between 2D texture space and the surface of the 3D geometry which is in some sense optimal (least distorting). This paper has some good pictures. Be aware that the topology of the geometry is very important; it's easy to generate a conformal mapping to map a texture onto a closed surface like a brain, considerably more complex for higher genus objects where it's necessary to introduce cuts/tears/joins.
You might want to try making a UV map of a mesh in a tool like Blender to see how they do it. If I understand your problem, you have a 3D field which defines a solid volume as well as a (continuous) color. You've created a mesh from the volume, and now you need to UV-map the mesh to a 2D texture with texels extracted from the continuous color space. In a tool, you would define "seams" in the 3D mesh along which it could be cut apart so that the whole mesh can be laid flat to make a UV map. There may be aliasing in your texture at the seams, so when you render the mesh it will also be discontinuous at those seams (i.e., a triangle strip can't cross over a seam because it's a discontinuity in the texture).
I don't know any formal methods for flattening the mesh, but you could imagine cutting it along the seams and then treating the whole thing as a spring/constraint system that you drop onto a flat surface. I'm all about solving things the hard way. ;-)
Due to the issues with texturing and some of the constraints I have, I've chosen to write a different algorithm to build the geometry and handle texturing directly in that as it produces surfaces. It's somewhat less smooth than the marching cubes, but allows me to apply the texcoords in a way that works for my project (and is a bit faster).
For anyone interested in texturing marching cubes, or just blending textures, Tommy's answer is a very interesting technique and the links timday posted are excellent resources on flattening meshes for texturing. Thanks to both of them for their answers, hopefully they can be of use to others. :)