Simplest way to "decal" in GLSL

This is all done in C++ with GLSL...
I have mesh A. I project another (flat) mesh onto mesh A, and it takes the shape of the part of Mesh A it collides with. This is how I'm putting decals onto a model.
Now, I used a separate shader to render the decal's pixels. Without going too deep into it, I prevent z-fighting and put the decal on top by simply multiplying the final vertex position's "w" by 1.0005f. This seems to work with everything I throw at it.
The only downside is, when very, very, very zoomed in, you can see that the decal is hovering above mesh A (because it is). Is there some better way for me to do this decal deal? Rendering it as a multitexture is not an option because of the application involved; it has to be a separate mesh. Is there some better method of adjusting w than a multiply like this?

Use glPolygonOffset (see the official documentation). This applies an offset to the depth value only, not to the actual position of the vertex.
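A minimal sketch of that approach, assuming the decal gets its own pass right after mesh A (drawMeshA and drawDecal stand in for your own draw calls):
// Pass 1: draw mesh A normally.
drawMeshA();
// Pass 2: draw the decal with a depth offset instead of scaling w.
// Negative values pull the decal's depth towards the viewer, so it wins
// the depth test against the coplanar mesh A fragments underneath it.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);
drawDecal();
glDisable(GL_POLYGON_OFFSET_FILL);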

Related

How to put 2D frame-by-frame animation on 3d model (hybrid animation)

I'd like to do a cartoony 3D character, where the facial features are flat-drawn and animated in 2D. Sort of like the Bubble Guppies characters.
I'm struggling with finding a good method to do it. I'm using Libgdx, but I think the potential methodologies could apply to any game engine.
Here are ideas I thought of, but each has drawbacks. Is there a way this is commonly done? I was just playing a low-budget Wii game with my kids (a Nickelodeon dancing game) that uses this type of animation for the faces.
Ideas:
UV animation - Is there a way to set up a game model (FBX format) so that certain UVs are stored in various skins? Then the UVs could jump around to various places in a sprite map.
Projected face - This idea is convoluted. Use a projection of a texture onto the model with a vertex shader uniform that shifts the UVs of the projected texture around. So basically, you'd need a projection matrix that's set up to move the face projection around with the model. But you'd need enough padding around the face frame sprites to keep the rest of the model clear of other parts of the sprite map. And this results in a complicated fragment shader that would not be great for mobile.
Move flat 3D decal with model - Separately show a 3D decal that's lined up with the model and batched as a separate mesh in the game. The decal could just be a quad where you change the UV attributes of the vertices on each frame of animation. However, this method won't wrap around the curvature of a face. Maybe it could be broken down into separate decals for each eye and the mouth, but it still wouldn't look great, and it would require creating a separate file to go with each model to define where the decals go.
Separate bone for each frame of animation - Model a duplicate face in the mesh for every frame of animation, and give each a unique bone. Animate the face by toggling bone scales between zero and one. This idea quickly breaks down if there are more than a few frames of animation.
Update part of skin each frame - Copy the skin into an FBO. Draw the latest frame of animation into the part of the FBO color texture that contains the face. Downsides to this method are that you'd need a separate copy of the texture in memory for every instance of the model, and the FBO would have to either do a buffer restore every frame (costly) or you'd have to redraw the entire skin into the FBO each frame (also costly).
I have other ideas that are considerably more difficult than these. It feels like there must be an easier way.
Edit:
One more idea... Uniform UV offset and vertex colors - This method would use vertex colors since they are easily supported in all game engines and modeling packages, but in many cases are unused. In the texture, create a strip of the frames of animation. Set up the face UV's for the first frame. Color all vertices with Alpha 0 except the face vertices, which can be colored Alpha 1. Then pass a UV face offset uniform to the vertex shader, and multiply it by a step function on the vertex colors before adding it to the UVs. This avoids the downsides of all the above methods: everything could be wrapped into one texture shared by all instances of the model, and there would be no two-pass pixels on the model except possibly where the face is. The downside here is a heftier model (four extra attributes per vertex, although perhaps the color could be baked down to a single byte).
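A rough GLSL vertex-shader sketch of that idea (attribute and uniform names are made up for illustration and not tied to any particular engine):
// Vertex colors mark the face: alpha = 1 on face vertices, 0 everywhere else.
attribute vec3 a_position;
attribute vec2 a_texCoord0;
attribute vec4 a_color;
uniform mat4 u_mvpMatrix;
uniform vec2 u_faceUVOffset;   // offset into the strip of face animation frames
varying vec2 v_texCoord;

void main() {
    // step(0.5, a_color.a) is 1.0 only for face vertices, so only the
    // face UVs get shifted along the animation strip.
    v_texCoord = a_texCoord0 + u_faceUVOffset * step(0.5, a_color.a);
    gl_Position = u_mvpMatrix * vec4(a_position, 1.0);
}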
Your shader could receive two textures, one for the body and one for the face. The face texture would be transparent so you could overlay it on top of the body texture. Then you just need to send a different face texture based on the animation.
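A hedged sketch of what such a fragment shader might look like (sampler and varying names are illustrative):
// Fragment shader: overlay a transparent face texture on the body texture.
varying vec2 v_texCoord;
uniform sampler2D u_bodyTexture;
uniform sampler2D u_faceTexture; // swapped out per animation frame

void main() {
    vec4 body = texture2D(u_bodyTexture, v_texCoord);
    vec4 face = texture2D(u_faceTexture, v_texCoord);
    // Where the face texture is transparent, the body shows through.
    gl_FragColor = mix(body, face, face.a);
}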
I am struggling with the same problem, implementing a 2D animation on a background billboard in my 3D scene.
I believe that using Decals is the simplest solution, and implementing the animation is as easy as updating the decal's TextureRegion according to an Animation object:
TextureRegion frame = animation.getKeyFrame(currentFrameTime, true);
decal.setTextureRegion(frame);
I guess the real problem in your case is positioning the decal inside the scene.
One solution could be using your 3D modeling software for modeling a "phantom" mesh that will store the position of the decal.
The "phantom" mesh will not be rendered with all the other 3D elements; instead it will be used to determine the position of the decal's vertices. The only thing you'll need to do is copy the "phantom" mesh's vertex positions and assign them to the decal.
I haven't gotten around to implementing this solution yet, but in theory it should be relatively easy to do.
Hope this idea helps you, and I would appreciate you sharing other solutions/code to this problem if you find any.

OpenGL: Specify what value gets written to the depth buffer?

As I understand the depth buffer, it calculates a fragment's relation to the far/near clipping planes and deduces the depth value from that before writing it. However, this isn't what I want, as I don't use the clipping planes or the third dimension at all. Even so, depth testing would still be immensely helpful to me.
My question is: is there any way to manually specify what value gets written to the depth buffer for all geometry rendered after you set it (that passes the alpha test), regardless of its true depth in the scene? The stencil buffer works this way, with the value specified as the second argument of glStencilFunc(), so I thought glDepthFunc() might behave similarly, but I was mistaken.
The main reason I need depth testing in a 2D game, is because my lighting model uses stencils a great deal. Objects closer to the camera than the light must be rendered first, for shadow stencils to be properly laid out, with the lights drawn after that. It's a pretty tricky draw order, but basically it just means lights have to be drawn after the scene is finished drawing, is all.
The OpenGL version I'm using is 2.0, though I'm trying to avoid using a fragment shader if possible.
It seems you are talking about a technique called parallax scrolling. You don't need to write to the depth buffer manually; just enable it, and then you can use a layer approach and specify the Z manually for each object. Then render the scene front to back (sorting).
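A rough fixed-function sketch of that layer approach (screenWidth, screenHeight and the quad coordinates are illustrative; the exact Z-to-depth mapping depends on your projection):
// One-time setup: a 2D orthographic projection with a usable depth range.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, screenWidth, screenHeight, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);

// Each frame: clear color and depth, then give every layer its own constant Z.
// With this glOrtho call, larger Z is nearer the camera, so a foreground
// sprite at Z = 0.9 occludes background geometry drawn at Z = -0.9.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBegin(GL_QUADS);
glVertex3f(100.0f, 100.0f, 0.9f);
glVertex3f(200.0f, 100.0f, 0.9f);
glVertex3f(200.0f, 200.0f, 0.9f);
glVertex3f(100.0f, 200.0f, 0.9f);
glEnd();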

Multi-pass shading using render-to-texture

I'm trying to implement a multi-pass rendering method using OpenSceneGraph. However, I'm not entirely certain whether my problem is theoretical or due to a lack of applied knowledge of OSG. Thus far, I've successfully implemented multi-pass shading by rendering to a texture using an orthogonal projection, but I cannot seem to make a perspective projection work.
It may be that I don't quite understand how to implement multi-pass shading. Of course, I have to pre-render the entire scene with the multi-pass shaders to a texture, then use the texture in the final render. However, I'm not talking about creating a separate texture for each object in the scene, but effectively capturing a screenshot of the entire prerendered scene. Then, from that texture alone, applying the rendered effects to the individual geometries.
I assume this means I would have to do an extra conversion of the vertex coordinates for each geometry in the vertex shader. That is, after computing:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
I would need to go a step further and calculate the vertex's screen coordinates in order to map the vertices correctly (again, given that the texture consists of an entire screen shot of the scene).
If I am correct, then I must be able to pre-render the scene in a perspective view identical to the view used in the final render, rather than an orthogonal view. This is where I have trouble. I can make an orthogonal view do what I want, but not the perspective view.
Am I correct in my approach? The only other approach I can imagine is to render everything to a screen-filling quad (in effect, the same thing as converting to screen coordinates), but that doesn't alleviate the need to use a perspective projection in the pre-render stage.
Thoughts? Links??
Edit: I should also point out that in my successful attempts, I used a fragment shader only. The perspective projection worked but, of course, the screen-aligned quad I was using was offset rather than centered. I added a pass-through vertex shader and everything went blank.
As it turns out, my approach was correct. It's especially nice as it avoids having to add another camera to my scene graph to render the final output - I can simply use the main camera. Unfortunately, it means that all of my output textures are rendered at the screen resolution, rather than a resolution appropriate to the size of the object. That is, if my screen is 1024 x 1024, then so is the output texture, one for each pre-render camera in the graph. Not exactly efficient, but it'll do for now.
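For reference, a minimal GLSL sketch of the screen-space mapping the question describes (legacy built-ins; u_preRenderTex is an illustrative uniform name):
// Vertex shader: forward the clip-space position so the fragment shader
// can turn it into screen-space texture coordinates.
varying vec4 v_clipPos;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    v_clipPos   = gl_Position;
}

// Fragment shader: perspective divide, then remap [-1,1] NDC to [0,1] UVs
// so the full-screen pre-render texture lines up with the final view.
varying vec4 v_clipPos;
uniform sampler2D u_preRenderTex;
void main() {
    vec2 screenUV = (v_clipPos.xy / v_clipPos.w) * 0.5 + 0.5;
    gl_FragColor  = texture2D(u_preRenderTex, screenUV);
}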

Lens shader / Image distortion

Well, I have a 3D scene, currently with just a quad (a painting) with a texture on it. Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens, distorting the picture "below" it.
How would one achieve this, preferably with a shader and some pixel buffers?
Here is an example I found a while ago which does something very similar to what you want. http://www.paulsprojects.net/opengl/refract/refract.html
You will probably have to modify the code a bit to achieve the inversion effect you want, but this will get you started on the right track.
Edit:
By the way, you will not need the second image (the inverted small rectangle). Just use a single background image and the shader.
Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens:
This is a tricky one. First, one must understand that OpenGL follows a so-called localized rendering model (it is a rasterizer), which means, in layman's terms, that it works like pencils and brushes on a canvas.
It thus works in stark contrast to global scene representation renderers like raytracers. A raytracer operates on a fully defined scene; because of that, it can do things like refraction trivially.
Indeed, one must treat OpenGL like an artist treats their tools. So any optical "effect" you want to create must be implemented by mastering the various drawing techniques possible with the tools OpenGL offers. To create the effect you desire, you must implement a multistage process.
For refraction you first render the scene as "seen" by the refracting object in all directions (you create a dynamic cube map), then you use this cube map as input data for rasterizing the "refracting" object, where a shader is used to determine the refracted direction of a ray of light hitting the rasterized fragments.
BTW: What holds for refraction holds for any other such light-interaction effect. Shadows are just as non-trivial as refraction in OpenGL.
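A minimal fragment-shader sketch of the cube-map lookup step described above (the dynamic cube map, the interpolated vectors, and u_eta are assumed to be supplied by your own setup):
// Fragment shader: sample the dynamically rendered cube map along the
// refracted eye ray to fake refraction through the lens quad.
varying vec3 v_normal;     // interpolated surface normal
varying vec3 v_viewDir;    // direction from the eye to the fragment
uniform samplerCube u_envCubeMap;
uniform float u_eta;       // ratio of indices of refraction, e.g. 1.0 / 1.5

void main() {
    vec3 r = refract(normalize(v_viewDir), normalize(v_normal), u_eta);
    gl_FragColor = textureCube(u_envCubeMap, r);
}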

OpenGL: Using shaders to create vertex lighting by using pre-calculated colormap?

First of all, I have very little knowledge of what shaders can do, and I am very interested in doing vertex lighting. I am attempting to use a 3D colormap which would be used to calculate the vertex color at that position in the world, and also interpolate the color by using the nearby colors from the colormap.
I can't use typical OpenGL lighting because it's probably too slow and there are a lot of lights I need to render. I am going to "render" the lights into the colormap first, and then I could either manually map every vertex drawn to the corresponding color from the colormap...
...or I could somehow automate this process, so I wouldn't have to change the color values of the vertices myself; perhaps a shader could do this for me?
Question is... is this possible, and if it is, what do I need to know to make it possible?
Edit: Note that I also need to update the lightmap efficiently, without caring about the size of the lightmap, so the update should be done only on the specific part of the lightmap I want to update.
It almost sounds like what you want to do is render the lights to your color map, then use your color map as a texture, but instead of decal mode set it to modulate mode, so it's multiplied with the existing color instead of just replacing it.
That is different in one way, though: instead of just affecting the vertices, it'll map to the individual fragments (pixels, in essence).
Edit: What I had in mind wasn't a 3D texture -- it was a cube map. Basically, create a virtual cube surrounding everything in your "world". Create a 2D texture for each face of that cube. Render your coloring to the cube map. Then, to color a vertex you (virtually) extend a ray outward from the center, through the vertex, to the cube. The pixel you hit on the cube map gives you the color of lighting for that vertex.
Updating should be relatively efficient -- you have normal 2D textures for the top, bottom, front, etc., and you update them as needed.
If you can't use the fixed-function pipeline functionality, the best way to do per-vertex lighting is to do all the lighting calculations per vertex in the vertex shader; when you then pass the result on to the fragment shader, it will be correctly interpolated across the face.
Another way to deal with performance issues when using a lot of light sources is to use deferred rendering, as it will only do lighting calculations on the geometry that is actually visible.
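A minimal sketch of the per-vertex approach from the first paragraph, using legacy GLSL built-ins and a single illustrative point light:
// Vertex shader: one point light evaluated per vertex; the resulting
// color is interpolated across the face by the rasterizer.
varying vec4 v_color;
uniform vec3 u_lightPosEye;   // light position in eye space (illustrative)
uniform vec3 u_lightColor;

void main() {
    vec3 posEye    = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 normalEye = normalize(gl_NormalMatrix * gl_Normal);
    vec3 toLight   = normalize(u_lightPosEye - posEye);
    float diffuse  = max(dot(normalEye, toLight), 0.0);
    v_color        = vec4(u_lightColor * diffuse, 1.0) * gl_Color;
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: just output the interpolated vertex color.
varying vec4 v_color;
void main() { gl_FragColor = v_color; }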
That is possible, but it will not be efficient on current hardware.
You want to render light volumes into a 3D texture. The rasterizer works on a 2D surface, so your volume has to be split along one of the axes. The split can be done in one of the following ways:
Different draw calls for each split
Instanced draw, with layer selection based on gl_InstanceID (will require a geometry shader)
Branch in geometry shader directly from a single draw call
In order to implement it, I would suggest reading the GL 3 specification and looking at examples. It's not going to be easy, nor will it be fast enough for complex scenes.
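As a rough illustration of the first option (one draw call per split), attaching each layer of the 3D texture to an FBO could look something like this; fbo, lightTex3D, numSlices, texWidth, texHeight and drawLightVolumesForSlice are placeholders for your own objects and render pass:
// Assumes a GL 3.x context, a previously created 3D texture `lightTex3D`
// with `numSlices` depth layers, and a framebuffer object `fbo`.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
for (int slice = 0; slice < numSlices; ++slice) {
    // Attach one depth slice of the 3D texture as the color target.
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              lightTex3D, 0 /* mip level */, slice);
    glViewport(0, 0, texWidth, texHeight);
    glClear(GL_COLOR_BUFFER_BIT);
    drawLightVolumesForSlice(slice); // placeholder: rasterize this split
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);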