Is there a way to partially tint bitmaps in Allegro? - c++

I'm programming a game in Allegro 5 and I'm currently working on my drawing algorithm. After the calculations I end up with two ALLEGRO_BITMAP* objects: one of them is my "scene" with the terrain drawn on it, and the other one is a shadow map.
The scene is simply the textures of the game elements drawn onto the bitmap.
The shadow map is a bitmap, rendered beforehand, that uses black for light and white for shadow.
To draw those bitmaps to the screen I use al_draw_scaled_bitmap(...) with al_set_blender(ALLEGRO_DEST_MINUS_SRC, ALLEGRO_ONE, ALLEGRO_ONE) to subtract the white elements of the shadow map from the scene so the shadows become visible.
The problem is that I want all pixels that are white on the shadow map to be tinted with a world color (recalculated every frame), while black pixels stay unmodified (gray means partially tinted).
The final color per channel could be calculated as p.r * c.r + (1 - p.r), with p the pixel color on the scene and c the world color, and likewise for the green and blue channels.
Is there any way to achieve a partial tinting effect in Allegro 5 (possibly without massive overdraw)?
I thought of using shaders, but I haven't found out how to use them with my ALLEGRO_BITMAP* objects.

Allegro's blenders are fairly simple, so you would need a shader for this. Write a fragment shader, call al_use_shader with it just before you draw your shadow map, and call al_use_shader(NULL) when you're done. The shader can use the default vertex source, which you can get with al_get_default_shader_source, so you only have to write the fragment shader.

Your fragment shader should have a uniform vec4 which is the tint colour. It would also have x and y floats which represent the x and y values of the destination, and a sampler which is the scene bitmap (which should be the target, since you're drawing to it). You'd sample from the sampler at (x, y), where x and y run from 0 to 1 (you can get them as x_pixel / width and y_pixel / height), do your calculation with the input colour, and then store the result in gl_FragColor. (This assumes you're using GLSL/OpenGL; the same principle applies to Direct3D.)

Check out the default shaders in src/shader_source.inc in the Allegro source code for more info on writing shaders.
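For illustration, here is a rough sketch of how that could look on the GLSL path. The uniform names tint_color, scene_tex and target_size, and variables such as scene, shadow_map and world_r, are made up for this example; al_tex and varying_texcoord are the names Allegro's default GLSL shaders use, so verify them against src/shader_source.inc for your version. This is not a drop-in solution, just one way to wire the pieces together:

/* Hypothetical fragment shader: tint the scene by the world colour wherever
   the shadow map is white, leave it alone where the shadow map is black. */
static const char *tint_frag_src =
    "uniform sampler2D al_tex;      /* the shadow map being drawn */\n"
    "uniform sampler2D scene_tex;   /* the scene (the current target) */\n"
    "uniform vec4 tint_color;       /* the per-frame world colour */\n"
    "uniform vec2 target_size;\n"
    "varying vec2 varying_texcoord;\n"
    "void main() {\n"
    "    vec4 shadow = texture2D(al_tex, varying_texcoord);\n"
    "    vec2 dest_uv = gl_FragCoord.xy / target_size; /* may need a y-flip for bitmap targets */\n"
    "    vec4 scene = texture2D(scene_tex, dest_uv);\n"
    "    /* per channel: p * c + (1 - p), as in the question */\n"
    "    vec3 factor = shadow.rgb * tint_color.rgb + (1.0 - shadow.rgb);\n"
    "    gl_FragColor = vec4(scene.rgb * factor, scene.a);\n"
    "}\n";

ALLEGRO_SHADER *shader = al_create_shader(ALLEGRO_SHADER_GLSL);
al_attach_shader_source(shader, ALLEGRO_VERTEX_SHADER,
    al_get_default_shader_source(ALLEGRO_SHADER_GLSL, ALLEGRO_VERTEX_SHADER));
al_attach_shader_source(shader, ALLEGRO_PIXEL_SHADER, tint_frag_src);
al_build_shader(shader);

/* Per frame, with the scene bitmap as the current target: */
al_set_blender(ALLEGRO_ADD, ALLEGRO_ONE, ALLEGRO_ZERO);  /* shader output replaces the scene pixel */
al_use_shader(shader);
float tint[4] = { world_r, world_g, world_b, 1.0f };     /* your per-frame world colour */
float size[2] = { (float)al_get_bitmap_width(scene), (float)al_get_bitmap_height(scene) };
al_set_shader_float_vector("tint_color", 4, tint, 1);
al_set_shader_float_vector("target_size", 2, size, 1);
/* Sampling the bitmap you are drawing to is not guaranteed on every driver; a copy is safer. */
al_set_shader_sampler("scene_tex", scene, 1);
al_draw_scaled_bitmap(shadow_map, 0, 0,
    al_get_bitmap_width(shadow_map), al_get_bitmap_height(shadow_map),
    0, 0, size[0], size[1], 0);
al_use_shader(NULL);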

Related

How to easily change pixel color of a texture in the fragment shader? (OpenGL)

My goal is to make a simple sand simulation using C++ and OpenGL. Right now my plan is to have a 2D array of pixel colors and a texture of the same size. To simulate sand, I will update the array according to the sand coordinates and where the sand has to travel. I'm thinking of sending the 2D array of pixels to the fragment shader and updating the texture there with the colors from the array. The problem is that I can't find a way to change the pixel color on the texture.
So how do I change the color of the pixel at certain coordinates on the texture?
Is doing this even practical? If not, what are the other ways?
Did you mean at certain texture coordinates? You can only change the fragment color output. You also have the texture color available, but you can only use it for the fragment output, for example in a blending operation:
gl_FragColor = tex + vec4(0.5, 0, 0, 1)*tex.a;
Anyway, here's more info about fragment shader outputs: https://www.khronos.org/opengl/wiki/Fragment_Shader#Other_outputs
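To show where that line would live, here is a minimal fragment shader sketch; the names u_sandTexture and v_texcoord are made up for this example and would have to match whatever your program actually passes in:

uniform sampler2D u_sandTexture;   // the texture holding your sand colors
varying vec2 v_texcoord;           // texture coordinate from the vertex shader

void main() {
    vec4 tex = texture2D(u_sandTexture, v_texcoord);
    // tint towards red in proportion to the texel's alpha, as in the line above
    gl_FragColor = tex + vec4(0.5, 0.0, 0.0, 1.0) * tex.a;
}

The texture contents themselves are still updated from the CPU (for example with glTexSubImage2D whenever the sand array changes); the shader only decides what color each fragment ends up with.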

Write to texture GLSL

I want to be able to add one texture to another in a fragment shader. Right now I have projective texturing working and want to expand on that.
Here is what I have so far:
I'm also drawing the view frustum along which the blue/gray test image is projected onto the geometry, which is in constant rotation.
My vertex shader:
ProjTexCoord = ProjectorMatrix * ModelTransform * raw_pos;
My fragment shader:
vec4 diffuse = texture(texture1, vs_st);
vec4 projTexColor = textureProj(texture2, ProjTexCoord);
vec4 shaded = diffuse; // max(intensity * diffuse, ambient); -- no shadows for now
if (ProjTexCoord[0] > 0.0 ||
    ProjTexCoord[1] > 0.0 ||
    ProjTexCoord[0] < ProjTexCoord[2] ||
    ProjTexCoord[1] < ProjTexCoord[2]) {
    diffuse = shaded;
} else if (dot(n, projector_aim) < 0) {
    diffuse = projTexColor;
} else {
    diffuse = shaded;
}
What I want to achieve:
When for example - the user presses a button, I want the blue/gray texture to be written to the gray texture on the sphere and rotate with it. Imagine it as sort of "taking a picture" or painting on top of the sphere so that the blue/gray texture spins with the sphere after a button is pressed.
As the fragment shader operates on each pixel, it should be possible to copy pixel-by-pixel from one texture to the other, but I have no clue how; I might be googling for the wrong things.
How can I achieve this technically? What method is the most versatile? Suggestions are very much appreciated; please let me know if more code is necessary.
Just to be clear, you'd like to bake decals into your sphere's grey texture.
The trouble with writing to the grey texture while drawing another object is that the mapping isn't one-to-one. You may be writing twice or more to the same texel, or a single fragment may need to write to many texels in your grey texture. It may sound attractive since you already have the coordinates of everything in one place, but I wouldn't do it.
I'd start by creating a texture containing the object-space position of each texel in your grey texture. This is key: when you click, you can then render into your grey texture (using an FBO) and still know where each texel is in your current view or your projective texture's view. (There may be edge cases where the same bit of texture appears on multiple triangles.) You can create this position texture by rendering your sphere to the grey texture using the texture coordinates as the vertex positions; you will probably need a floating point texture for this.
So when you click, you render a full-screen quad into your grey texture with alpha blending enabled. Using the object-space positions from that texture, each fragment computes the image-space position within the blue texture's projection. Discard the fragments that fall outside the projection and sample/blend in those that are inside.
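As a rough illustration of that full-screen pass (not the poster's code; all names are placeholders, and the 0.5 scale/bias assumes the projector matrix does not already include it):

uniform sampler2D u_objectSpacePos;   // object-space position of each texel of the grey texture
uniform sampler2D u_decalTexture;     // the blue/grey texture being projected
uniform mat4 u_projector;             // ProjectorMatrix * ModelTransform from the question
varying vec2 v_texcoord;              // texel coordinate within the grey texture

void main() {
    vec3 objPos = texture2D(u_objectSpacePos, v_texcoord).xyz;
    vec4 proj = u_projector * vec4(objPos, 1.0);
    vec2 uv = proj.xy / proj.w * 0.5 + 0.5;
    // outside the projector's frustum: leave the grey texture untouched
    if (proj.w <= 0.0 || any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0))))
        discard;
    // alpha blending (enabled on the FBO) composites this over the existing texel
    gl_FragColor = texture2D(u_decalTexture, uv);
}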
I think you are overcomplicating things.
Writes to textures from inside classic shaders (i.e. not compute shaders) are only available on recent hardware and the very latest OpenGL versions and extensions.
It can be terribly slow if used wrong; it's easy to introduce pipeline stalls and CPU-GPU sync points.
The pixel shader could become a terribly slow, unmaintainable mess of branches and texture fetches.
And all of that work would be done for every single pixel, every single frame.
Solution: KISS
Just update your texture on the CPU side (a short sketch follows below):
Write to the texture, replacing parts of it with the desired content.
The update only needs to happen when something changes, and the data persists until you rewrite it (not once per frame, but once per change request).
The pixel shader stays dead simple: no branching, one texture fetch.
To find the target pixels, implement ray picking (you will need it anyway for any non-trivial interactive 3D graphics program).
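For reference, a minimal sketch of such a CPU-side update in plain OpenGL; the function and buffer layout are assumptions, and the only real requirement is that the pixel data matches the format you pass to glTexSubImage2D:

#include <GL/gl.h>
#include <cstdint>

// Overwrite a rectangular region of an existing RGBA texture with CPU-side pixel data.
// 'pixels' is assumed to be a tightly packed w*h RGBA8 buffer you filled yourself.
void paint_into_texture(GLuint texture, int x, int y, int w, int h, const std::uint8_t *pixels)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);              // our buffer has no row padding
    glTexSubImage2D(GL_TEXTURE_2D, 0,                   // mip level 0
                    x, y, w, h,                         // region to replace
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}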
P.S. "Everything should be made as simple as possible, but not simpler." Albert Einstein.

Why do fragments not necessarily correspond to pixels one-to-one?

Here is a good explanation of what a fragment is:
https://gamedev.stackexchange.com/questions/8977/what-is-a-fragment/8981#8981
But here (and not only here) I have read that "I want to stress the fact that one pixel is not necessarily one fragment, multiple fragment can be combined to make one pixel....". I don't clearly understand what fragments are and why they do not necessarily correspond to pixels one-to-one.
EDIT: When multiple fragments form one pixel, is that only the case when they overlap after projection, or is it because the pixel is bigger than the fragment, so that several neighbouring fragments with the same color have to be put together to form one pixel?
A fragment has a location that can be queried via the built-in gl_FragCoord variable, whose x and y components directly correspond to pixels on your screen. So you could say that a fragment indeed corresponds to a pixel.
However, a fragment outputs a color and stores that color in a color buffer at its coordinates. This does not mean that this color is the actual pixel color shown to the viewer.
Because the fragment shader is run for each object drawn, other objects drawn after your first object may also output a fragment at the same screen coordinate. Taking depth testing, stencil testing and blending into account, the resulting color value in the color buffer might get overwritten and/or merged with new colors.
Think of it like this:
Object 1 gets drawn and writes the color purple at screen coordinate (200, 300).
Object 2 gets drawn and writes the color red at the same coordinate, overwriting it.
Object 3 (blue) is drawn at the same coordinate with 50% transparency, and its color is merged with the red already there.
The final fragment color output is then a combination of red and blue (50% each).
The final resulting pixel could then be a color from a single fragment shader run, a color that is overwritten by many other fragment shader runs, or a combination of colors via blending.
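If it helps, the standard alpha-blending setup that produces the merge in step 3 looks like this in plain OpenGL (not part of the original answer):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// result = src.rgb * src.a + dst.rgb * (1 - src.a)
// e.g. 50% blue over red: 0.5 * (0,0,1) + 0.5 * (1,0,0) = (0.5, 0, 0.5)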
A fragment is not equal to a pixel when multisample anti-aliasing (MSAA), or any other mode that changes the ratio of rendered pixels to screen pixels, is activated.
In the case of 4x MSAA, each screen pixel will be represented by 4 (2x2) fragments in the display buffer. The fragment shader for a particular polygon will only be run once for the screen pixel no matter how many of the fragments are covered by the polygon. Since a polygon may not cover all the fragments within a pixel it will only store color into the fragments it covers. This is repeated for every polygon that may cover one or more of the fragments. Then at the final display all 4 fragments are blended to produce the final screen pixel.

Manually change color of framebuffer

I have a scene containing thousands of little planes. The setup is such that the planes can occlude each other in depth.
The planes are red and green. Now I want to do the following in a shader:
Render all the planes. Whenever a plane is red, subtract 0.5 from the currently bound framebuffer, and if it is green, add 0.5 to the framebuffer.
That way, for each pixel of the framebuffer's texture, I should be able to tell: < 0 => more red planes at this pixel, = 0 => the same amount of red and green, > 0 => more green planes, and also by how much they differ.
This is a very rough simplification of what I actually need to do, but the core is to change a pixel of a texture/framebuffer depending on the values of the planes influencing the current fragment. This should happen in the fragment shader.
So how do I change the values of the framebuffer using GLSL? Using gl_FragColor just sets a new color but does not take into account the color that was set before.
P.S. I'm also going to deactivate depth testing.
The fragment shader cannot read the (old) value from the framebuffer; it just generates a new value to put into the framebuffer. When multiple fragments output to the same pixel (overlapping planes in your example), how those values combine is controlled by the blend function of the pipeline.
What you appear to want can be done by setting a custom blending function. The GL_FUNC_ADD blending function allows adding the old value and new value (with weights); what you want is probably something like:
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE, GL_ONE, GL_ONE, GL_ONE);
This will simply add each output value to the old value in the framebuffer (in all four channels; it's not clear from your question whether you're using a 1-, 3-, or 4-channel framebuffer). Then you just have your fragment shader output 0.5 or -0.5 depending on the plane's color. For this to make sense, you need a framebuffer format that supports values outside the normal [0..1] range, such as GL_RGBA32F or GL_R32F.
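A sketch of how the whole setup could look (names such as countTex, fbo and u_isRed are mine, not from the question); the fragment shader outputs +0.5 or -0.5, and the float attachment plus additive blending accumulates it:

// Fragment shader (GLSL): u_isRed is set per draw call for red vs. green planes.
//     uniform bool u_isRed;
//     void main() {
//         float v = u_isRed ? -0.5 : 0.5;
//         gl_FragColor = vec4(v, v, v, v);
//     }

// C++ side: a float colour attachment so the running sum can leave [0..1].
glBindTexture(GL_TEXTURE_2D, countTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, nullptr);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, countTex, 0);

glDisable(GL_DEPTH_TEST);                             // as stated in the question
glEnable(GL_BLEND);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE, GL_ONE, GL_ONE, GL_ONE);  // new value + old value
// draw all the planes; each covered pixel of countTex accumulates +0.5 or -0.5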

Outline effects in OpenGL

In OpenGL, I can outline objects by drawing the object normally, then drawing it again as a wireframe, using the stencil buffer so the original object is not drawn over. However, this results in outlines with one solid color.
In the image I'm trying to replicate, the pixels of the creature's outline seem to get more transparent the further they are from the creature they outline. How can I achieve a similar effect with OpenGL?
They did not use a wireframe for this. I guess it is heavily shader-related and requires something like the following (a rough sketch of the passes follows below):
Rendering the object to a stencil buffer
Rendering the stencil buffer with a color of your choice while applying a blur
Rendering the model on top of it
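One possible way to structure those passes (using an offscreen colour target rather than a literal stencil attachment; the draw* helpers are placeholders for your own rendering code):

// Pass 1: draw the creature into an offscreen texture as a flat silhouette colour.
glBindFramebuffer(GL_FRAMEBUFFER, silhouetteFbo);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
drawCreatureWithFlatColourShader();               // placeholder: outputs the outline colour everywhere

// Pass 2: blur the silhouette texture and blend it into the main framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuadWithBlurShader(silhouetteTex);  // placeholder: e.g. a Gaussian blur fragment shader

// Pass 3: draw the creature normally on top; the blurred halo remains visible around it.
drawCreatureNormally();                           // placeholder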
I'm late for an answer but I was trying to achieve the same thing and thought I'd share the solution I'm using.
A similar effect can be achieved in a single draw operation with a not so complex shader.
In the fragment shader, you will calculate the color of the fragment based on lighting and texture, giving you the un-highlighted color 'colorA'.
Your second color is the outline color, 'colorB'.
You should obtain the fragment to camera vector, normalize it, then get the dot product of this vector with the fragment's normal.
The fragment-to-camera vector is simply the negation of the fragment's position in eye space.
The colour of the fragment is then:
float cameraFacingPercentage = dot(v_fragmentToCamera, v_normal);
gl_FragColor = colorA * cameraFacingPercentage + colorB * (1.0 - cameraFacingPercentage);
This is the basic idea but you'll have to play around to have more or less of the outline color. Also, the concave parts of your model will also be highlighted but that is also the case in the image posted in the question.
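Putting the pieces together, a minimal fragment shader might look like this (varying and uniform names are my own; colorA is assumed to arrive precomputed from lighting/texturing):

uniform vec4 u_outlineColor;        // 'colorB' in the text above
varying vec3 v_fragmentToCamera;    // negated eye-space position, passed from the vertex shader
varying vec3 v_normal;              // eye-space normal
varying vec4 v_litColor;            // 'colorA': the lit, textured colour

void main() {
    float facing = clamp(dot(normalize(v_fragmentToCamera), normalize(v_normal)), 0.0, 1.0);
    gl_FragColor = v_litColor * facing + u_outlineColor * (1.0 - facing);
}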
Detect edges in GLSL shader using dotprod(view,normal)
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
As far as I can see, the effect in the screenshot, and many "edge" effects in general, are not pure edges as in a comic outline. What is mostly done: you have one pass where you render the object normally, then a pass with only the geometry (no textures) and a GLSL shader. In the fragment shader you take the normal, and where that normal is perpendicular to the camera vector you color the object. The effect is then smoothed by also including areas that are close to perfectly perpendicular.
I would have to look up the exact math, but I think if you take the dot product of the camera vector and the normal you get the amount of "perpendicularness", which you can then run through a function like exp to bias it towards 1.
So (without guarantee that it is correct):
exp(dot(vec3(0, 0, 1), normal));
(Note: everything is in screenspace.)