Altering brightness on OpenGL texture - opengl

I would like to increase the brightness of a texture used in OpenGL rendering, such as making it bright red or white. This is a 2D rendering environment, where every sprite is mapped as a texture to an OpenGL polygon.
I know little to nothing about manipulating image data, and my engine works with a texture cache, so altering the whole surface would affect everything that uses the texture.
I can simulate the effect by overlaying a "mask", allowing me to give the sprite solid colors, but that costs extra memory.
Is there any other solution to this?

If your requirements afford it, you can always write a very simple GLSL fragment shader that does this. It's literally a one-liner.
Something like:
uniform sampler2D tex;
void main()
{
    gl_FragColor = texture2D(tex, gl_TexCoord[0].st) + gl_Color;
}
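For completeness, a minimal sketch of the client-side setup, assuming the shader above has been compiled and linked into a hypothetical program object prog:
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "tex"), 0);  /* sampler on texture unit 0 */
glColor4f(0.5f, 0.0f, 0.0f, 0.0f);  /* with the additive shader, this pushes the sprite toward red */
/* ... draw the textured sprite quad as usual ... */
glUseProgram(0);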

Perhaps GL_ADD instead of GL_MODULATE?

Use GL_MODULATE to multiply the texture color by the current color.
See the texture tutorial on this page.
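In fixed-function terms, a minimal sketch (assuming a bound 2D texture; note that GL_ADD requires OpenGL 1.3 or the ARB_texture_env_add extension):
/* GL_MODULATE multiplies each texel by the current color (tint/darken) */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor4f(1.0f, 0.5f, 0.5f, 1.0f);  /* tint toward red */

/* GL_ADD adds the current color to each texel (brighten) */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD);
glColor4f(0.3f, 0.3f, 0.3f, 0.0f);  /* uniform brightness boost */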

Related

Write to texture GLSL

I want to be able to (in a fragment shader) add one texture to another. Right now I have projective texturing and want to expand on that.
Here is what I have so far:
I'm also drawing the view frustum along which the blue/gray test image is projected onto the geometry, which is in constant rotation.
My vertex shader:
ProjTexCoord = ProjectorMatrix * ModelTransform * raw_pos;
My fragment shader:
vec4 diffuse = texture(texture1, vs_st);
vec4 projTexColor = textureProj(texture2, ProjTexCoord);
vec4 shaded = diffuse; // max(intensity * diffuse, ambient); -- no shadows for now
if (ProjTexCoord[0] > 0.0 ||
    ProjTexCoord[1] > 0.0 ||
    ProjTexCoord[0] < ProjTexCoord[2] ||
    ProjTexCoord[1] < ProjTexCoord[2]) {
    diffuse = shaded;
} else if (dot(n, projector_aim) < 0.0) {
    diffuse = projTexColor;
} else {
    diffuse = shaded;
}
What I want to achieve:
When, for example, the user presses a button, I want the blue/gray texture to be written to the gray texture on the sphere and rotate with it. Imagine it as sort of "taking a picture", or painting on top of the sphere, so that the blue/gray texture spins with the sphere after the button is pressed.
As the fragment shader operates on each pixel, it should be possible to copy pixel by pixel from one texture to the other, but I have no clue how; I might be googling for the wrong things.
How can I achieve this technically? Which method is most versatile? Suggestions are very much appreciated; please let me know if more code is necessary.
Just to be clear, you'd like to bake decals into your sphere's grey texture.
The trouble with writing to the grey texture while drawing another object is that it's not one-to-one. You may be writing twice or more to the same texel, or a single fragment may need to write to many texels in your grey texture. It may sound attractive since you already have the coordinates of everything in one place, but I wouldn't do this.
I'd start by creating a texture containing the object-space position of each texel in your grey texture. This is key: when you click, you can render to your grey texture (using an FBO) and know where each texel is in your current view or in your projective texture's view. There may be edge cases where the same bit of texture appears on multiple triangles. You can create this position texture by rendering your sphere into the grey texture's layout, using the texture coordinates as the vertex positions. You will probably need a floating-point texture for this.
So when you click, you render a full-screen quad to your grey texture with alpha blending enabled. Using the object-space positions, each fragment computes its image-space position within the blue texture's projection. Discard the fragments that fall outside the texture and sample/blend in those that fall inside; a rough sketch follows.
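A hedged sketch of that bake pass; the uniform names (positionTex, decalTex, projectorMatrix) are hypothetical, and the projector matrix is assumed to be frozen at the moment of the click:
/* Fragment shader for the bake pass, run over a full-screen quad
   rendered into an FBO whose color attachment is the grey texture. */
const char *bakeFrag =
    "uniform sampler2D positionTex;  // object-space position per texel\n"
    "uniform sampler2D decalTex;     // the blue/gray projected image\n"
    "uniform mat4 projectorMatrix;   // projector view-projection at click time\n"
    "void main()\n"
    "{\n"
    "    vec4 objPos = texture2D(positionTex, gl_TexCoord[0].st);\n"
    "    vec4 p = projectorMatrix * vec4(objPos.xyz, 1.0);\n"
    "    vec2 uv = (p.xy / p.w) * 0.5 + 0.5;\n"
    "    if (p.w <= 0.0 || any(lessThan(uv, vec2(0.0))) ||\n"
    "        any(greaterThan(uv, vec2(1.0)))) discard;\n"
    "    gl_FragColor = texture2D(decalTex, uv);  // blended into the grey texture\n"
    "}\n";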
I think you are overcomplicating things.
Writes to textures inside classic shaders (i.e. not compute shaders) are only implemented on the latest hardware and the very latest OpenGL versions and extensions. It can be terribly slow if used wrong; it is easy to introduce pipeline stalls and CPU-GPU sync points. The pixel shader could become a terribly slow, unmaintainable mess of branches and texture fetches. And all of this mess would be executed for every single pixel, every single frame.
Solution: KISS. Just update your texture on the CPU side.
- Write to the texture, replacing parts of it with the desired content, as in the sketch below.
- The update only needs to be done once, and only when you need it. The data persists until you rewrite it (not even once per frame, but only once per change request).
- The pixel shader stays dead simple: no branching, one texture fetch.
- To find the target pixels, implement ray picking (you will need it anyway for any non-trivial interactive 3D graphics program).
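A minimal sketch of that CPU-side update; tex, x, y, width, height, and pixels are hypothetical names for the grey texture, the target rectangle, and the new image data:
/* Replace a sub-rectangle of the existing texture with new data.
   Done once per change request, not once per frame. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0,   /* target, mip level */
                x, y,               /* offset into the texture */
                width, height,      /* size of the updated region */
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);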
P.S. "Everything should be made as simple as possible, but not simpler." Albert Einstein.

GLSL - how to force rendering a fragment?

Is there any way to force rendering a particular fragment? As far as I know, fragment shaders are called only for pixels within rasterized triangles. What I need to do is draw a mark (say, a single red pixel) at a constant position on the viewport. I mean something like this:
void main(void) {
    if (gl_FragCoord.xy == vec2(300.5, 300.5)) {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    } else {
        gl_FragColor = getColorFromSampler();
    }
}
while there's no quad or anything else behind the fragment at (300.5, 300.5). I don't want to affect performance (no fake background and stuff). How should I proceed in such a situation?
Is there anything speaking against just rendering a single point on top of the other stuff?
So just render a single GL_POINTS primitive at the given pixel. Either use an appropriate orthographic projection to specify the vertex position directly in window space, or compute the clip-space position of the pixel and use an identity vertex transformation. Then all you need is a simple pass-through fragment shader writing your color of choice.
While you say you want "nothing behind the fragment", I still think that single fragment rendered under the red mark doesn't hurt anyone, at least not more (rather less) than your branch inside the fragment shader just for a single pixel (or any other more elaborate technique using the stencil buffer). If you have any other more strict reason why you cannot render anything else at that position except for the mark, you might want to clarify your problem a bit more.
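For illustration, a minimal fixed-function sketch of the point approach, assuming a viewport of viewW by viewH pixels (hypothetical names):
/* Map vertex coordinates directly to window pixels, then draw one point. */
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, viewW, 0.0, viewH, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

glColor3f(1.0f, 0.0f, 0.0f);   /* the red mark */
glBegin(GL_POINTS);
glVertex2f(300.5f, 300.5f);    /* pixel centers lie at half-integer coordinates */
glEnd();

glPopMatrix();                 /* restore modelview */
glMatrixMode(GL_PROJECTION);
glPopMatrix();                 /* restore projection */
glMatrixMode(GL_MODELVIEW);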

Use shader on texture instead of screen

I've written a simple GL fragment shader which performs an RGB gamma adjustment on an image:
uniform sampler2D tex;
uniform vec3 gamma;
void main()
{
    vec3 texel = texture2D(tex, gl_TexCoord[0].st).rgb;
    texel = pow(texel, gamma);
    gl_FragColor.rgb = texel;
}
The texture covers most of the screen, and it occurred to me that the shader applies the adjustment per output pixel on the screen instead of per input pixel of the texture. This doesn't change the appearance, but the texture has far fewer pixels than the screen area it covers.
For efficiency, how can I make the shader process the texture pixels instead of the screen pixels? If it helps, I am changing/reloading this texture's data on every frame anyway, so I don't mind if the texture gets permanently altered.
and it's occurred to me that this is applying the adjustment per output pixel on the screen
Almost. Fragment shaders are executed per output fragment (hence the name). A fragment is the smallest unit of rasterization, before it's written into a pixel. Every pixel that's covered by a piece of visible rendered geometry is turned into one or more fragments (yes, there may even be more fragments than covered pixels, for example when drawing to an antialiased framebuffer).
For efficiency,
Modern GPUs won't even "notice" the slightly reduced load. This is a kind of micro-optimization that's on the brink of non-measurability. My advice: don't worry about it.
how can I make the shader process the texture pixels instead of the screen pixels?
You could preprocess the texture by first rendering it through a texture-sized, non-antialiased framebuffer object into an intermediate texture, as sketched below. However, if your change is nonlinear, and a gamma adjustment is exactly that, then you should not do this: you want to process images in a linear color space and apply nonlinear transformations only as late as possible.
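For reference, a minimal sketch of such a preprocessing pass; srcTex, dstTex, texW, texH, and gammaProg are hypothetical names, and both textures are assumed to have the same size:
/* Render the source texture through the gamma shader into a
   texture-sized FBO whose color attachment is the destination texture. */
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, dstTex, 0);

glViewport(0, 0, texW, texH);   /* exactly one fragment per texel */
glUseProgram(gammaProg);        /* the shader above */
glBindTexture(GL_TEXTURE_2D, srcTex);
/* ... draw a full-screen quad with texture coordinates 0..1 ... */

glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* back to the default framebuffer */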

how to get gl_BackColor using webGL glsl?

In some other versions of GLSL, gl_BackColor seems to provide access to the color behind the current rendering fragment. This is useful for some custom alpha blending. But GLSL for WebGL does not seem to support it. On the other hand, reading from gl_FragColor before assigning any value to it seems to return the correct background color, but that only works on my Ubuntu machine; on my MacBook Pro it fails and returns only some useless random color.
So my question is: is there any direct way to gain access to the color behind the current rendering fragment? If not, how can I do it?
In some other versions of GLSL, gl_BackColor seems to provide access to the color behind the current rendering fragment.
No, this has never been the case. gl_BackColor was the back-face color, for doing two-sided lighting. And it was never accessible from the fragment shader; it was a vertex shader variable.
For two-sided lighting, you wrote to both gl_FrontColor and gl_BackColor in the vertex shader. The fragment shader's gl_Color variable is filled in with whichever side's color is facing the camera. So if the back face of the triangle is facing forward, it gets the interpolated gl_BackColor.
What you are asking for has never been available in GLSL.
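For illustration only, the legacy two-sided lighting pattern looked roughly like this (desktop GLSL compatibility profile, not available in WebGL; selecting the back color also required glEnable(GL_VERTEX_PROGRAM_TWO_SIDE)):
const char *twoSidedVert =
    "void main()\n"
    "{\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "    gl_FrontColor = vec4(1.0, 0.0, 0.0, 1.0);  // front faces: red\n"
    "    gl_BackColor  = vec4(0.0, 0.0, 1.0, 1.0);  // back faces: blue\n"
    "}\n";
/* The fragment shader then simply reads gl_Color, which holds
   whichever side's color faces the camera. */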
There is no direct way, as Nicol Bolas writes.
However, you can use an indirect way with a render-to-texture approach:
1. Render the opaque objects (if any) to an offscreen texture instead of the screen.
2. Render the offscreen texture to the screen.
3. Render the transparent "custom blending" object to the screen using a shader that does the custom blending (since you are doing the blending manually, GL's blend flag should not be enabled). Add the offscreen texture as a uniform to the fragment shader, which lets you sample the background color and compute your custom blend; a sketch follows below.
If you need to render multiple transparent objects, you can use two offscreen textures, ping-pong between them, and finally render the result to the screen when all objects have been rendered.
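A sketch of the custom-blend fragment shader for step 3; backgroundTex and resolution are hypothetical uniform names, and in WebGL the same GLSL ES source would be passed to gl.shaderSource:
const char *blendFrag =
    "precision mediump float;\n"
    "uniform sampler2D backgroundTex;  // the offscreen color buffer\n"
    "uniform vec2 resolution;          // viewport size in pixels\n"
    "void main()\n"
    "{\n"
    "    vec4 back = texture2D(backgroundTex, gl_FragCoord.xy / resolution);\n"
    "    vec4 front = vec4(1.0, 0.0, 0.0, 0.5);  // this object's color\n"
    "    // any custom blend goes here; an ordinary 'over' is shown\n"
    "    gl_FragColor = vec4(mix(back.rgb, front.rgb, front.a), 1.0);\n"
    "}\n";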

OpenGL - Using texture's alpha channel AND a "global" opacity level

I'm trying to get a fairly simple effect: I'd like my sprites to be able to have their alpha channels used by GL (that is, translucent across parts of the image, but not all of it), as well as an "opacity" level that affects the entire sprite.
I've got the latter part; that was a simple matter of using GL_MODULATE and passing glColor4d(opacity, opacity, opacity, opacity). Works like a dream.
But the problem is in the first part: partially translucent images. I had thought that I could just fling out a glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and enable blending, but unfortunately that doesn't do it. What it seems to do is "whiten" the color of the image in question rather than make it translucent. Any other sprites passing under it behave as if it were a solid block of color and get directly cut off.
For reference, I've disabled lighting, the z-buffer, color material, and alpha test. I set the shade model to flat, just in case. Other than that, I'm using default ortho settings. I'm using glTexImage2D for the texture in question, and I've made sure the formats and GL_RGBA are all set correctly.
How can I get GL to consider the texture's alpha channel during blending?
The simplest and fastest solution is to use a fragment shader:
uniform float alpha;
uniform sampler2D texture;
void main()
{
    gl_FragColor = texture2D(texture, gl_TexCoord[0].st);
    gl_FragColor.a *= alpha;
}
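Setting the per-sprite opacity then becomes, assuming the shader is linked into a hypothetical program object prog:
glUseProgram(prog);
glUniform1f(glGetUniformLocation(prog, "alpha"), opacity);
/* ... draw the sprite's quad as usual ... */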
GL_MODULATE is the way to tell GL to use the texture alpha for the final alpha of the fragment (it's also the default).
Your blending is also correct as to how to use that generated alpha in the blending stage.
So the problem lies elsewhere. Your description sounds like you did not in fact disable the Z-test, and that you do not render your sprites back to front. Alpha blending in GL will only do what you want if you draw your sprites back to front; otherwise, the sprites get blended in the wrong order, and this does not produce the correct output.
It would be easier to verify this with a picture of what you observe, though.
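If depth testing and draw order are indeed the culprits, the fix is roughly this (a sketch, not specific to the asker's engine):
/* State for translucent sprites */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_DEPTH_TEST);      /* or keep the test but mask depth writes: */
/* glDepthMask(GL_FALSE); */

/* ... then draw the sprites sorted back to front ... */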