I can load a PNG image no problem using SDL_image and also display it perfectly. What I would like to do, though, is gradually fade the image in from fully transparent to fully opaque. I have seen some tutorials mentioning SDL_SetAlpha, but that is for use on an SDL_Surface, whereas SDL_image loads the image as an SDL_Texture for hardware acceleration.
Can anyone help out with how this might be done while maintaining good performance?
So here is what I found out. To render a texture with alpha you simply need to call SDL_SetTextureAlphaMod, passing in a pointer to the texture and an integer in the range 0 - 255.
The function itself is documented here
NB: Alpha modulation is not always supported by the renderer; the function returns -1 if this is the case (and 0 on success). The function documentation explains how to detect whether your renderer supports alpha modulation.
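For example, a minimal check might look like this (assuming texture was created earlier, e.g. with SDL_CreateTextureFromSurface):

    // Returns 0 on success, -1 if alpha modulation is unsupported
    if (SDL_SetTextureAlphaMod(texture, 128) != 0) {
        SDL_Log("Alpha modulation not supported: %s", SDL_GetError());
    }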
The next part of my problem was how to perform a smooth fade from SDL_ALPHA_TRANSPARENT (0) to SDL_ALPHA_OPAQUE (255). I did this in my variable update function (the game uses fixed and variable updates):
#define FADE_SPEED 0.07f

void SomeClass::UpdateVariable(float elapsedTime)
{
    // Check there is a texture
    if (texture) {
        // Set the alpha of the texture
        SDL_SetTextureAlphaMod(texture, alpha);
    }

    // Update the alpha value
    if (alpha < SDL_ALPHA_OPAQUE) {
        alphaCalc += FADE_SPEED * elapsedTime;

        // Clamp before the narrowing conversion: converting a float
        // above 255 to a Uint8 is undefined behaviour
        if (alphaCalc >= SDL_ALPHA_OPAQUE) {
            alphaCalc = (float)SDL_ALPHA_OPAQUE;
        }
        alpha = (Uint8)alphaCalc;
    }
}
Note the use of two alpha variables. alpha is the actual value passed to the function controlling the texture alpha, and is a Uint8. alphaCalc is a float used to store the gradually increasing value from 0 to 255. Every call of the update function increases it by the scalar FADE_SPEED multiplied by the elapsed frame time, and it is then assigned to the actual alpha value. This is done to maintain a smooth fade, as integer rounding prevents a smooth fade when using the alpha value alone.
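To see why the float accumulator matters, here is a small illustrative sketch (the step value is hypothetical, chosen to be below 1):

    float step = 0.4f;        // hypothetical per-frame alpha increase, below 1
    Uint8 alphaOnly = 0;
    alphaOnly += (Uint8)step; // (Uint8)0.4f is 0, so a Uint8-only fade never progresses
    float acc = 0.0f;
    acc += step;              // the float keeps the fraction; after three frames it holds 1.2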
Related
I'm loading a texture using OpenGL like this
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_RGBA,
texture.width,
texture.height,
0,
GL_RGBA,
GL_UNSIGNED_BYTE,
texture.pixels.data());
The issue is that the color of the image looks different from the one I see when I open the file in the system image viewer.
In the screenshot you can see that the yellow on the face displayed in the system image viewer has the color #FEDE57, but the one displayed in the OpenGL window is #FEE262.
Is there any flag or format I could use to match the same color calibration?
Displaying this same image as a Vulkan texture looks fine, so I can rule out an issue in how I load the image data.
In the end it seems like the framebuffer in OpenGL doesn't get color corrected, so you have to tell the OS to do it for you:
#include <Cocoa/Cocoa.h>
#include <SDL.h>
#include <SDL_syswm.h>

void prepareNativeWindow(SDL_Window *sdlWindow)
{
    SDL_SysWMinfo wmi;
    SDL_VERSION(&wmi.version);
    SDL_GetWindowWMInfo(sdlWindow, &wmi);

    // Tell macOS to treat the window's output as sRGB
    NSWindow *win = wmi.info.cocoa.window;
    [win setColorSpace:[NSColorSpace sRGBColorSpace]];
}
I found this solution here https://github.com/google/filament/blob/main/libs/filamentapp/src/NativeWindowHelperCocoa.mm
@Tokenyet and @t.niese are pretty much correct.
You need to raise your final colour's RGB values to a power of approximately 1.0/2.2. Something along the lines of this:
fragColor.rgb = pow(fragColor.rgb, vec3(1.0 / gamma)); // gamma: float = 2.2
Note: this should be the final/last statement in the fragment shader. Do all your lighting and colour calculations before it, or else the result will look wrong because you will be mixing linear and non-linear lighting calculations.
The reason you need to do gamma correction is that the human eye does not perceive brightness the way the display emits it. Doubling the physical light intensity does not look twice as bright to the eye; the relationship between the two is a power law rather than a linear one. The exponent relating the two spaces is roughly 2.2 (or 1.0/2.2 if you want to go the inverse way, which is what you are looking for).
For more info: Look at this great tutorial on gamma correction!
Note 2: This is an approximation. Every computer, program, and API has its own gamma correction method. Your system image viewer may use a different gamma correction method (or none at all) compared to OpenGL.
Note 3: If this does not work, there are also manual ways to adjust the colour in the fragment shader, if you know them.
#FEDE57 = RGB(254, 222, 87)
which converted into OpenGL colour coordinates is,
(254, 222, 87) / 255 = vec3(0.9961, 0.8706, 0.3412)
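As a quick sanity check of those numbers, here is a small standalone sketch (the 2.2 exponent is the approximation discussed above):

    #include <cmath>
    #include <cstdio>

    int main() {
        // #FEDE57 -> normalized floats
        float r = 254 / 255.0f, g = 222 / 255.0f, b = 87 / 255.0f;
        std::printf("vec3(%.4f, %.4f, %.4f)\n", r, g, b); // vec3(0.9961, 0.8706, 0.3412)
        // Linearizing one channel with the approximate 2.2 display gamma:
        std::printf("%.4f\n", std::pow(g, 2.2f));         // ~0.7372
        return 0;
    }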
Both images and displays have a gamma value.
If GL_FRAMEBUFFER_SRGB is not enabled then:
the system assumes that the color written by the fragment shader is in whatever colorspace the image it is being written to is. Therefore, no colorspace correction is performed.
( khronos: Framebuffer - Colorspace )
So in that case you need to figure out the gamma value of the image you read in and the gamma of the output medium, and do the corresponding conversion between the two.
Getting the gamma of the output medium, however, is not always easy.
Therefore it is preferred to enable GL_FRAMEBUFFER_SRGB.
If GL_FRAMEBUFFER_SRGB is enabled however, then if the destination image is in the sRGB colorspace […], then it will assume the shader's output is in the linear RGB colorspace. It will therefore convert the output from linear RGB to sRGB.
( khronos: Framebuffer - Colorspace )
So in that case you only need to ensure that the colors you set in the fragment shader don't have gamma correction applied but are linear.
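With SDL, for example, the setup might look like this (a minimal sketch; SDL_GL_FRAMEBUFFER_SRGB_CAPABLE asks for an sRGB-capable default framebuffer):

    // Before creating the window and GL context:
    SDL_GL_SetAttribute(SDL_GL_FRAMEBUFFER_SRGB_CAPABLE, 1);

    // Once the context is current: linear shader output is converted to sRGB on write
    glEnable(GL_FRAMEBUFFER_SRGB);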
So what you normally do is get the gamma information of the image, using whatever facility the image library you use provides.
If the gamma of the image you read is gamma, you can calculate the value to invert it with inverseGamma = 1. / gamma, and then apply pixelColor.channel = std::pow(pixelColor.channel, inverseGamma) to each color channel of each pixel to make the color space linear.
You then use these linear color space values as texture data.
You could also use something like GL_SRGB8 for the texture, but then you would need to convert the pixel values you read from the image to the sRGB colorspace, which roughly means first linearizing them and then applying the sRGB (approximately gamma 2.2) encoding.
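A minimal sketch of that linearization step, assuming tightly packed 8-bit RGBA pixels and that your image library reported the image's gamma (the function name and layout here are illustrative):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    void linearizePixels(std::vector<uint8_t>& pixels, float gamma)
    {
        const float inverseGamma = 1.0f / gamma;
        for (size_t i = 0; i < pixels.size(); i += 4) {
            for (size_t c = 0; c < 3; ++c) { // R, G, B; leave alpha untouched
                float v = pixels[i + c] / 255.0f;
                pixels[i + c] = (uint8_t)std::lround(std::pow(v, inverseGamma) * 255.0f);
            }
        }
    }

Note that storing the linearized values back into 8 bits costs precision in the dark range; a 16-bit or floating-point texture format avoids that.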
While working on a particle engine in SDL, I stumbled over the following problem:
After implementing a frame interpolation technique as described here (Step 6: Optimization; I'm basically drawing the last frame onto the current frame with an alpha of 254 so that it fades out), I noticed that some pixels which were supposed to gradually fade from white to black ended up staying gray, with RGBA values of 112 to be precise. After doing some math I found what's causing this: every frame I multiply the RGBA values of the last frame by 254/255, which works fine down to, but not including, 112.
Then something funny happens: since round(112 / 255 * 254) = 112, the values stay the same from that point on (I'm rounding because the end value is stored as an 8-bit color channel of a pixel), meaning that my fading technique using alpha no longer works. I would like these gray pixels stuck at 112 to fade out further. The question is, how would I achieve such a thing within SDL?
I know I could just lower the value of 254 so that this minimum decreases, but I would like my particles to fade out really gently (I'm making a 60 fps game). I've also considered writing an OpenGL graphics engine myself (so that I could use floating-point textures, which do have the precision I need here), but I'm simply not good enough at OpenGL and lack the time to do such a thing.
The reason I need to use a texture for storage is that I would like particles which emit trails (as if you stopped clearing your frame buffers and moved an object, but instead of the screen becoming a mess, the older pixels would fade out).
currentFrame->renderTo();        // Draw to the current frame render target
graphics.clear(Color(0,0,0,0));  // Clear the screen using the given color
graphics.scale = 1.0f;           // Manipulate pixel size (for visible pixel art)

// Set blend mode to opaque
SDL_SetTextureBlendMode(lastFrame->texture, SDL_BLENDMODE_NONE);

// Draw the last frame
graphics.drawTexture(lastFrame, graphics.camera.position, Color(255,255,255,254));

// Revert blend mode to alpha blending
SDL_SetTextureBlendMode(lastFrame->texture, SDL_BLENDMODE_BLEND);
I would use a global float variable called 'decay' to store the non-rounded alpha value.

float decay = 255.0f;
[...]
decay = decay / 255.0f * 254.0f;
Uint8 alpha = (Uint8)round(decay);
If you need one 'decay' value per pixel then you could declare a struct for a particle:
typedef struct {
int active;
float x,y;
float dx,dy; // directions for x and y
float decay;
Uint8 r,g,b,a; // colour
} s_dots;
s_dots dots[MAX_NUMBER_OF_DOTS];
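A hypothetical per-frame update using that struct (field names as above; the 254/255 factor mirrors the fade discussed earlier, and round comes from <math.h>):

    void updateDots(s_dots *dots, int count)
    {
        for (int i = 0; i < count; ++i) {
            if (!dots[i].active) continue;
            dots[i].x += dots[i].dx;
            dots[i].y += dots[i].dy;
            // Keep the fractional alpha in the float; only round for display
            dots[i].decay = dots[i].decay / 255.0f * 254.0f;
            dots[i].a = (Uint8)round(dots[i].decay);
            if (dots[i].a == 0) dots[i].active = 0; // fully faded out
        }
    }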
I want to draw some textures into an FBO, some with alpha 0 and others with alpha 1, so I can use the alpha channel to store info for my shader.
I'm using this code for each texture I want rendered into the FBO with alpha 0:
batch.begin();
batch.setColor(new Color(1,1,1,0));
batch.draw(texture,x,y);
batch.setColor(new Color(1,1,1,1));
batch.end();
The problem is that when I try to read the RGB color in my shader I only get black. It is as if setting a tint with alpha 0 zeroes the other channels too.
What am I doing wrong?
One easy way to do it:

Sprite s = new Sprite(tex);
float alpha = 0.5f; // or whatever you want
s.draw(batch, alpha); // renders the sprite with the given alpha without affecting other images
Here you set the alpha value for that particular sprite only, whereas if you use
batch.setColor(new Color(1,1,1,0));
you set the value for the entire batch cycle and have to change it back again, which can be tedious if you have different alpha requirements for different sprites.
glAlphaFunc(GL_GEQUAL, 0.5) displays the image only where alpha >= 0.5.
Can OpenGL display the accumulation of alpha?
Example:
Two images: some parts of each are not displayed on their own, because there alpha < 0.5.
Now some of those parts overlap, and their alphas sum to 0.6; how do I display this overlapping part?
I'm trying to make a metaball example using OpenGL. If you have any idea, please give me a hint.
Thank you so much.
I would render the images to a separate buffer (with alpha test off, adding their alpha values), then render that buffer onto the screen with alpha test on.
First, create an empty buffer and set its alpha to 0 for every pixel.
Then render all of your objects into said buffer, using your blend function on the colors while adding the alpha.
Then re-render the buffer on screen with alpha test turned on.
And the answer to "Can OpenGL display the accumulation of alpha?" is yes. You can render alpha as grayscale, for example.
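A sketch of that two-pass idea in legacy (compatibility-profile) OpenGL, assuming an FBO named fbo with a color texture bufferTex has been created elsewhere:

    // Pass 1: accumulate into the offscreen buffer
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_ALPHA_TEST);
    glEnable(GL_BLEND);
    // Blend color normally, but ADD the alpha channel
    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
    // ... draw the metaball sprites here ...

    // Pass 2: draw the buffer to the screen with the alpha test
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GEQUAL, 0.5f);
    // ... draw a fullscreen quad textured with bufferTex here ...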
Worked for me with glBlendFunc(GL_SRC_ALPHA, GL_ONE);
This is an HLSL question, although I'm using XNA if you want to reference that framework in your answer.
In XNA 4.0 we no longer have access to DX9's AlphaTest functionality.
I want to:
1. Render a texture to the backbuffer, drawing only the opaque pixels of the texture.
2. Render a second texture, whose texels are drawn only in places where no opaque pixels from step 1 were drawn.
How can I accomplish this? If I need to use clip() in HLSL, how do I check, from within my HLSL code, the stencil buffer that was drawn to in step 1?
So far I have done the following:
_sparkStencil = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.GreaterEqual,
    ReferenceStencil = 254,
    DepthBufferEnable = true
};

DepthStencilState old = gd.DepthStencilState;
gd.DepthStencilState = _sparkStencil;

// Only opaque texels should be drawn.
DrawTexture1();

gd.DepthStencilState = old;

// Texels that were rendered from texture1 should
// prevent texels in texture 2 from appearing.
DrawTexture2();
Sounds like you want to draw only pixels that are within epsilon of full alpha (1.0, 255) the first time, while not affecting pixels that are within epsilon of full alpha the second time.
I'm not a graphics expert and I'm operating on too little sleep, but you should be able to get there from here through an effect script file.
To write to the stencil buffer you must create a DepthStencilState that writes to the buffer, then draw any geometry that is to be drawn to the stencil buffer, then switch to a different DepthStencilState that uses the relevant CompareFunction.
If there is some limit on which alpha values are to be drawn to the stencil buffer, then use a shader in the first pass that calls the clip() intrinsic on alpha - val, where val is a number in (0, 1) that sets the alpha threshold; clip() discards the fragment whenever its argument is negative, i.e. whenever alpha < val.
I have written a more detailed answer here:
Stencil testing in XNA 4