While working on a particle engine in SDL, I stumbled over the following problem:
After implementing a frame interpolation technique as described here (Step 6: Optimization; I'm basically drawing the last frame onto the current frame with an alpha of 254 so that it fades out), I noticed that some pixels which were supposed to gradually fade from white to black ended up staying gray, with channel values of exactly 112. After doing some math I found what's causing this: every frame I multiply the color values of the last frame by 254/255, which works fine down to, but excluding, 112.
Then something funny happens: round(112/255*254) = 112, so the value stays the same (I'm rounding because the result has to be stored as an 8-bit color channel of a pixel), meaning that my fading technique using alpha doesn't work anymore. The problem is that I would like these gray pixels stuck at 112 to fade out further. The question is, how would I achieve such a thing within SDL?
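To illustrate the fixed point (a standalone sketch, not the engine's actual code): rounding x - x/255 maps x back to itself whenever x/255 < 0.5, so with exact round-to-nearest arithmetic every value up to 127 is stuck; SDL's internal blend rounding evidently stalls slightly earlier, at 112, but the mechanism is the same.

```cpp
#include <cmath>
#include <cstdint>

// One per-frame fade step: multiply by 254/255 and round back to 8 bits.
inline uint8_t fadeStep(uint8_t a) {
    return static_cast<uint8_t>(std::lround(a * 254.0 / 255.0));
}

// Repeatedly apply fadeStep until the value stops changing, i.e. until
// the rounded multiply reaches a fixed point and the fade stalls.
inline uint8_t fadeUntilStable(uint8_t a) {
    for (;;) {
        uint8_t next = fadeStep(a);
        if (next == a) return a; // fixed point reached: fading stalls here
        a = next;
    }
}
```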
I know I could just lower the 254 so that this sort of minimum decreases, but I would like my particles to fade out really gently (I'm making a 60 fps game). I've also considered creating an OpenGL graphics engine myself (so that I could use floating-point textures, which do have the precision I need in this case), but I'm simply not good enough at OpenGL and lack the time to do such a thing.
The reason I need to use a texture for storage is that I would like to have particles which emit trails (as if you stopped clearing your frame buffers and moved an object, but instead of the screen becoming a mess, the older pixels would fade out).
currentFrame->renderTo();        // Draw to the current frame render target
graphics.clear(Color(0,0,0,0));  // Clear the screen using the given color
graphics.scale = 1.0f;           // Manipulate pixel size (for visible pixel art)
SDL_SetTextureBlendMode(lastFrame->texture, SDL_BLENDMODE_NONE);  // Set blend mode to opaque
graphics.drawTexture(lastFrame, graphics.camera.position, Color(255,255,255,254)); // Draw the last frame
SDL_SetTextureBlendMode(lastFrame->texture, SDL_BLENDMODE_BLEND); // Revert blend mode to alpha blending
I would use a global float variable called 'decay' to store the non-rounded alpha value.
float decay = 255.0;
[...]
decay = decay / 255.0 * 254.0;
Uint8 alpha = round(decay);
If you need one 'decay' value per pixel then you could declare a struct for a particle:
typedef struct {
    int active;
    float x, y;
    float dx, dy;   // directions for x and y
    float decay;
    Uint8 r, g, b, a; // colour
} s_dots;

s_dots dots[MAX_NUMBER_OF_DOTS];
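To make the idea concrete, here is a hypothetical, self-contained sketch of the per-particle update (the struct is repeated with uint8_t standing in for SDL's Uint8 so it compiles on its own; names are illustrative, not from a real engine):

```cpp
#include <cmath>
#include <cstdint>

// Per-particle state: keep the un-rounded alpha in a float ('decay')
// and only round when producing the 8-bit value handed to SDL.
typedef struct {
    int active;
    float x, y;
    float dx, dy;       // directions for x and y
    float decay;        // un-rounded alpha, 0.0 .. 255.0
    uint8_t r, g, b, a; // colour
} s_dots;

inline void updateDot(s_dots* d) {
    if (!d->active) return;
    d->decay = d->decay / 255.0f * 254.0f; // full float precision, never stalls
    d->a = static_cast<uint8_t>(std::lround(d->decay));
    if (d->a == 0) d->active = 0;          // particle fully faded out
}
```

Because the float keeps accumulating the fractional loss, the rounded alpha eventually reaches 0 instead of sticking at a gray fixed point.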
I am looking to reproduce the glow effect from this tutorial. If I understand correctly, we convert the first image to an "alpha texture" (black and white), and we blur the (RGB * A) texture.
How is it possible to create this alpha texture, so that some colors map to white and the others to black? I found this: How to render a texture with alpha? but I don't really know how to apply those answers.
Thanks
It appears you are misunderstanding what that diagram is showing you. It is actually all one texture: (a) shows the RGB color, (b) shows the alpha channel, and (c) shows what happens when you multiply RGB by A.
Alpha is not actually "black and white"; it is an abstract concept amounting to a range of values between 0.0 and 1.0. For the human brain to make sense of it, it is displayed as black (0.0) through white (1.0). In reality, alpha is whatever you want it to be and is unrelated to color (though it can be used to act on color).
Typically the alpha channel would be generated by a post-process image filter, that looks for areas of the texture with significantly above average luminance. In modern graphics engines HDR is used and any part of the scene with a color too bright to be displayed on a monitor is a candidate for glowing. The intensity of this glow is derived from just how much brighter the lighting at that point is than the monitor can display.
In this case, however, it appears to be human-created. Think of the alpha channel like a mask: some artist looked at the UFO and decided that the areas that appear non-black in figure (b) were supposed to glow, so a non-zero alpha value was assigned (with alpha = 1.0 glowing the brightest).
Incidentally, you should not be blurring the alpha mask. You want to blur the result of RGB * A. If you just blurred the alpha mask, then this would not resemble glowing at all. The idea is to blur the lit parts of the UFO that are supposed to glow and then add that on top of the base UFO color.
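The RGB * A step can be sketched per pixel. A minimal illustration, assuming 8-bit channels and treating alpha as a 0.0..1.0 weight (the struct and function names are just for this example):

```cpp
#include <cmath>
#include <cstdint>

struct Pixel { uint8_t r, g, b, a; };

// Scale each colour channel by the alpha mask; this is the image you
// would then blur and add on top of the base colour to fake the glow.
inline Pixel premultiply(Pixel p) {
    float w = p.a / 255.0f; // alpha as a glow weight
    p.r = static_cast<uint8_t>(std::lround(p.r * w));
    p.g = static_cast<uint8_t>(std::lround(p.g * w));
    p.b = static_cast<uint8_t>(std::lround(p.b * w));
    return p;
}
```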
In a nutshell, when should the color buffer be cleared and when should the depth buffer be cleared? Is it always at the start of the drawing of the current scene/frame? Are there times when you would draw the next frame without clearing these? Are there other times when you would clear these?
Ouff... even though it's a simple question it's a bit hard to explain ^^
They should be cleared before drawing the scene again. Yes, every time, if you want to avoid strange and nearly uncontrollable effects.
You don't want to clear the two buffers right after swapping while you still need their contents, e.g. when you bind the scene to a texture, but once that's done there's no more use for them.
The Color Buffer is, as the name says, a buffer storing the computed color data. For better understanding, imagine you draw on a piece of paper: each point on the paper knows which color was drawn on top of it, and that's basically all of it.
But without the Depth Buffer, your Color Buffer is (except for some special cases like multiple render passes for FX effects) nearly useless. It's like a second piece of paper, but in greyscale: how dark the grey is says how far the last drawn pixel is from the screen (relative to the zNear and zFar clipping planes).
If you instruct OpenGL to draw another primitive, it goes pixel by pixel and checks which depth value the pixel would have. With the default depth test (GL_LESS), if that value is lower (closer) than the value stored in the Depth Buffer for that pixel, it draws into the Color Buffer at that position and updates the Depth Buffer; otherwise it does nothing.
To recap: the Color Buffer stores the picture to be drawn on your screen, and the Depth Buffer decides whether a part of your primitive gets drawn at each position.
So clearing the buffers for a new scene is basically swapping in a fresh sheet of paper to draw on. If you want to mess with the old picture, keep it, but it's in better hands on the monitor (or on the wall^^).
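The per-pixel check described above can be mimicked in plain code. A toy model (not real OpenGL), assuming the conventional GL_LESS depth test where closer fragments win and depth lies in 0..1:

```cpp
#include <cstdint>

// One pixel's worth of framebuffer state: a depth entry and a colour entry.
struct Framebuffer {
    float depth;    // depth buffer entry, 1.0 = far plane
    uint32_t color; // colour buffer entry
};

// Clearing resets the depth to the far plane and the colour to the clear colour.
inline void clearPixel(Framebuffer& fb, uint32_t clearColor) {
    fb.depth = 1.0f;
    fb.color = clearColor;
}

// Depth-tested write: the fragment is drawn only if it is closer than
// what the depth buffer already holds (GL_LESS behaviour).
inline bool writePixel(Framebuffer& fb, float depth, uint32_t color) {
    if (depth < fb.depth) {
        fb.depth = depth;
        fb.color = color;
        return true;  // fragment drawn
    }
    return false;     // fragment discarded
}
```

Without the clear, the stale depth values from the previous frame would keep rejecting fragments of the new one, which is exactly the "strange and nearly uncontrollable" effect mentioned above.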
I'm building a LIDAR simulator in OpenGL. This means that the fragment shader returns the length of the light vector (the distance) in place of one of the color channels, normalized by the distance to the far plane (so it'll be between 0 and 1). In other words, I use red to indicate light intensity and blue to indicate distance; and I set green to 0. Alpha is unused, but I keep it at 1.
Here's my test object, which happens to be a rock:
I then write the pixel data to a file and load it into a point cloud visualizer (one point per pixel) — basically the default. When I do that, it becomes clear that all of my points are in discrete planes each located at a different depth:
I tried plotting the same data in R. It doesn't show up initially with the default histogram because the density of the planes is pretty high. But when I set the breaks to about 60, I get this:
I've tried shrinking the distance between the near and far planes, in case it was a precision issue. First I was doing 1–1000, and now I'm at 1–500. It may have decreased the distance between planes, but I can't tell, because it means the camera has to be closer to the object.
Is there something I'm missing? Does this have to do with the fact that I disabled anti-aliasing? (Anti-aliasing was causing even worse periodic artifacts, but between the camera and the object instead. I disabled line smoothing, polygon smoothing, and multisampling, and that took care of that particular problem.)
Edit
These are the two places the distance calculation is performed:
The vertex shader calculates ec_pos, the position of the vertex relative to the camera.
The fragment shader calculates light_dir0 from ec_pos and the camera position and uses this to compute a distance.
Is it because I'm calculating ec_pos in the vertex shader? How can I calculate ec_pos in the fragment shader instead?
There are several possible issues I can think of.
(1) Your depth precision. The far plane has very little effect on resolution; the near plane is what's important. See Learning to Love your Z-Buffer.
(2) The more probable explanation, based on what you've provided, is the conversion/saving of the pixel data. The shader outputs floating-point values, but these are stored in the framebuffer, which will typically have only 8 bits per channel. What that means is that your floats are mapped to the underlying 8-bit (fixed-width, integer) representation, which possesses only 256 distinct values.
If you want to output pixel data as the true floats they are, you should make a 32-bit floating-point RGBA FBO (with e.g. GL_RGBA32F or something similar). This will store actual floats. Then, when you read your data back from the GPU, you will get the original shader values.
I suppose you could alternatively encode a single float in a vec4 with some multiplication, if you don't have an FBO implementation handy.
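That encoding trick can be sketched as follows. A hypothetical helper pair (CPU-side, to show the arithmetic; the shader version is the same idea), assuming the value lies in [0, 1): each successive 8-bit channel stores the next 8 bits of precision.

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Pack a [0,1) float across four 8-bit channels.
inline std::array<uint8_t, 4> packFloat(float v) {
    std::array<uint8_t, 4> out{};
    for (int i = 0; i < 4; ++i) {
        v *= 256.0f;
        float ipart = std::floor(v);
        out[i] = static_cast<uint8_t>(ipart);
        v -= ipart; // keep the remaining fraction for the next channel
    }
    return out;
}

// Reassemble the float from the four channels.
inline float unpackFloat(const std::array<uint8_t, 4>& c) {
    return c[0] / 256.0f + c[1] / 65536.0f +
           c[2] / 16777216.0f + c[3] / 4294967296.0f;
}
```

Only about 24 bits survive the round trip through a 32-bit float, but that is still far better than the 256 levels of a single channel.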
I can load an image (PNG) no problem using SDL_image and also display it perfectly. What I would like to do, though, is gradually fade the image in from fully transparent to fully opaque. I have seen some tutorials mentioning SDL_SetAlpha, but this is for use on an SDL_Surface, whereas SDL_image loads an SDL_Texture for the hardware acceleration.
Can anyone help out with how this might be done while maintaining good performance?
So here is what I found out. To render a texture with alpha you simply need to make a call to SDL_SetTextureAlphaMod, passing in a pointer to the texture and an integer in the range 0 - 255.
The function itself is documented here
NB. Alpha modulation is not supported by every renderer; the function returns -1 in that case (and 0 on success). The function documentation explains how to detect whether your renderer supports alpha modulation.
The next part of my problem was how to perform smooth fading from SDL_ALPHA_TRANSPARENT (0) to SDL_ALPHA_OPAQUE (255). I did this in my variable update function (the game uses fixed and variable updates):
#define FADE_SPEED 0.07f

void SomeClass::UpdateVariable(float elapsedTime)
{
    // Check there is a texture
    if (texture) {
        // Set the alpha of the texture
        SDL_SetTextureAlphaMod(texture, alpha);
    }

    // Update the alpha value
    if (alpha < SDL_ALPHA_OPAQUE) {
        alphaCalc += FADE_SPEED * elapsedTime;
        // Clamp before the narrowing assignment so the Uint8 can't wrap
        if (alphaCalc > (float)SDL_ALPHA_OPAQUE) {
            alphaCalc = (float)SDL_ALPHA_OPAQUE;
        }
        alpha = (Uint8)alphaCalc;
    }
}
Note the use of two alpha variables. alpha is the actual value passed to the function for controlling the texture alpha, and it is a Uint8. alphaCalc is a float used to store the gradually increasing value from 0 to 255. Every call of the update function increases this value by a scalar FADE_SPEED multiplied by the elapsed frame time; it is then assigned to the actual alpha value. This is done to maintain a smooth fade, as integer rounding prevents a smooth fade using the alpha value alone.
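The two-variable idea can be isolated into a small pure function. A sketch with illustrative names (not the game's actual code), independent of SDL:

```cpp
#include <algorithm>
#include <cstdint>

// Smooth fade state: accumulate in a float, clamp, and only then
// truncate to the 8-bit value an API like SDL_SetTextureAlphaMod expects.
struct FadeState {
    float alphaCalc = 0.0f; // smooth accumulator, 0..255
    uint8_t alpha = 0;      // quantised value handed to the renderer
};

inline void stepFade(FadeState& s, float fadeSpeed, float elapsedTime) {
    s.alphaCalc = std::min(s.alphaCalc + fadeSpeed * elapsedTime, 255.0f);
    s.alpha = static_cast<uint8_t>(s.alphaCalc); // quantise last
}
```

Because the clamp happens on the float before the narrowing cast, the Uint8 can never wrap past 255 back to 0.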
So we have an image, and we want to draw a line that must definitely be visible. How do we draw a line whose color is inverted relative to the surface it is drawn on at each point?
The XOR trick guarantees a different colour, but not necessarily a visually distinct one, not least because it entirely ignores how human vision works. On light greys, for instance, a saturated red is visually quite distinct.
You might want to convert the color to HSV and check the saturation S. If it is low (greyish), draw a red pixel. If the saturation is high, the hue is quite obvious, and a white or black pixel will stand out: use black (V=0) if the original pixel had a high V; use white if it had a low V (a dark saturated color).
You can use the LineIterator method as suggested earlier.
(BTW, the XOR trick has quite bad cases too. 0x7F ^ 0xFF = 0x80. That's bloody hard to see)
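The HSV suggestion above can be sketched like this; the thresholds are illustrative guesses, not tuned values:

```cpp
#include <algorithm>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Pick a marker colour for a pixel from a rough HSV view of it:
// low saturation (greyish) -> red stands out; otherwise black on
// bright pixels and white on dark ones.
inline RGB contrastColor(RGB p) {
    uint8_t mx = std::max({p.r, p.g, p.b});
    uint8_t mn = std::min({p.r, p.g, p.b});
    float v = mx / 255.0f;                              // HSV value
    float s = (mx == 0) ? 0.0f : (mx - mn) / (float)mx; // HSV saturation
    if (s < 0.25f) return {255, 0, 0};                  // greyish: red pops
    return (v > 0.5f) ? RGB{0, 0, 0}                    // bright saturated: black
                      : RGB{255, 255, 255};             // dark saturated: white
}
```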
Use a LineIterator and XOR the colour values of each pixel manually.
This is off the top of my head and I'm not a C++ dev, but it should be possible to draw the line into a separate image and then mimic an invert blend mode. Basically you need the 'negative'/inverted colour behind each pixel, which you get by subtracting the colour below your line from the maximum colour value.
Something like:
uint invert(uint topPixel, uint bottomPixel) {
    // The line's own colour (topPixel) is ignored; the result is simply
    // the complement of the colour underneath it, per channel.
    return 255 - bottomPixel;
}
Not sure if colours here are from 0 to 255 or from 0.0 to 1.0, but hopefully this illustrates the idea.