GLFW Window Hints: Colour Bit Depth

In my game, I am trying to create a GLFW window with no depth buffer, stencil buffer, or alpha buffer, because all I want it to do is render a 2D image to the screen: the result of a previous framebuffer.
So I use the following initialization code:
glfwDefaultWindowHints();
glfwWindowHint(GLFW_DEPTH_BITS, 0);
glfwWindowHint(GLFW_STENCIL_BITS, 0);
glfwWindowHint(GLFW_ALPHA_BITS, 0);
However, when I create my window and call glGetIntegerv(GL_ALPHA_BITS, ...), it returns 8. The depth bits and stencil bits are 0, however.
My question is: when I specify a 'hint' using glfwWindowHint(), is it a recommendation for how the window should be created, or something that must be honored?

Yes, it is a recommendation. But having more alpha bits than you asked for is not a problem, since alpha blending can be enabled and disabled with glEnable(GL_BLEND)/glDisable(GL_BLEND); the extra channel simply goes unused.
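To check what you actually got, query the framebuffer after creating the window and making its context current. A minimal sketch in C, assuming a compatibility profile (the GL_*_BITS queries are deprecated in core profiles):
GLint alphaBits = 0, depthBits = 0, stencilBits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits);   /* may report 8 even though the hint asked for 0 */
glGetIntegerv(GL_DEPTH_BITS, &depthBits);
glGetIntegerv(GL_STENCIL_BITS, &stencilBits);
printf("alpha=%d depth=%d stencil=%d\n", alphaBits, depthBits, stencilBits);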

Related

LibGDX texture blending with OpenGL blending function

In libGDX, I'm trying to create a shaped texture: take a fully-visible rectangle texture and mask it to obtain a shaped texture, as shown here:
Here I test it on a rectangle, but I will want to use it on any shape. I have looked into this tutorial and came up with the idea to first draw the texture, and then the mask with the blending function:
batch.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_ALPHA);
GL20.GL_ZERO - because I really don't want to paint any pixels from the mask.
GL20.GL_SRC_ALPHA - from the original texture I want to paint only those pixels where the mask was visible (= white).
Crucial part of the test code:
batch0.enableBlending();
batch0.begin();
batch0.draw(original, 0, 0); //to see the original
batch0.draw(mask, width1, 0); //and the mask
batch0.draw(original, 0, height1); //base for the result
batch0.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_ALPHA);
batch0.draw(mask, 0, height1); //draw mask on result
batch0.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
batch0.end();
The center of the texture gets selected well, but instead of a transparent color around it, I see black:
Why is the result blank and not transparent?
(Full code - Warning: very messy)
What you're trying to do looks like a pretty clever use of blending. But I believe the exact way you apply it is "broken by design". Let's walk through the steps:
You render your background with red and green squares.
You render an opaque texture on top of your background.
You erase parts of the texture you rendered in step 2 by applying a mask.
The problem is that for the parts you erase in step 3, the previous background does not come back. It can't: the background of the whole texture area was replaced in step 2, and once it's gone there is no way to bring it back.
Now the question is of course how you can fix this. There are two conventional approaches I can think of:
You can combine the texture and mask by rendering them into an off-screen framebuffer object (FBO). You perform steps 1 and 2 as you do now, but render into an FBO with a texture attachment. The texture you rendered into then carries alpha values that reflect your mask, and you can use it to render into your default framebuffer with standard blending.
You can use a stencil buffer. Masking out parts of rendering is a primary application of stencil buffers, and using stencil would definitely be a very good solution for your use case. I won't elaborate on the details of how exactly to apply stencil buffers to your case in this answer. You should be able to find plenty of examples both online and in books, including in other answers on this site, if you search for "OpenGL stencil". For example this recent question deals with doing something similar using a stencil buffer: OpenGL stencil (Clip Entity).
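For illustration, a minimal sketch of the stencil approach in raw OpenGL; drawMaskShape() and drawTexturedQuad() are hypothetical placeholders for your own drawing code:
glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);
// Pass 1: write 1 into the stencil buffer wherever the mask shape covers,
// without touching the color buffer.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawMaskShape();
// Pass 2: draw the texture only where the stencil buffer holds 1.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawTexturedQuad();
glDisable(GL_STENCIL_TEST);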
So those would be the standard solutions. But inspired by the idea in your attempt, I think it's actually possible to get this to work with just blending. The approach that I came up with uses a slightly different sequence and different blend functions. I haven't tried this out, but I think it should work:
You render the background as before.
Render the mask. To prevent it from wiping out the background, disable writing to the color components of the framebuffer, and only write to the alpha component. This leaves the mask in the alpha component of the framebuffer.
Render the texture, using the alpha component from the framebuffer (DST_ALPHA) for blending.
You will need a framebuffer with an alpha component for this to work. Make sure that you request alpha bits for your framebuffer when setting up your context/surface.
The code sequence would look like this:
// Draw background.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE); // write only to the alpha channel
glDisable(GL_BLEND);
// Draw mask (its alpha values land in the framebuffer's alpha channel).
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); // re-enable color writes
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA); // blend by the stored mask alpha
// Draw texture.
A very late answer, but with the current version this is very easy: you simply draw the mask, set the blend function so the destination is multiplied by the incoming (source) color, and draw the original. You'll only see the original image where the mask is.
//create batch with blending
SpriteBatch maskBatch = new SpriteBatch();
maskBatch.enableBlending();
maskBatch.begin();
//draw the mask
maskBatch.draw(mask, 0, 0); //SpriteBatch.draw needs a position
//store original blending and set correct blending
int src = maskBatch.getBlendSrcFunc();
int dst = maskBatch.getBlendDstFunc();
maskBatch.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_COLOR);
//draw original
maskBatch.draw(original, 0, 0);
//reset blending
maskBatch.setBlendFunction(src, dst);
//end batch
maskBatch.end();
If you want more info on the blending options, check How to do blending in LibGDX

OpenGL Stencil Buffer set when not transparent

I was trying to configure my stencil buffer so that, when enabled, it would be set wherever the drawn pixel is not transparent (thus creating a map of pixels that light can collide with). What I've done is:
glClearStencil(0); //clear stencil
glStencilFunc(GL_EQUAL, 0xFF, 0x000000FF); //only where alpha (mask : 0x000000FF) is 0xFF (opaque)
glStencilOp(GL_INCR, GL_KEEP, GL_KEEP); //increment if passes (if it is opaque)
render(); //within this method I sometimes disable the whole thing to draw the floor, for example
Then, I use the following code to test:
/* TURN OFF STENCIL */
glEnable(GL_STENCIL_TEST); //re-enable
glStencilFunc(GL_EQUAL, 0, 1); //if the test is equal to 1
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); //do not change stencil buffer
ImageInfo.drawColorSquare(0, 0, Configurations.SCREEN_WIDTH, Configurations.GAME_HEIGHT, Color.BLUE); //draw blue square
glDisable(GL_STENCIL_TEST); //disable
However, there are two problems:
It doesn't seem to be ignoring transparent pixels, as it should;
If a region overlaps with another, it reverses - for example, it sets to one, then another region is drawn in the same area, and it resets back to 0.
I don't know why that is happening. Probably something wrong with my mask, I guess - I wasn't absolutely sure how many bits OpenGL uses per channel in the color buffer.
Also, GL_INCR should clamp at the maximum value and not wrap around, according to the documentation. Since my stencil buffer size is one bit, it should set to one, try to increment again, fail, and stay at one (instead of resetting).
The Stencil Test is independent of what happens in the color buffer. With glStencilFunc (http://www.opengl.org/sdk/docs/man/xhtml/glStencilFunc.xml) you specify how the test compares the reference value against what is already stored in the Stencil Buffer. With glStencilOp (http://www.opengl.org/sdk/docs/man/xhtml/glStencilOp.xml) you specify how the Stencil Buffer is updated, depending on the outcome of the stencil and depth tests.
A good tutorial that explain the Stencil Test and a very instructive algorithm based upon it can be found here http://ogldev.atspace.co.uk/www/tutorial37/tutorial37.html
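As a concrete illustration of the argument order (which the code in the question gets wrong: glStencilOp takes sfail, dpfail, dppass, so the GL_INCR above sits in the stencil-fail slot), here is a hedged sketch of writing a mask and then drawing against it; render() and drawLighting() stand in for the question's drawing code:
// Build the mask: always pass, increment where fragments actually land.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
render();
// Use the mask: draw only where the stencil value equals 1.
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawLighting();
glDisable(GL_STENCIL_TEST);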
The stencil buffer is usually an 8-bit buffer that reserves a small portion of the memory normally used for the depth buffer and is used for advanced rejection of fragments; things like masking to arbitrary shapes rather than using rectangular scissor boxes. It has nothing to do with your color buffer, and to make sure that fragments that have a specific alpha value do not affect the pixels on screen you would use something called an alpha test.
In core OpenGL 3, the fixed-function alpha test is no longer supported, so you would have to implement it in a fragment shader and then discard if it failed to meet your condition.
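A minimal sketch of such a fragment shader; the uniform and varying names are illustrative. Because discarded fragments never update the stencil (or color) buffer, this gives exactly the "set stencil only where opaque" behaviour the question asks for:
#version 330 core
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D uTexture;
void main() {
    vec4 color = texture(uTexture, vTexCoord);
    if (color.a < 1.0)
        discard; // like the old glAlphaFunc(GL_GEQUAL, 1.0) alpha test
    fragColor = color;
}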

Can I have a default Framebuffer without alpha and depth?

I am looking to save some video card memory by not allocating what I do not use. I am far from running out of memory, but it would feel 'cleaner' to me.
I can't really think of a reason to have an alpha value in the default framebuffer since my window is not going to alpha-blend with my desktop anyway. I was wondering if I could save a few bytes or have more color depth by removing its alpha.
Likewise, I am doing some deferred lighting and all my depth calculations occur in a framebuffer that is not the default one. Then I simply render a quad (two tris) to the default frame buffer with the texture in which I rendered my scene as well as a few GUI elements, none of which requires depth-testing. I call glDisable(GL_DEPTH_TEST) when rendering the default framebuffer, but I wouldn't mind not having a depth buffer at all instead of a depth buffer that I don't use.
Can I do that within OpenGL? Or within SDL, with which I create my OpenGL context?
I try to create my OpenGL context with the following SDL attributes:
sdl.GL_SetAttribute(sdl.GL_DOUBLEBUFFER, 1)
sdl.GL_SetAttribute(sdl.GL_DEPTH_SIZE, 0)
sdl.GL_SetAttribute(sdl.GL_ALPHA_SIZE, 0)
info := sdl.GetVideoInfo()
bpp := int(info.Vfmt.BitsPerPixel)
if screen := sdl.SetVideoMode(640, 480, bpp, sdl.OPENGL); screen == nil {
    panic("Could not open SDL window: " + sdl.GetError())
}
if err := gl.Init(); err != nil {
    panic(err)
}
Unfortunately, SDL's Go binding lacks the sdl_gl_GetAttribute function that would let me check whether my wishes were granted.
As I said, there is no emergency. I am mostly curious.
I can't really think of a reason to have an alpha value in the default framebuffer since my window is not going to alpha-blend with my desktop anyway.
That's good, because the default framebuffer having an alpha channel wouldn't actually do that (on Windows anyway).
The framebuffer alpha is there for use in blending operations. It is sometimes useful to do blending that is in some way based on a destination alpha color. For example, I once used the destination alpha as a "reflectivity" value for a reflective surface, when drawing the reflected objects after having drawn that reflective surface. It was necessary to do it in that order, because the reflective surface had to be drawn in multiple passes.
In any case, the low level WGL/GLX/etc APIs for creating OpenGL contexts do allow you to ask to not have alpha or depth. Note that if you ask for 0 alpha bits, that will almost certainly save you 0 memory, since it's more efficient to render to a 32-bit framebuffer than a 24-bit one. So you may as well keep it.
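For illustration, at the Win32/WGL level this is just a matter of what you put in the pixel format descriptor. A sketch, assuming an existing device context hdc; note that ChoosePixelFormat returns a closest match, which may still include alpha or depth bits:
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24; /* RGB only */
pfd.cAlphaBits = 0;  /* ask for no alpha... */
pfd.cDepthBits = 0;  /* ...and no depth */
int fmt = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, fmt, &pfd);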
However, since you're using SDL, and the Go binding of SDL, that's up to SDL and its Go binding. The sdl_gl_SetAttribute function should work, assuming SDL implements it correctly. If you want to verify this, you can just ask the framebuffer through OpenGL:
glBindFramebuffer(GL_FRAMEBUFFER, 0); //Use the default framebuffer.
GLint depthBits, alphaBits;
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_DEPTH, GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &depthBits);
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_BACK_LEFT, GL_FRAMEBUFFER_ATTACHMENT_ALPHA_SIZE, &alphaBits);

How to make a fading-to-black effect with OpenGL?

I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I tried several things, but they fail due to how OpenGL works.
I will explain how it would work:
If I draw one white pixel and move it one pixel in some direction each frame, then each frame every screen pixel should lose one R/G/B value (on the 0-255 scale), so after 255 frames the white pixel would be fully black. Moving the white pixel around would leave a gradient trail going evenly from white to black, each pixel one color value different from the previous one.
Edit: I would prefer a non-shader way of doing this, but if that's not possible I can accept a shader way too.
Edit2: Since there is some confusion here: I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT it does not work the way I want: there is a limit on how dark the pixels can get, so some of them always stay "visible" (above zero color value), because 1*0.9 = 0.9 gets rounded back up to 1, and so on. I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of a multiplicative (percentage) fade I want a linear one (always subtracting 1 from each R, G, B value on the 0-255 scale).
Edit3: Still some confusion left, so let's be clear: I want to improve on the effect you get by omitting GL_COLOR_BUFFER_BIT from glClear(). I don't want to see the pixels on my screen FOREVER, so I want to make them darker over time by drawing a quad over my scene that reduces each pixel's color value by 1 (on the 0-255 scale).
Edit4: I'll make it simple: I want an OpenGL method for this, and the effect should use as little power, memory, and bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels should get darker, etc. But as explained a few times above, that isn't working very well. The big NOs are: 1) reading pixels from the screen, modifying them one by one in a loop, and uploading them back; 2) rendering my objects X times with different darknesses to emulate the trail effect; 3) multiplying the color values, since that won't take the pixels all the way to black - they stay on screen forever at a certain brightness (see the explanation above).
If I draw one white pixel and move it one pixel in some direction each frame, then each frame every screen pixel should lose one R/G/B value (on the 0-255 scale), so after 255 frames the white pixel would be fully black. Moving the white pixel around would leave a gradient trail going evenly from white to black, each pixel one color value different from the previous one.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting a value from each of the RGB components will produce a different color, not a darker version of the same color. The RGB color (255, 128, 0), if you subtract 1 from it 128 times, becomes (127, 0, 0). The first color is orange; the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthbuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.
glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthbuffers);
glGenFramebuffers(2, fbos);
for(int i = 0; i < 2; ++i)
{
    glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, screenDepthbuffers[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthbuffers[i]);
    if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        //Error out here.
    }
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise a multiplicative fade instead:
shaderOutput = texture(prevImage, texCoord) * 0.95; // keep ~95% of the old color each frame
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using a glTexEnv function. And if you don't know what those are, I suggest learning shaders; it's so much easier in the long run.
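For reference, a hedged sketch of that fixed-function route (compatibility profile only): with GL_MODULATE, the texture color is multiplied by the current color, so drawing the previous frame's texture with a slightly-gray color darkens it:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor4f(0.95f, 0.95f, 0.95f, 1.0f); // each frame keeps ~95% of the old color
// ...then draw the fullscreen quad as before...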
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.
You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
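Spelled out, that suggestion is a sketch like the following; drawFullscreenQuad() is a hypothetical helper that covers the viewport:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_ALPHA);  // result = src + dst * src.alpha
glColor4f(0.0f, 0.0f, 0.0f, 0.95f); // black quad; keeps 95% of what is underneath
drawFullscreenQuad();
glDisable(GL_BLEND);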
Edit in response to your comment (reply doesn't fit in a comment):
You cannot fade single pixels depending on their age with a simple fade-to-black operation; a render target normally does not "remember" what was drawn to it in previous frames. One way around this is to alternate rendering between a pair of FBOs and use their alpha channel for the age, but you need a shader for that. You would first render the FBO containing the pixels at their previous positions, decreasing their alpha value by one - dropping them when alpha == 0, otherwise darkening them as their alpha decreases - and then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
Actually the scale should be 254.0/255.0, so the components fade away equally (and one that started at a lower value does not become zero before the others do).
If you only have a few moving pixels, you could re-render each pixel at all of its previous positions up to 255 "ticks" back. Since you need to re-render each of the pixels anyway, just render each one with the proper color gradient: the older the pixel, the darker. If you have a really large number of pixels, the dual-FBO approach might work.
I am writing "ticks" and not "frames" because frames can take a varying amount of time depending on the renderer and hardware, but you probably want the pixel trail to fade away within a constant time. That means you need to dim each pixel only after so-and-so many milliseconds, keeping its color for the frames in between.
One non-shader way of doing this, especially if the fade to black is the only thing going on on the screen, is to grab the contents of the screen via glReadPixels (IIRC), put those into a texture, and draw a rectangle over the screen with that texture. You can then modulate the rectangle's color toward black to achieve the effect you want.
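A rough sketch of that read-back path (slow, but shader-free); width, height, pixels, and fadeTex are assumed to be set up elsewhere:
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glBindTexture(GL_TEXTURE_2D, fadeTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor3f(0.95f, 0.95f, 0.95f); // modulate the rectangle toward black each frame
// ...draw a screen-sized rectangle textured with fadeTex...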
It is the drivers; Windows itself does not support OpenGL, or only a low version - 1.5, I think. All newer versions come with drivers from ATI, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?
It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (it may not if you're using another windowing API), or if the added complexity would be worth it, but adding a 2D library like SDL, which works with OpenGL, would let you work directly with the display surface's pixels in a reasonable fashion - something OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry and subtract 1 from each RGB component.
That's the easiest solution I can see, anyway, if using additional libraries with OpenGL is an option.

Problem saving an OpenGL FBO larger than the window

I'm rendering into an OpenGL offscreen framebuffer object (FBO) and would like to save it as an image. Note that the FBO is larger than the display size. I can render into the offscreen buffer and use it as a texture, which works. I can also "scroll" this larger texture across the display using an offset, which makes me confident that I am rendering into a larger context than the window.
If I save the offscreen buffer to an image file it always gets cropped. The code fragment for saving is:
void ofFBOTexture::saveImage(string fileName) {
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
// get the raw buffer from ofImage
unsigned char* pixels = imageSaver.getPixels();
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels);
imageSaver.saveImage(fileName);
}
Please note that the image content is cropped; the visible part is saved correctly (which means there is no error in pixel formats, GL_RGB issues, etc.), but the remaining space is filled with a single color.
So, my question is - what am I doing wrong?
Finally I solved the issue.
I have to activate the fbo for saving its contents:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
// save code
...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
while only selecting the fbo for glReadPixels via
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
doesn't suffice.
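Putting the fix together, the read sequence becomes (a sketch using the names from the question):
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo); // bind the FBO so glReadPixels reads from it
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // restore the window framebuffer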
(All other things were correct and tested, e.g. viewport sizes, width and height of the buffer, image texture, etc.)
Two questions to start: how are you creating imageSaver, and are you sure your width and height are correct (e.g. are you trying to save a 1024 x 1024 image? What size are you getting)?
You're not doing anything wrong; this has been quite common behaviour of OpenGL graphics drivers for a long time - I recall running up against exactly the same issue on NVIDIA GeForce 3 cards at least a decade ago, and I adopted a similar solution: rendering to an offscreen texture.
This sounds like you have the wrong viewport size or something similar.