Rendering to framebuffer and then to screen has reduced quality - opengl

I seem to have a problem with my rendering. When I render to a framebuffer and then to the screen, the images seem less vibrant and kind of faded, even simple ones.
In the picture above, the pink box on the right is rendered directly onto the screen buffer and the ones on the left are first rendered onto a framebuffer and then onto the screen.
I am using a multisampled framebuffer and it seems to have made no difference. I also tried blending only once by using GL_RGB for the framebuffer's color texture, but that didn't help either. Any ideas?

The issue ended up being the size of the framebuffer. It was too small, which made the quality suffer. I multiplied the width and height by 6 and the quality went up.

Related

OpenGL and DirectX Viewports and Rendertargets

I am still trying to get the same result in OpenGL and in DirectX.
Now my problem is with render targets and viewports.
What I learned so far:
DirectX's backbuffer stretches if the window is resized
-- the size of the rendered texture changes
OpenGL's backbuffer is resized if the window is resized
-- the rendered texture stays where it is rendered
What I did here was change OpenGL's viewport to the window size. Now both have the same result: the rendered texture is stretched.
One con:
- OpenGL's viewport size can't be set like in DirectX, because it is tied to the window size.
Now when rendering a render target, this happens:
DirectX: size matters. If the size is greater than the backbuffer, the texture takes up only a small area; if the size is smaller than the backbuffer, the texture takes up a large area.
OpenGL: size doesn't matter; the rendered content stays the same / stretches.
Now my question is:
How do I get the same result in OpenGL as in DirectX?
I want the result I have in DirectX within OpenGL. Is my idea of doing it so far right, or is there a better one?
What I did: draw everything in OpenGL to a framebuffer and blit that to the backbuffer. Now the content is rescalable, just like in DirectX.

Get pixel behind the current pixel

I'm writing a program in C++ with GLUT, rendering a 3D model in a window.
I'm using glReadPixels to get the image of the scene displayed in the window.
And I would like to know how I can get, for a specific pixel (x, y), not directly its color but the color of the next object behind.
If I render a blue triangle with a red triangle in front of it, glReadPixels gives me the red color from the red triangle.
I would like to know how I can get the color from the blue triangle, i.e. the color I would get from glReadPixels if the red triangle weren't there.
The default framebuffer only retains the topmost color. To get what you're suggesting would require a specific rendering pipeline.
For instance you could:
Create an offscreen framebuffer of the same dimensions as your target viewport
Render a depth-only pass to the offscreen framebuffer, storing the depth values in an attached texture
Re-render the scene with a special shader that only draws fragments whose post-transformation Z value is GREATER than the value in the previously recorded depth texture
The final result of the last render should be the original scene with the top layer stripped off.
Edit:
It would require only a small amount of new code to create the offscreen framebuffer and render a depth-only version of the scene to it, and you could use your existing rendering pipeline in combination with that to execute steps 1 and 2.
However, I can't think of any way you could then re-render the scene to get the information you want in step 3 without a shader, because it needs both the standard depth test and a test against the provided depth texture. That doesn't mean there isn't one, just that I'm not well versed enough in GL tricks to think of one.
I can think of other ways of trying to accomplish the same task for specific points on the screen by fiddling with the rendering system, but they're all far more convoluted than just writing a shader.

Rendering a Semi-Transparent Sprite to a Texture

I have a 1024x1024 background texture and am trying to render a 100x100 sprite (also stored in a texture) to the bottom left corner of the background texture.
I want to render the sprite at 50% opacity. This needs to be done in the CPU, not the GPU using a shader. Most examples I've found are using shaders to achieve this.
What's the best way to do this?
I suppose you mean using CPU-side OpenGL commands, i.e. the fixed-function pipeline; I deduce this from the "no shader" request.
Actually "doing this on the CPU" would mean retrieving/mapping the texture to access it on the CPU, looping over its pixels, and copying the result back to the graphics card with glTexImage (or unmapping the texture afterward). That approach would be terribly inefficient.
So you just need to activate blending.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
and render in order: the background first, then a small quad with your 100x100 image. The blend will take the alpha channel from your 100x100 image; you can set it to a constant 50% in an image-editing tool.

OpenGL: Weird transparency blending result

I'm working on creating a transparent GUI in OpenGL, and am trying to get text rendered over some semi-transparent quads, but the results are odd.
If I render the text by itself, with nothing behind it, it looks fine:
However, if I render a semi-transparent quad behind it (rendering the quad before rendering the text), I get this:
I have blending set to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). The font texture is an all-white texture with the character shapes in the alpha channel.
Do I need to be doing something special when performing alpha-transparency over an existing layer of transparency? Or is there something else I need to check?
The alpha values of your font texture seem to be off. They should be 0 for texels you want to be invisible and 1 (or 255 in bytes) for visible texels. You should check the texture and make sure the alpha values are correct.
Instead of alpha blending, you can use alpha testing. This completely discards fragments with an alpha value below a certain threshold, and is often much faster than blending.
glDisable(GL_BLEND);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.96f); // Or some fitting threshold for your texture
This might work even if your texture's alpha is off in some places, but that doesn't look like the case here, as the 's' and 't' seem to have low alpha in places where it should be 1.
Thanks for the responses. There was nothing wrong with my font texture, but your suggestions led me to try a few other things. It turns out the problem wasn't the transparency at all: there was a bug in rendering the background quad, which caused it to also render the text quads, but using the background texture. Bah...

JOGL mipmaps and texture shimmering

I have a wall with a brick texture in my OpenGL 2 scene that keeps shimmering and flashing no matter what I set. When I'm zoomed in close (and can see the texture clearly), the flashing and shimmering stops. But when I'm zoomed out and moving around the scene, the flashing and shimmering is very pronounced. This is the texture code for the brick wall:
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_WRAP_S, GL2.GL_REPEAT);
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_WRAP_T, GL2.GL_REPEAT);
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_NEAREST);
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_LINEAR);
gl.glGenerateMipmap(GL2.GL_TEXTURE_2D);
brickwall.enable(gl);
brickwall.bind(gl);
//...
brickwall.disable(gl);
From what I've googled, it seems this is a problem that mipmapping solves. But my question is: how does one do this? Do I have to create, load, and set parameters for all the various power-of-two-sized images? Can anyone give me an example of loading and displaying a JOGL 2 texture using mipmaps that won't flicker and shimmer when zooming and moving about a scene?
You are generating the mipmap chain with glGenerateMipmap, but you didn't set an appropriate MIN filter:
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_MIN_FILTER,GL2.GL_LINEAR_MIPMAP_LINEAR);
The *MIPMAP* filters use mipmaps, the other texture filters don't.