I have a question about whether there is a way to get a correct alpha result when drawing something with alpha to coverage in OpenGL. I am drawing into a buffer that will be composited on top of a video, so I render onto a transparent black background and then un-premultiply the result for compositing.
However, some of the objects are drawn with alpha to coverage. The issue is that for the coverage resolve to produce the correct alpha over a transparent background, the alpha written to each covered sample needs to be 1. But the shader has to output the real alpha, say 0.75, for alpha to coverage to pick the right coverage mask, and that 0.75 also gets written to the covered samples, so the resolve averages 0.75 over 75% of the samples and gives 0.5625 instead of 0.75.
So basically I'm wondering if there is some way to output an alpha of 1 to the samples I am writing to, or failing that, another way to achieve the result I want (ideally still using alpha to coverage, because I need the order-independent transparency).
I don't mind using very modern OpenGL features or NVIDIA extensions for this, since the hardware requirements are already very specific.
OK, I figured it out: there is GL_SAMPLE_ALPHA_TO_ONE, which does exactly what I want. It forces the alpha written to each sample to the maximum value no matter what you output.
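For anyone finding this later, a minimal sketch of the state this implies (the draw call is a placeholder; the multisampled FBO setup is assumed to exist already):

    /* alpha to coverage turns the shader's alpha into a coverage mask,
       and GL_SAMPLE_ALPHA_TO_ONE then forces the alpha written to each
       covered sample to 1, so the resolve yields the coverage-derived alpha */
    glEnable(GL_MULTISAMPLE);
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
    glEnable(GL_SAMPLE_ALPHA_TO_ONE);

    drawTransparentObjects();   /* hypothetical draw call */

    glDisable(GL_SAMPLE_ALPHA_TO_ONE);
    glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);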
We are currently using GL_SRGB8_ALPHA8 for FBO color correction but it causes significant color banding in darker scenes.
Is there a version of GL_SRGB8_ALPHA8 that has 10 bits per color channel (like GL_RGB10_A2 does)? If not, what workarounds are there for this use case?
The attached image has had its contrast boosted to make the banding more visible, but it is still noticeable in the source as well.
On the surface, Direct3D 9 seems to support this because it doesn't encode sRGB into the formats themselves. Instead it sets the sampler state to decode to linear space, and I can't see how that wouldn't work properly on D3D9, or else textures would be filtered incorrectly; I don't know how else it would implement it. Even with GL_EXT_texture_sRGB_decode it was decided (there's a note in the spec) not to enable this D3D9-like behavior.
As usual, OpenGL seems to always be chasing the ghost of this now-old API. It could just have something like D3DSAMP_SRGBTEXTURE and it would have parity. Presumably the hardware implements it. Any banding may also depend on the monitor, since ultimately the output has to be down-converted to the monitor's color depth, which is probably much lower than 10 bits.
In the end we went with a linear 16-bit floating-point format.
I suspect drivers use it internally for sRGB anyway.
At any rate, as #mick-p noted, we're always limited by the display's color depth.
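For reference, a minimal sketch of what that looks like, assuming GL_RGBA16F as the 16-bit float format (width/height are placeholders):

    /* linear 16-bit float color attachment instead of GL_SRGB8_ALPHA8;
       lighting and blending stay in linear space, and the conversion to
       the display's sRGB happens once in the final pass */
    GLuint fbo, color;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_HALF_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color, 0);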
What does setting this variable do? For instance, if I set it to 4, what does that mean?
I read a description on glfw.org (see here: GLFW Window Guide) under the "Framebuffer related hints" section. The manual says "GLFW_SAMPLES specifies the desired number of samples to use for multisampling. Zero disables multisampling. GLFW_DONT_CARE means the application has no preference."
I also read a description of multisampling in general (see here: Multisampling by Shawn Hargreaves).
I have a rough idea of what multisampling means: when resizing and redrawing an image, the number of points used to redraw the image should be close enough together that what we see is an accurate representation of the image. The same idea pops up with digital oscilloscopes---say you're sampling a sinusoidal signal. If the sampling rate just so happens to be exactly equal to the frequency (f) of the wave, the scope displays a constant voltage, which is much different than the input signal you're hoping to see. To avoid that, the Nyquist Theorem tells us that we should sample at a rate of at least 2f. So I see how a problem can arise in computer graphics, but I don't know what exactly the function
glfwWindowHint(GLFW_SAMPLES, 4); does.
What does setting this variable do? For instance, if I set it to 4, what does that mean?
GLFW_SAMPLES is used to enable multisampling. So glfwWindowHint(GLFW_SAMPLES, 4) is a way to enable 4x MSAA in your application.
4x MSAA means that each pixel of the window's buffer consists of 4 subsamples, so each pixel stores 4 samples instead of 1. Thus a buffer with a size of 200x100 pixels holds as many samples as a 400x200 single-sample buffer would.
If you were to create an additional framebuffer with 4 times the pixels of the screen (double the width and double the height) and then sample it as a texture with GL_LINEAR filtering, you would get a roughly similar downsampling result. Note that this comparison only holds for 4x, as GL_LINEAR only takes the 4 samples closest to the pixel in question.
When it comes to anti-aliasing, MSAA is a really effective but relatively expensive solution. If you want a very clean and good-looking result, it's definitely the way to go. 4x MSAA is usually chosen because it offers a good balance between quality and performance.
The cheaper alternative in terms of performance is FXAA, which is done as a post-processing step and comes at very little cost. The difference is that MSAA renders at a higher sample count and downsamples to the wanted size without losing detail, whereas FXAA simply averages neighboring pixels as they are, essentially blurring the image. FXAA usually still gives a decent result.
Also, your driver will most likely enable it by default, but if it doesn't, call glEnable(GL_MULTISAMPLE).
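A minimal sketch putting it together (error checking and any extension loading omitted):

    #include <GLFW/glfw3.h>

    int main(void)
    {
        glfwInit();
        glfwWindowHint(GLFW_SAMPLES, 4);              /* ask for 4 samples per pixel */
        GLFWwindow *window = glfwCreateWindow(800, 600, "MSAA demo", NULL, NULL);
        glfwMakeContextCurrent(window);

        glEnable(GL_MULTISAMPLE);                     /* usually on by default, but explicit is safe */

        while (!glfwWindowShouldClose(window)) {      /* ...render as usual... */
            glClear(GL_COLOR_BUFFER_BIT);
            glfwSwapBuffers(window);
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }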
Lastly if you haven't already, then I highly recommend reading LearnOpenGL's Anti-Aliasing tutorial. It gives a really in-depth explanation of all of this.
I am experiencing the issue described in this article where the second color ramp is effectively being gamma-corrected twice, resulting in overbright and washed-out colors. This is in part a result of my using an sRGB framebuffer, but that is not the actual reason for the problem.
I'm testing textures in my test app on iOS8, and in particular I am currently using a PNG image file and using GLKTextureLoader to load it in as a cubemap.
By default, textures are treated as NOT being in sRGB space (even though that is invariably the space the image editing software used to build them saves them in).
Apple has made GLKTextureLoader do the glTexImage2D call for you, and it invariably calls it with the plain GL_RGB8 setting, whereas for actual correctness in later color operations the gamma encoding has to be undone so that our shaders sample linear brightness values from the textures.
Now I can actually see the argument that it is not required of most mobile applications to be pedantic about color operations and color correctness as applied to advanced 3D techniques involving color blending. Part of the issue is that it's unrealistic to use the precious shared device RAM to store textures at any bit depth greater than 8 bits per channel, and if we read our JPG/PNG/TGA/TIFF and gamma-uncorrect its 8 bits of sRGB into 8 bits linear, we're going to degrade quality.
So the process for most apps is to happily toss linear color correctness out the window, ignore gamma correction, and do blending in sRGB space. This suits Angry Birds very well: as a game with no shading or blending, it's perfectly sensible for it to do all operations in gamma-corrected color space.
So this brings me to the problem that I have now. I need to use EXT_sRGB and GLKit makes it easy for me to set up an sRGB framebuffer, and this works great on last-3-or-so-generation devices that are running iOS 7 or later. In doing this I address the dark and unnatural shadow appearance of an uncorrected render pipeline. This allows my lambertian and blinn-phong stuff to actually look good. It lets me store sRGB in render buffers so I can do post-processing passes while leveraging the improved perceptual color resolution provided by storing the buffers in this color space.
But the problem now as I start working with textures is that it seems like I can't even use GLKTextureLoader as it was intended, as I just get a mysterious error (code 18) when I set the options flag for SRGB (GLKTextureLoaderSRGB). And it's impossible to debug as there's no source code to go with it.
So I was thinking I could go build my texture loading pipeline back up with glTexImage2D and use GL_SRGB8 to specify that I want to gamma-uncorrect my textures before I sample them in the shader. However a quick look at GL ES 2.0 docs reveals that GL ES 2.0 is not even sRGB-aware.
At last I find the EXT_sRGB spec, which says
Add Section 3.7.14, sRGB Texture Color Conversion
If the currently bound texture's internal format is one of SRGB_EXT or
SRGB_ALPHA_EXT the red, green, and blue components are converted from an
sRGB color space to a linear color space as part of filtering described in
sections 3.7.7 and 3.7.8. Any alpha component is left unchanged. Ideally,
implementations should perform this color conversion on each sample prior
to filtering but implementations are allowed to perform this conversion
after filtering (though this post-filtering approach is inferior to
converting from sRGB prior to filtering).
The conversion from an sRGB encoded component, cs, to a linear component,
cl, is as follows.
       { cs / 12.92,                  cs <= 0.04045
  cl = {
       { ((cs + 0.055)/1.055)^2.4,    cs > 0.04045
Assume cs is the sRGB component in the range [0,1]."
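In code that conversion is just a tiny helper (my own sketch, not spec text; e.g. cs = 0.5 comes out to roughly cl ≈ 0.214):

    #include <math.h>

    /* sRGB-encoded component in [0,1] -> linear component,
       per the piecewise formula quoted above */
    static float srgb_to_linear(float cs)
    {
        return (cs <= 0.04045f) ? cs / 12.92f
                                : powf((cs + 0.055f) / 1.055f, 2.4f);
    }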
Since I've never dug this deep when implementing a game engine for desktop hardware (where I would expect color resolution concerns to be essentially moot once render buffers have 16 bits per channel or more), my understanding of how this works is unclear. But this paragraph does go some way toward reassuring me that I can have my cake and eat it too: I keep all 8 bits of color information if I load the textures using the SRGB_EXT image storage format.
Here in OpenGL ES 2.0 with this extension I can use SRGB_EXT or SRGB_ALPHA_EXT, rather than the analogous GL_SRGB/GL_SRGB_ALPHA (or the sized GL_SRGB8/GL_SRGB8_ALPHA8) from desktop GL.
My apologies for not presenting a simple answerable question. Let it be this one: Am I barking up the wrong tree here or are my assumptions more or less correct? Feels like I've been staring at these specs for far too long now. Another way to answer my question is if you can shed some light on the GLKTextureLoader error 18 that I get when I try to set the sRGB option.
It seems there is yet more reading for me to do, as I have to decide whether to branch my code into one path that uses GL ES 2.0 with EXT_sRGB and another that uses GL ES 3.0. Comparing its glTexImage2D documentation with the other GL versions, ES 3.0 certainly looks promising and appears much closer to OpenGL 4, so I really like that ES 3 will bring mobile devices a lot closer to the desktop API.
Am I barking up the wrong tree here or are my assumptions more or less correct?
Your assumptions are correct. If the GL_EXT_sRGB OpenGL ES extension is supported, both sRGB framebuffers (with automatic conversion from linear to gamma-encoded sRGB on write) and sRGB texture formats (with automatic conversion from sRGB to linear when sampling) are available, so that is definitely the way to go if you want to work in a linear color space.
I can't help with that GLKit issue, no idea about that.
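If you do end up replacing GLKTextureLoader with your own glTexImage2D call, a sketch under ES 2.0 + EXT_sRGB could look like this (pixel data and dimensions are assumed to come from your own PNG decode; note that ES 2.0 requires format and internalformat to match):

    /* upload 8-bit sRGB-encoded RGBA data so sampling converts texels to linear */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_ALPHA_EXT,
                 width, height, 0,
                 GL_SRGB_ALPHA_EXT, GL_UNSIGNED_BYTE, pixels);
    /* on ES 3.0 the equivalent would be GL_SRGB8_ALPHA8 / GL_RGBA / GL_UNSIGNED_BYTE */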
First of all: Windows XP SP3, 2 GB RAM, Intel Core 2 Duo 2.33 GHz, NVIDIA 9600GT 1 GB RAM, OpenGL 3.3, fully updated.
Short description of what I am doing: ideally I need to put ONE single pixel into a GL texture (A) using glTexSubImage2D every frame. Then I modify the texture inside a shader + FBO + camera-facing-quad setup and replace the original image with the result.
Of course, I don't want an FBO feedback loop, so instead I put the modified version into a temporary texture and do the update separately with glCopyTexSubImage2D.
The sequence is now:
1) Put one pixel in GL texture A using glTexSubImage2D every frame (with width = height = 1).
2) Use this modified texture A inside the shader/FBO/quad setup to render into a different texture (B).
3) Overwrite A with the resulting texture B using glCopyTexSubImage2D.
4) Repeat...
By repeating this loop I want to achieve a slow fading effect by multiplying the color values in the shader by 0.99 every frame.
Two things are badly wrong:
1) With a fading factor of 0.99 applied every frame, the fading stops at RGB 48,48,48, leaving a trail of greyish pixels that never fully fade out.
2) The program runs at 100 FPS. Very bad, because if I comment out the glCopyTexSubImage2D the program goes at 1000 FPS!
I also get 1000 FPS by commenting out just glTexSubImage2D and leaving glCopyTexSubImage2D alone. This is to clarify that neither glTexSubImage2D nor glCopyTexSubImage2D is the bottleneck by itself (I tried replacing glCopyTexSubImage2D with a secondary FBO to do the copy, with the same results).
Observation: the bottleneck only appears when both of those calls are in use!
Hard mode: no PBOs pls.
Link with source and exe: http://www.mediafire.com/?ymu4v042a1aaha3 (Code::Blocks and SDL used). FPS counts are written to stdout.txt.
I am asking for a workaround for the two issues described above. Expected result: a full fade-out to plain black at 800-1000 FPS.
To problem 1:
You are experiencing precision (and quantization) issues here. I assume you are using some 8-bit UNORM framebuffer format, so anything you write to it will be rounded to the nearest of 256 discrete levels. Think about it: 48 * 0.99 = 47.52, which is rounded back to 48, so it never gets any darker than that. Using a real floating-point format would be a solution, but it is likely to noticeably decrease overall performance...
The fade-out operation you chose is simply not the best choice; it would be better to add a linear term that guarantees the value decreases by at least 1/255 per frame, for example as in the shader sketch below.
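A minimal fragment shader sketch of that idea (the uniform/varying names are placeholders for whatever your quad pass already uses):

    /* fade shader with a linear floor so 8-bit quantization can't stall it;
       "u_prev" and "v_uv" are hypothetical names */
    static const char *fade_fragment_src =
        "uniform sampler2D u_prev;                                       \n"
        "varying vec2 v_uv;                                              \n"
        "void main() {                                                   \n"
        "    vec3 c = texture2D(u_prev, v_uv).rgb;                       \n"
        "    /* exponential fade, but drop by at least 1/255 per frame */\n"
        "    vec3 faded = max(min(c * 0.99, c - 1.0/255.0), 0.0);        \n"
        "    gl_FragColor = vec4(faded, 1.0);                            \n"
        "}                                                               \n";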
To problem 2:
It is hard to say what the actual bottleneck here is. As you are not using PBOs, you are limited to synchronous texture updates.
However, why do you need to do that copy operation at all? The standard approach to this kind of thing would be some texture/FBO/color buffer "ping-pong", where you just swap the "roles" of the textures after each iteration. So you get the sequence (sketched in code after the list):
update A
render into B (reading from A)
update B
render into A (reading from B)
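A rough sketch of one frame of that ping-pong (texA/texB, fboA/fboB and the quad-drawing helper are placeholders assumed to be set up elsewhere):

    /* two texture/FBO pairs; swap roles each frame instead of copying B back to A */
    static GLuint srcTex, srcFbo, dstTex, dstFbo;   /* initialized once to A and B */

    void frame(int x, int y, const unsigned char pixel[4])
    {
        /* 1) splat the new pixel into the current source texture */
        glBindTexture(GL_TEXTURE_2D, srcTex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 1, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixel);

        /* 2) render the faded result into the other texture's FBO */
        glBindFramebuffer(GL_FRAMEBUFFER, dstFbo);
        drawFullscreenQuadWithFadeShader(srcTex);   /* hypothetical helper */

        /* 3) swap roles for the next frame; no glCopyTexSubImage2D needed */
        GLuint t;
        t = srcTex; srcTex = dstTex; dstTex = t;
        t = srcFbo; srcFbo = dstFbo; dstFbo = t;

        /* finally draw srcTex (now the newest result) to the screen */
    }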
Problem 2: splatting arbitrary pixels into a texture as fast as possible.
Since probably the fastest way to dynamically upload data from main memory to the GPU is vertex arrays or VBOs, the solution to problem 2 becomes trivial:
1) create a vertex array and a color array (or interleave coordinates and colors; performance/bandwidth may vary);
2) Z component = 0, since we want the points to lie on the floor;
3) camera pointing downwards with an orthographic projection (being sure to match the coordinate ranges exactly to the screen size);
4) render to texture with an FBO using GL_POINTS, with glPointSize = 1 and GL_POINT_SMOOTH disabled (see the sketch below).
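A rough sketch of that setup in old fixed-pipeline style, assuming an FBO with the target texture already attached and xy[]/rgb[] arrays filled in by the caller:

    /* splat pixels into a texture-backed FBO as GL_POINTS */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, TEX_W, TEX_H);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, TEX_W, 0.0, TEX_H, -1.0, 1.0);   /* match coordinates to texels exactly */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glDisable(GL_POINT_SMOOTH);
    glPointSize(1.0f);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, xy);          /* z omitted: points lie on z = 0 */
    glColorPointer(3, GL_UNSIGNED_BYTE, 0, rgb);
    glDrawArrays(GL_POINTS, 0, numPoints);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);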
Pretty standard. Now the program runs at 750 FPS. Close enough. My dreams were all like "Hey mom, look! I'm running glTexSubImage2D at 1000 FPS!" and then, meh. Though glCopyTexSubImage2D is very fast. Would recommend.
Not sure if this is the best way to GPU-accelerate fades, but given the results, one must note a strong concentration of the Force with this one. Anyway, the problem with the fading stopping halfway is fixed by applying a minimum constant decrement, so even where the exponential curve stalls, the fade will finish no matter what.
I am rewriting an opengl-based gis/mapping program. Among other things, the program allows you to load raster images of nautical charts, fix them to lon/lat coordinates and zoom and pan around on them.
The previous version of the program uses a custom tiling system, where in essence it manually creates mipmaps of the original image, in the form of 256x256-pixel tiles at various power-of-two zoom levels. A tile for zoom level n - 1 is constructed from four tiles from zoom level n, using a simple average-of-four-points algorithm. So, it turns off opengl mipmapping, and instead when it comes time to draw some part of the chart at some zoom level, it uses the tiles from the nearest-match zoom level (i.e., the tiles are in power-of-two zoom levels but the program allows arbitrary zoom levels) and then scales the tiles to match the actual zoom level. And of course it has to manage a cache of all these tiles at various levels.
It seemed to me that this tiling system was overly complex. It seemed like I should be able to let the graphics hardware do all of this mipmapping work for me. So in the new program, when I read in an image, I chop it into textures of 1024x1024 pixels each. Then I fix each texture to its lon/lat coordinates, and then I let opengl handle the rest as I zoom and pan around.
It works, but the problem is: My results are a bit blurrier than the original program, which matters for this application because you want to be able to read text on the charts as early as possible, zoom-wise. So it's seeming like the simple average-of-four-points algorithm the original program uses gives better results than opengl + my GPU, in terms of sharpness.
I know there are several glTexParameter settings to control some aspects of how mipmaps work. I've tried various combinations of GL_TEXTURE_MAX_LEVEL (anywhere from 0 to 10) with various settings for GL_TEXTURE_MIN_FILTER. When I set GL_TEXTURE_MAX_LEVEL to 0 (no mipmaps), I certainly get "sharp" results, but they are too sharp, in the sense that pixels just get dropped here and there, so the numbers are unreadable at intermediate zooms. When I set GL_TEXTURE_MAX_LEVEL to a higher value, the image looks quite good when you are zoomed far out (e.g., when the whole chart fits on the screen), but as you zoom in to intermediate zooms, you notice the blurriness especially when looking at text on the charts. (I.e., if it weren't for the text you might think "wow, opengl is doing a nice job of smoothly scaling my image." but with the text you think "why is this chart out of focus?")
My understanding is that basically you tell opengl to generate mipmaps, and then as you zoom in it picks the appropriate mipmaps to use, and there are some limited options for interpolating between the two closest mipmap levels, and either using the closest pixels or averaging the nearby pixels. However, as I say, none of these combinations seem to give quite as clear results, at the same zoom level on the chart (i.e., a zoom level where text is small but not minuscule, like the equivalent of "7 point" or "8 point" size), as the previous tile-based version.
My conclusion is that the mipmaps that opengl creates are simply blurrier than the ones the previous program created with the average-four-point algorithm, and no amount of choosing the right mipmap or LINEAR vs NEAREST is going to get the sharpness I need.
Specific questions:
(1) Does it seem right that opengl is in fact making blurrier mipmaps than the average-four-points algorithm from the original program?
(2) Is there something I might have overlooked in my use of glTexParameter that could give sharper results using the mipmaps opengl is making?
(3) Is there some way I can get opengl to make sharper mipmaps in the first place, such as by using a "cubic" filter or otherwise controlling the mipmap creation process? Or for that matter it seems like I could use the same average-four-points code to manually generate the mipmaps and hand them off to opengl. But I don't know how to do that...
(1) it seems unlikely; I'd expect it just to use a box filter, which is average four points in effect. Possibly it's just switching from one texture to a higher resolution one at a different moment — e.g. it "Chooses the mipmap that most closely matches the size of the pixel being textured", so a 256x256 map will be used to texture a 383x383 area, whereas the manual system it replaces may always have scaled down from 512x512 until the target size was 256x256 or less.
(2) not that I'm aware of in base GL, but if you were to switch to GLSL and the programmable pipeline then you could use the 'bias' parameter to texture2D if the problem is that the lower resolution map is being used when you don't want it to be. Similarly, the GL_EXT_texture_lod_bias extension can do the same in the fixed pipeline. It's an NVidia extension from a decade ago and is something all programmable cards could do, so it's reasonably likely you'll have it.
(EDIT: reading the extension more thoroughly, texture bias migrated into the core spec of OpenGL in version 1.4; clearly my man pages are very out of date. Checking the 1.4 spec, page 279, you can supply a GL_TEXTURE_LOD_BIAS)
(3) yes — if you disable GL_GENERATE_MIPMAP then you can use glTexImage2D to supply whatever image you like for every level of scale, that being what the 'level' parameter dictates. So you can supply completely unrelated mip maps if you want.
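For example, a sketch of that manual path (downscale_by_half stands in for your own average-four-points or Lanczos code; it is a hypothetical helper, not an OpenGL call, and buffer freeing is omitted):

    /* supply hand-built mipmaps instead of letting GL generate them */
    unsigned char *downscale_by_half(const unsigned char *src, int w, int h);  /* your own filter */

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);      /* don't auto-generate */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

    int level = 0, w = 1024, h = 1024;
    unsigned char *img = base_image;                /* level 0 pixels, assumed loaded */
    for (;;) {
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, img);
        if (w == 1 && h == 1)
            break;                                  /* full chain supplied down to 1x1 */
        img = downscale_by_half(img, w, h);
        if (w > 1) w /= 2;
        if (h > 1) h /= 2;
        ++level;
    }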
To answer your specific points, the four-point filtering you mention is equivalent to box-filtering. This is less blurry than higher-order filters, but can result in aliasing patterns. One of the best filters is the Lanczos filter. I suggest you calculate all of your mipmap levels from the base texture using a Lanczos filter and crank up the anisotropic filtering settings on your graphics card.
I assume that the original code managed textures itself because it was designed to view data sets that are too large to fit into graphics memory. This was probably a bigger problem in the past, but is still a concern.