Color mapping a texture in OpenGL

I am displaying a texture that I want to manipulate without affecting the image data. I want to clamp the texel values so that anything below the lower value becomes 0, anything above the upper value becomes 0, and anything in between is linearly mapped from 0 to 1.
Originally, to display my image I was using glDrawPixels, and to solve the problem above I would create a color map using glPixelMap. This worked beautifully. However, for performance reasons I have begun using textures to display my image, and the glPixelMap approach no longer seems to work. That approach may still work, but I was unable to get it working.
I then tried using glPixelTransfer to set scales and biases. This seemed to have some effect (not necessarily the desired one) on the first pass, but when the upper and lower constraints were changed, no effect was visible.
I was then told that fragment shaders would work. But after a call to glGetString(GL_EXTENSIONS), I found that GL_ARB_fragment_shader was not supported. Also, a call to glCreateShaderObjectARB caused a NullReferenceException.
So now I am at a loss. What should I do? Please Help.
Whatever might work, I am willing to try. The vendor is Intel and the renderer is Intel 945G. I am unfortunately confined to a graphics card that is integrated on the motherboard and only supports OpenGL 1.4.
Thanks for your response thus far.

Unless you have a pretty old graphics-card, it's surprising that you don't have fragment-shader support. I'd suggest you try double-checking using this.
Also, are you sure you want anything above the max value to be 0? Perhaps you meant 1? If you did mean 1 and not 0, then there are quite long-winded ways to do what you're asking.
The condensed answer is that you use multiple rendering passes. First you render the image at normal intensity. Then you use subtractive blending (look up glBlendEquation) to subtract your minimum value. Then you use additive blending to scale everything up by 1/(max-min) (which may need multiple passes).
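As a sanity check of that recipe, here's the per-channel math those passes implement, as a small C sketch (the function name is mine, not part of any GL API):

```c
#include <assert.h>

/* Window/level mapping: values at or below lo -> 0, at or above hi -> 1,
 * linear ramp in between. This is what subtract-then-scale computes per
 * channel: (x - lo) * (1 / (hi - lo)), clamped to [0, 1] by the blend
 * hardware's saturation. */
float window_level(float x, float lo, float hi)
{
    float y = (x - lo) / (hi - lo);
    if (y < 0.0f) y = 0.0f;   /* subtractive blending clamps at 0 */
    if (y > 1.0f) y = 1.0f;   /* framebuffer writes clamp at 1   */
    return y;
}
```

Note that this maps everything above the upper value to 1, not 0, per the caveat above.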
If you really want to do this, please post back the GL_VENDOR and GL_RENDERER for your graphics-card.
Edit: Hmm. The Intel 945G doesn't have ARB_fragment_shader, but it does have ARB_fragment_program, which will also do the trick.
Your fragment program should look something like this (it's been a while since I wrote any, so it may have bugs):
!!ARBfp1.0
ATTRIB tex = fragment.texcoord[0];
PARAM cbias = program.local[0];
PARAM cscale = program.local[1];
OUTPUT cout = result.color;
TEMP tmp;
TXP tmp, tex, texture[0], 2D;
SUB tmp, tmp, cbias;
MUL cout, tmp, cscale;
END
You load this into OpenGL like so:
GLuint prog;
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, strlen(src), src);
/* check for assembly errors */
GLint errPos = -1;
glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &errPos);
if (errPos != -1)
    printf("Program error at offset %d: %s\n", errPos, glGetString(GL_PROGRAM_ERROR_STRING_ARB));
glDisable(GL_FRAGMENT_PROGRAM_ARB);
Then, before rendering your geometry, you do this:
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
colour4f cbias = cmin;
colour4f cscale = 1.0f / (cmax-cmin);
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 0, cbias.r, cbias.g, cbias.b, cbias.a);
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 1, cscale.r, cscale.g, cscale.b, cscale.a);
//Draw your textured geometry
glDisable(GL_FRAGMENT_PROGRAM_ARB);

Also see whether the GL_ARB_fragment_program extension is supported. That extension provides the ASM-style fragment programs and is widely supported on OpenGL 1.4-class hardware.

It's really unfortunate that you're using such an ancient version of OpenGL. Can you upgrade with your card?
For a more modern OGL 2.x, this is exactly the kind of program that GLSL is for. Great documentation can be found here:
OpenGL Documentation
OpenGL Shading Language

Related

sRGB correction for textures in OpenGL on iOS

I am experiencing the issue described in this article where the second color ramp is effectively being gamma-corrected twice, resulting in overbright and washed-out colors. This is in part a result of my using an sRGB framebuffer, but that is not the actual reason for the problem.
I'm testing textures in my test app on iOS8, and in particular I am currently using a PNG image file and using GLKTextureLoader to load it in as a cubemap.
By default, textures are NOT treated as being in sRGB space (even though they are almost invariably saved that way by the image-editing software used to author them).
The consequence is that GLKTextureLoader makes the glTexImage2D call for you, and it invariably calls it with the GL_RGB8 internal format, whereas for correctness in later color operations we have to undo the gamma encoding in order to obtain linear brightness values for our shaders to sample.
Now I can actually see the argument that it is not required of most mobile applications to be pedantic about color operations and color correctness as applied to advanced 3D techniques involving color blending. Part of the issue is that it's unrealistic to use the precious shared device RAM to store textures at any bit depth greater than 8 bits per channel, and if we read our JPG/PNG/TGA/TIFF and gamma-uncorrect its 8 bits of sRGB into 8 bits linear, we're going to degrade quality.
So the process for most apps is to just happily toss linear color correctness out the window, ignore gamma correction, and do blending in sRGB space. This suits Angry Birds very well: it is a game with no shading or blending, so it's perfectly sensible to do all operations in gamma-corrected color space.
So this brings me to the problem that I have now. I need to use EXT_sRGB and GLKit makes it easy for me to set up an sRGB framebuffer, and this works great on last-3-or-so-generation devices that are running iOS 7 or later. In doing this I address the dark and unnatural shadow appearance of an uncorrected render pipeline. This allows my lambertian and blinn-phong stuff to actually look good. It lets me store sRGB in render buffers so I can do post-processing passes while leveraging the improved perceptual color resolution provided by storing the buffers in this color space.
But the problem now as I start working with textures is that it seems like I can't even use GLKTextureLoader as it was intended, as I just get a mysterious error (code 18) when I set the options flag for SRGB (GLKTextureLoaderSRGB). And it's impossible to debug as there's no source code to go with it.
So I was thinking I could go build my texture loading pipeline back up with glTexImage2D and use GL_SRGB8 to specify that I want to gamma-uncorrect my textures before I sample them in the shader. However a quick look at GL ES 2.0 docs reveals that GL ES 2.0 is not even sRGB-aware.
At last I find the EXT_sRGB spec, which says
Add Section 3.7.14, sRGB Texture Color Conversion
If the currently bound texture's internal format is one of SRGB_EXT or
SRGB_ALPHA_EXT the red, green, and blue components are converted from an
sRGB color space to a linear color space as part of filtering described in
sections 3.7.7 and 3.7.8. Any alpha component is left unchanged. Ideally,
implementations should perform this color conversion on each sample prior
to filtering but implementations are allowed to perform this conversion
after filtering (though this post-filtering approach is inferior to
converting from sRGB prior to filtering).
The conversion from an sRGB encoded component, cs, to a linear component,
cl, is as follows.
cl = cs / 12.92                    if cs <= 0.04045
cl = ((cs + 0.055) / 1.055)^2.4    if cs > 0.04045
Assume cs is the sRGB component in the range [0,1]."
I've never dug this deep when implementing a game engine for desktop hardware (where color-resolution considerations are essentially moot once render buffers are 16 bits per channel or more), so my understanding of how this works is shaky. But this paragraph does go some way toward reassuring me that I can have my cake and eat it too: all 8 bits of color information are retained if I load the textures using the SRGB_EXT image storage format.
Here in OpenGL ES 2.0 with this extension I can use SRGB_EXT or SRGB_ALPHA_EXT rather than the analogous SRGB or SRGB8_ALPHA8 from vanilla GL.
My apologies for not presenting a simple answerable question. Let it be this one: Am I barking up the wrong tree here or are my assumptions more or less correct? Feels like I've been staring at these specs for far too long now. Another way to answer my question is if you can shed some light on the GLKTextureLoader error 18 that I get when I try to set the sRGB option.
It seems like there is yet more reading for me to do as I have to decide whether to start to branch my code to get one codepath that uses GL ES 2.0 with EXT_sRGB, and the other using GL ES 3.0, which certainly looks very promising by comparing the documentation for glTexImage2D with other GL versions and appears closer to OpenGL 4 than the others, so I am really liking that ES 3 will be bringing mobile devices a lot closer to the API used on the desktop.
Am I barking up the wrong tree here or are my assumptions more or less correct?
Your assumptions are correct. If the GL_EXT_sRGB OpenGL ES extension is supported, both sRGB framebuffers (with automatic conversion from linear to gamma-corrected sRGB) and sRGB texture formats (with automatic conversion from sRGB to linear RGB when sampling from it) are available, so that is definitively the way to go, if you want to work in a linear color space.
I can't help with that GLKit issue, no idea about that.

OpenGL: How to select correct mipmapping method automatically?

I'm having a problem with mipmapping textures on different hardware. I use the following code:
char *exts = (char *)glGetString(GL_EXTENSIONS);
if (strstr(exts, "SGIS_generate_mipmap") == NULL) {
    // use gluBuild2DMipmaps()
} else {
    // use GL_GENERATE_MIPMAP
}
But on some cards it reports that GL_GENERATE_MIPMAP is supported when it's not, so the card reads memory from where the mipmap is supposed to be and ends up rendering other textures into those mip levels.
I tried glGenerateMipmapEXT(GL_TEXTURE_2D), but it makes all my textures white. I enabled GL_TEXTURE_2D before calling that function (as instructed).
I could just use gluBuild2DMipmaps() for everyone, since it works. But I don't want to make new cards load 10x slower because there are two users with really old cards.
So how do you choose the mipmap method correctly?
glGenerateMipmap has been core functionality since OpenGL 3.0; there it is not an extension.
You have the following options:
Check the OpenGL version; if it is at least the first version that supports glGenerateMipmap, use glGenerateMipmap.
(I'd recommend this one) OpenGL 1.4 through 2.1 supports glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE) (see this), which generates mipmaps from the base level automatically. This became deprecated in OpenGL 3, but you should still be able to use it.
Use GLEE or GLEW and use the glewIsSupported / gleeIsSupported call to check for the extension.
Also, I think that instead of relying on extensions, it is easier to stick with the OpenGL specification. A lot of hardware supports OpenGL 3, so you should be able to get most of the required functionality (shaders, mipmaps, framebuffer objects, geometry shaders) as part of the core specification rather than as extensions.
If drivers lie, there's not much you can do about it. Also remember that glGenerateMipmapEXT is part of the GL_EXT_framebuffer_object extension.
What you are doing wrong is checking for the SGIS_generate_mipmap extension while using GL_GENERATE_MIPMAP, which belongs to core OpenGL (since 1.4), but that's not really the problem.
The issue you describe sounds like a very nasty OpenGL implementation bug; I would bypass it by using gluBuild2DMipmaps on those cards (keep a list and check at startup).

Antialiasing an entire scene after rasterization

I ran into an issue while writing an OpenGL program: I want to achieve full-scene anti-aliasing and I don't know how. I turned on forced antialiasing in the Nvidia control panel, and that produced exactly the result I was after. For now I do it with GL_POLYGON_SMOOTH, which is obviously neither efficient nor good-looking. Here are my questions:
1) Should I use multisampling?
2) Where in the pipeline does OpenGL blend the colors for antialiasing?
3) What alternatives exist besides GL_*_SMOOTH and multisampling?
GL_POLYGON_SMOOTH is not a method to do Full-screen AA (FSAA).
Not sure what you mean by "not efficient" in this context, but it certainly is not good-looking, because of its tendency to blend in the middle of meshes (at interior triangle edges).
Now, with respect to FSAA and your questions:
Multisampling (aka MSAA) is the standard way today to do FSAA. The usual alternative is super-sampling (SSAA), which consists of rendering at a higher resolution and downsampling at the end. It's much more expensive.
The specification says that logically, the GL keeps a sample buffer (4x the size of the pixel buffer for 4xMSAA) and a pixel buffer (for a total of 5x the memory), and on each sample write to the sample buffer it updates the pixel buffer with the value resolved from the current 4 samples. (This is not called blending, by the way; blending is what happens at the time of the write into the sample buffer, controlled by glBlendFunc et al.) In practice, this is not what happens in hardware. Typically you write only to the sample buffer (and the hardware usually tries to compress the data), and when the time comes to use it, the GL implementation resolves the full buffer at once, before the usage happens. This also helps if you actually use the sample buffer directly (then there is no need to resolve at all).
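To put rough numbers on that logical 5x figure, assuming a 32-bit color format (this is the spec's bookkeeping model, not what any particular GPU actually allocates):

```c
#include <assert.h>
#include <stddef.h>

/* Logical storage for an N-sample MSAA color buffer as the spec models it:
 * N samples per pixel in the sample buffer, plus one resolved pixel each,
 * i.e. (samples + 1) times the aliased framebuffer size. */
size_t msaa_color_bytes(size_t w, size_t h, size_t samples, size_t bytes_per_pixel)
{
    size_t sample_buffer  = w * h * samples * bytes_per_pixel;
    size_t resolve_buffer = w * h * bytes_per_pixel;
    return sample_buffer + resolve_buffer;
}
```

At 1920x1080 with 4xMSAA and RGBA8 that comes to about 40 MB, which is why hardware prefers compression and deferred resolves.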
I covered SSAA and its cost above. A newer technique is morphological anti-aliasing (MLAA), which is actively being researched. The idea is to run a post-processing pass on the fully rendered image and anti-alias whatever looks like a sharp edge. Bottom line: it is not implemented by the GL itself; you have to code it as a post-processing pass. I include it for reference, but it can cost quite a lot.
I wrote a post about this here: Getting smooth, big points in OpenGL
You have to specify WGL_SAMPLE_BUFFERS and WGL_SAMPLES (or their GLX-prefixed equivalents on X11) before creating your OpenGL context, when selecting a pixel format or visual.
On Windows, make sure that you use wglChoosePixelFormatARB() if you want a pixel format with extended traits, NOT ChoosePixelFormat() from GDI/GDI+. wglChoosePixelFormatARB has to be queried with wglGetProcAddress from the ICD driver, so you need to create a dummy OpenGL context beforehand. WGL function pointers are valid even after the OpenGL context is destroyed.
WGL_SAMPLE_BUFFERS is a boolean (1 or 0) that toggles multisampling. WGL_SAMPLES is the number of samples you want, typically 2, 4, or 8.
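For reference, the attribute list passed to wglChoosePixelFormatARB might look like this (constants copied from wglext.h; double-check them against your headers, and the non-multisample values here are just one reasonable configuration):

```c
#include <assert.h>

/* Tokens from WGL_ARB_pixel_format and WGL_ARB_multisample (wglext.h). */
#define WGL_DRAW_TO_WINDOW_ARB 0x2001
#define WGL_SUPPORT_OPENGL_ARB 0x2010
#define WGL_DOUBLE_BUFFER_ARB  0x2011
#define WGL_COLOR_BITS_ARB     0x2014
#define WGL_DEPTH_BITS_ARB     0x2022
#define WGL_SAMPLE_BUFFERS_ARB 0x2041
#define WGL_SAMPLES_ARB        0x2042

/* Zero-terminated name/value pairs requesting a 4x multisampled format. */
static const int pixel_attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, 1,
    WGL_SUPPORT_OPENGL_ARB, 1,
    WGL_DOUBLE_BUFFER_ARB,  1,
    WGL_COLOR_BITS_ARB,     24,
    WGL_DEPTH_BITS_ARB,     24,
    WGL_SAMPLE_BUFFERS_ARB, 1,   /* enable multisampling */
    WGL_SAMPLES_ARB,        4,   /* 4x MSAA */
    0                            /* terminator */
};
```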

OpenGL equivalent of GDI's HatchBrush or PatternBrush?

I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL.
I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush.
More specifically, I want to be able to specify a small monochrome pixel pattern and a color, and whenever I draw a polygon (or whatever), have it automatically tiled with that pattern instead of drawn solid: not translated, rotated, skewed, or stretched, with the "on" bits of the pattern shown in the specified color and the "off" bits leaving whatever had previously been drawn underneath untouched.
Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"?
I am guessing that "textures" might accomplish what I want, but it's not clear to me (I'm totally new to this, and the documentation I've found is not entirely obvious); it seems as though textures might skew, translate, or stretch the pattern based on the vertices of the polygon, whereas I want the pattern tiled.
Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn?
Thanks in advance for any help.
If I understood correctly, you're looking for glPolygonStipple() or glLineStipple().
glPolygonStipple is quite limited, as it allows only a 32x32 pattern, but it should work like PatternBrush. I have no idea how to call it from VB, though.
First of all, are you sure it's the drawing operation itself that is the bottleneck here? Visual Basic is known for being very slow (especially if your program is compiled to intermediate VM code, which is the default as far as I recall; be sure to check the option to compile to native code!), and if your code is the bottleneck, then OpenGL won't help you much. You would need to rewrite it in some other language, probably C or C++, though any .NET language should also do.
OpenGL contains functions that let you draw stippled lines and polygons, but you shouldn't use them. They have been deprecated for a long time and were removed from core OpenGL in version 3.1 of the spec, and for a reason: these functions don't map well to the modern rendering paradigm and are not supported by modern graphics hardware, meaning you will most likely get a slow software fallback if you use them.
The way to go is to use a small texture as a mask, and tile it over the drawn polygons. The texture will get stretched or compressed to match the texture coordinates you specify with the vertices. You have to set the wrapping mode to GL_REPEAT for both texture coordinates, and calculate the right coordinates for each vertex so that the texture appears at its original size, repeated the right amount of times.
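The coordinate calculation mentioned above is simple: one texture repeat per pattern-size pixels of window space, so the pattern appears at 1:1 scale regardless of the polygon's shape. A C sketch (the helper name is mine):

```c
#include <assert.h>

/* Map a window-space coordinate to a GL_REPEAT texture coordinate so a
 * pattern of pattern_size pixels tiles at its native size: the texture
 * repeats once every pattern_size pixels, independent of the polygon's
 * vertices, which keeps the pattern screen-aligned rather than stretched. */
float tile_texcoord(float window_coord, float pattern_size)
{
    return window_coord / pattern_size;
}
```

For a quad with corners at (x0,y0) and (x1,y1) you would emit tile_texcoord(x, 8.0f) and tile_texcoord(y, 8.0f) for an 8x8 pattern, for example.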
You could also use the stencil buffer as you described, but... how would you fill that buffer with the pattern, and do it fast? You would need a texture anyway. Remember that you need to clear the stencil buffer every frame before you start drawing; not doing so can cost a massive performance hit (how massive depends on the graphics hardware and driver version).
It's also possible to achieve the desired effect using a fragment shader, but learning shaders for that would be overkill for an OpenGL beginner like yourself :-).
Ah, I think I've found it! I can make a stencil across the entire viewport in the shape of the pattern I want (or its mask, I guess), and then enable that stencil when I want to draw with that pattern.
You could just use a texture. Put the pattern in as an image, turn on texture repeating, and you are good to go.
Figured this out a year or two ago.

OpenGL: Enabling multisampling draws messed up edges for polygons at high zoom levels

When I'm using the following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 6);
and then enable multisampling, I notice that my program no longer respects the max mip level.
Edit: It renders the last mip levels as well; that is the problem. I don't want them rendered.
Edit 3:
I tested and confirmed that it doesn't forget the mip limits at all, so it does follow my GL_TEXTURE_MAX_LEVEL setting. ...So the problem isn't mipmap-related, I guess...
Edit 2: Screenshots. This is the world map zoomed out a lot, using a low angle to show the effect in the worst possible way. There is also a water plane rendered under the map, so there is no way the black pixels could come from anywhere other than the map textures:
Screenshot: http://img511.imageshack.us/img511/6635/multisamplingtexturelim.png
Edit 4: All those pics should look like the top-right corner pic (just with smoother edges, depending on multisampling). But apparently there's something horribly wrong in my code. I have to use mipmaps; the mipmaps aren't the problem, they work perfectly.
What am I doing wrong, and how can I fix this?
Ok. So the problem was not TEXTURE_MAX_LEVEL after all. Funny how a simple test helped figure that out.
I had 2 theories that were about the LOD being picked differently, and both of those seem to be disproved by the solid color test.
Onto a third theory, then. If I understand your scene correctly, you have a model using a texture atlas, and what we're observing is that some polygons that should fetch from a specific item of the atlas actually fetch from a different one. Is that right?
This can be explained by the fact that a multisampled fragment usually gets sampled at the middle of the pixel, even when that center is not inside the triangle that generated the sample. See the bottom of this page for an illustration.
The usual way to get around that is called centroid sampling (this page has nice illustrations of the issue too). It forces the sampling point back inside the triangle.
Now the bad news: I'm not aware of any way to turn on centroid sampling outside of the programmable pipeline, and you're not using it. Do you want to switch, so you can get access to that feature?
Edit to add:
Also, not using texture atlases would be a way to work around this. The reason it is so visible is because you start fetching from another part of the atlas with the "out-of-triangle" sampling pattern.
Also check what you have set for the MIN filter:
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, ... );
Try the different settings (a list is here).
However, if you're dissatisfied with the results of gluBuild2DMipmaps I advise you to look at alternatives:
glGenerateMipmap/glGenerateMipmapEXT (yes, it works without FBOs)
SGIS_generate_mipmap extension (widely supported)
The latter especially is highly customizable. And, as was not mentioned, this extension is enabled by setting GL_GENERATE_MIPMAP to true. It is automatic, so you don't need to recompute the mipmaps yourself when the data changes.
You should enable multisampling through your application, not the Nvidia control panel, if you want your rendering to always have it. That might even fix your issue.
As for the GL_TEXTURE_MAX_LEVEL setting being ignored when using control-panel multisampling: it sounds like a driver bug/feature. It's weird, because this setting can be used to limit what you actually load into the texture (the so-called texture-completeness criteria). What if you don't load the lowest mip levels at all? What gets rendered?
Edit: From the pictures you're showing, it does not really look like the setting is ignored. For one thing, MAX_LEVEL=0 is different from MAX_LEVEL=6. Now, considering the noise in your textures, I don't even see why your MAX_LEVEL=6 / MS-off picture looks that clean; it should be noisy, judging by the MAX_LEVEL=16 / MS-off picture. At this point, I'd advise putting distinct solid colors in each mip level of your diffuse texture (rather than relying on GL to build your mips), to see exactly which mip levels you're getting.