OpenGL colorize filters

I have an open GL quad that is rendered with a grayscale gradient. I would like to colorize it by applying a filter, something like:
If color = 0,0,0 then set color to 255,255,255
If color = 0,0,1 then set color to 255,255,254
etc, or some scheme I decide on.
Note: the reason I do this in grayscale is that the algorithm I'm using was designed to be drawn in grayscale and then colorized, since the colors may not be known immediately.
This would be similar to the java LookupOp http://download.oracle.com/javase/6/docs/api/java/awt/image/LookupOp.html.
Is there a way to do this in openGL?
thanks,
Jeff

You could interpret those colours from the grayscale gradient as 1-D texture coordinates and then specify your look-up table as a 1-D texture. This seems to fit your situation.
Alternatively, you can use a fragment program (shader) to perform arbitrary colour transformations on individual pixels.
Some more explanation: What is a texture? A texture, conceptually, is some kind of lookup function, with some additional logic on top.
A 2-D texture is something which for any pair of coordinates (s,t) or (x,y) in the range of [0,0] - [1,1] yields a specific colour (RGB, RGBA, L, whatever). Additionally it has some settings like wrapping or filtering.
Underneath, a texture is described by discrete data of a given "density" - perhaps 16x16, perhaps 256x512. The filtering process makes it possible to specify a colour for any real number between [0,0] and [1,1] (by mixing/interpolating neighbouring texels or just taking the nearest one).
A 1-D texture is identical, except that it maps just a single real value to a colour. Therefore, it can be thought of as a specific type of a "lookup table". You can consider it equivalent to a 2-D texture based on a 1xN image.
If you have a grayscale gradient, you may render it directly by treating the gradient value as a colour - or you can treat it as a texture coordinate (= an index into the lookup table) and use the 1-D texture for an arbitrary colour space transform.
You'd just need to translate the gradient values (from the 0..255 range) to the [0..1] range of texture indices. I'd recommend something like out = (in+0.5)/256.0. The 0.5 accounts for the half-texel offset, as we want to point to the middle of a texel (a value inside the texture), not to a corner between 2 values.
To get only the exact RGB values from the lookup table (= 1-D texture), also set the texture filters to GL_NEAREST.
BTW: Note that if you already need another texture to draw the gradient, then it gets a bit more complicated, because you'd want to treat the values received from one texture as coordinates for another texture - and I believe you'd need pixel shaders for that. Not that shaders are complicated or anything... they are extremely handy when you learn the basics.
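For illustration, here is a rough sketch of that shader-based route: the lookup table lives in a 1-D texture with GL_NEAREST filtering, and a small fragment shader turns the grayscale value into an index into it. The function, uniform and variable names (makeLookupTexture, grayTex, lut, colorizeFrag) are made up for the example, and it assumes a GLSL 1.20-era context with GL function loading already taken care of - adapt as needed.

    // Build a 256-entry RGB lookup table as a 1-D texture. GL_NEAREST ensures
    // only the exact palette colours are ever returned.
    #include <GL/gl.h>
    #include <vector>

    GLuint makeLookupTexture(const std::vector<unsigned char>& rgb256) // 256 * 3 bytes
    {
        GLuint lut = 0;
        glGenTextures(1, &lut);
        glBindTexture(GL_TEXTURE_1D, lut);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB8, 256, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, rgb256.data());
        return lut;
    }

    // Fragment shader: read the grayscale value, remap it to the middle of the
    // corresponding texel ((i + 0.5) / 256), and look the colour up in the LUT.
    const char* colorizeFrag = R"(
        #version 120
        uniform sampler2D grayTex;   // the grayscale gradient
        uniform sampler1D lut;       // the 256-entry lookup table
        void main()
        {
            float g = texture2D(grayTex, gl_TexCoord[0].st).r;  // 0.0 .. 1.0
            float index = (floor(g * 255.0) + 0.5) / 256.0;     // half-texel offset
            gl_FragColor = texture1D(lut, index);
        }
    )";

If the quad is drawn with a plain per-vertex grayscale value rather than a texture, you can skip the shader entirely and just feed that value to glTexCoord1f while the 1-D texture is bound.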

Related

Can I control minification in OpenGL?

Say I have a 512x512 pixels texture that I am displaying on 256x256 pixels on the screen.
In that case "the level-of-detail function used when sampling from the texture determines that the texture should be minified" according to my GL_TEXTURE_MIN_FILTER which is GL_LINEAR.
As a result 2x2 pixels will be minified to 1 pixel (distance weighted linear average).
Is there some way that I can control the minification?
Say I instead want 4x4 or 8x8 pixels to be minified to 1 pixel since I prefer a coarse or rasterized image ;-).
Alternatively is there some way I can achieve the same effect in the shader code?
If you want to precisely control the filtering, write an appropriate fragment shader, use the texelFetch function to access the unfiltered texture data, then implement the filter in the shader.
If you're going for a Taylor approximation of the filtering kernel, keep in mind that you can make use of bilinear mipmap filtering (i.e. GL_TEXTURE_MIN_FILTER := GL_LINEAR_MIPMAP_LINEAR) to implement the 0th and 1st order terms of the Taylor expansion.
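As a sketch of the texelFetch approach (GLSL 1.30+; the uniform and variable names here are made up), the fragment shader below averages a blockSize x blockSize group of texels itself, which gives the coarse 4x4- or 8x8-to-one-pixel look asked about:

    // Ignore the built-in filtering and average a blockSize x blockSize group
    // of texels per output pixel via texelFetch.
    const char* coarseFilterFrag = R"(
        #version 130
        uniform sampler2D tex;
        uniform int blockSize;        // e.g. 4 or 8
        in  vec2 vTexCoord;           // normalized [0,1] texture coordinates
        out vec4 fragColor;

        void main()
        {
            ivec2 size  = textureSize(tex, 0);
            // Snap the texel position to the corner of its block.
            ivec2 texel = ivec2(vTexCoord * vec2(size));
            ivec2 base  = (texel / blockSize) * blockSize;

            vec4 sum = vec4(0.0);
            for (int y = 0; y < blockSize; ++y)
                for (int x = 0; x < blockSize; ++x)
                    sum += texelFetch(tex, clamp(base + ivec2(x, y), ivec2(0), size - 1), 0);

            fragColor = sum / float(blockSize * blockSize);
        }
    )";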

Why do we need texture filtering in OpenGL?

When mapping a texture to geometry, we can choose the filtering method: GL_NEAREST or GL_LINEAR.
In the examples we have a texture coordinate surrounded by the texels, like so:
And it's explained how each algorithm chooses what color the fragment should be, for example by linearly interpolating all the neighboring texels based on their distance from the texture coordinate.
Isn't each texture coordinate essentially the fragment position, which is mapped to a pixel on screen? So how can these coordinates be smaller than the texels, which are essentially pixels and the same size as fragments?
A (2D) texture can be looked at as a function t(u, v), whose output is a "color" value. This is a pure function, so it will return the same value for the same u and v values. The value comes from a lookup table stored in memory, indexed by u and v, rather than through some kind of computation.
Texture "mapping" is the process whereby you associate a particular location on a surface with a particular location in the space of a texture. That is, you "map" a surface location to a location in a texture. As such, the inputs to the texture function t are often called "texture coordinates". Some surface locations may map to the same position on a texture, and some texture positions may not have surface locations mapped to them. It all depends on the mapping.
An actual texture image is not a smooth function; it is a discrete function. It has a value at the texel locations (0, 0), and another value at (1, 0), but the value of a texture at (0.5, 0) is undefined. In image space, u and v are integers.
Your picture of a zoomed in part of the texture is incorrect. There are no values "between" the texels, because "between the texels" is not possible. There is no number between 0 and 1 on an integer number line.
However, any useful mapping from a surface to the texture function is going to need to happen in a continuous space, not a discrete space. After all, it's unlikely that every fragment will land exactly on a location that maps to an exact integer within a texture. And especially in shader-based rendering, a shader can just invent a mapping arbitrarily. The "mapping" could be based on light directions (projective texturing), the elevation of a fragment relative to some surface, or anything a user might want. To a fragment shader, a texture is just a function t(u, v) which can be evaluated to produce a value.
So we really want that function to be in a continuous space.
The purpose of filtering is to create a continuous function t by inventing values in-between the discrete texels. This allows you to declare that u and v are floating-point values, rather than integers. We also get to normalize the texture coordinates, so that they're on the range [0, 1] rather than being based on the texture's size.
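To make "inventing values in-between" concrete, here is roughly what GL_LINEAR computes for a 2D texture, written out by hand with texelFetch (GLSL 1.30+). This is only an illustration of the idea, not how the hardware literally implements it; edge handling is simplified and the names are made up.

    // Manual bilinear filtering: blend the four texels around a continuous
    // coordinate, weighted by how close the sample point is to each of them.
    const char* manualBilinearFrag = R"(
        #version 130
        uniform sampler2D tex;
        in  vec2 vTexCoord;           // continuous, normalized [0,1] coordinates
        out vec4 fragColor;

        void main()
        {
            ivec2 isize = textureSize(tex, 0);
            // Move into texel space, with texel centers at integer + 0.5 positions.
            vec2 pos = vTexCoord * vec2(isize) - 0.5;
            // Lower-left texel of the 2x2 cell, kept inside the texture.
            ivec2 i0 = clamp(ivec2(floor(pos)), ivec2(0), isize - 2);
            vec2 f   = fract(pos);    // fractional position inside that cell

            vec4 c00 = texelFetch(tex, i0 + ivec2(0, 0), 0);
            vec4 c10 = texelFetch(tex, i0 + ivec2(1, 0), 0);
            vec4 c01 = texelFetch(tex, i0 + ivec2(0, 1), 0);
            vec4 c11 = texelFetch(tex, i0 + ivec2(1, 1), 0);

            // Weighted average of the four surrounding texels.
            fragColor = mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
        }
    )";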
Texture filtering does not decide what color the fragment should be. This is what the fragment shader does. However, the fragment shader may sample a texture at a given position to get a color. It may directly return that color or it can process it (e.g. add shading etc.)
Texture filtering happens at sampling. The texture coordinates are not necessarily perfect pixel positions. E.g., the texture could be the material of a 3D model that you show in a perspective view. Then a fragment may cover more than a single texel or it may cover less. Or it might not be aligned with the texture grid. In all cases you need some kind of filtering.
For applications that render a sprite at its original size without any deformation, you usually don't need filtering as you have a 1:1 mapping from screen pixels to texels.

What is, in simple terms, textureGrad()?

I read the Khronos wiki on this, but I don't really understand what it is saying. What exactly does textureGrad do?
I think it samples multiple mipmap levels and computes some color mixing using the explicit derivative vectors given to it, but I am not sure.
When you sample a texture, you need the specific texture coordinates to sample the texture data at. For sake of simplicity, I'm going to assume a 2D texture, so the texture coordinates are a 2D vector (s,t). (The explanation is analogous for other dimensionalities).
If you want to texture-map a triangle, one typically uses one of two strategies to get to the texture coordinates:
The texture coordinates are part of the model. Every vertex contains the 2D texture coordinates as a vertex attribute. During rasterization, those texture coordinates are interpolated across the primitive.
You specify a mathematical mapping. For example, you could define some function mapping the 3D object coordinates to some 2D texture coordinates. You can for example define some projection, and project the texture onto a surface, just like a real projector would project an image onto some real-world objects.
In either case, each fragment generated when rasterizing the primitive typically gets different texture coordinates, so each drawn pixel on the screen will get a different part of the texture.
The key point is this: each fragment has 2D pixel coordinates (x,y) as well as 2D texture coordinates (s,t), so we can basically interpret this relationship as a mathematical function:
(s,t) = T(x,y)
Since this is a vector function of the 2D pixel position vector (x,y), we can also build the partial derivatives along the x direction (to the right) and the y direction (upwards), which tell us the rate of change of the texture coordinates along those directions.
And the dTdx and dTdy in textureGrad are just that.
So what does the GPU need this for?
When you want to actually filter the texture (in contrast to simple point sampling), you need to know the pixel footprint in texture space. Each single fragment represents the area of one pixel on the screen, and you are going to use a single color value from the texture to represent the whole pixel (multisampling aside). The pixel footprint now represents the actual area the pixel would have in texture space. We could calculate it by interpolating the texcoords not for the pixel center, but for the 4 pixel corners. The resulting texcoords would form a trapezoid in texture space.
When you minify the texture, several texels are mapped to the same pixel (so the pixel footprint is large in texture space). When you magnify it, each pixel will represent only a fraction of the corresponding texel (so the footprint is quite small).
The texture footprint tells you:
if the texture is minified or magnified (GL has different filter settings for each case)
how many texels would be mapped to each pixel, so which mipmap level would be appropriate
how much anisotropy there is in the pixel footprint. Each pixel on the screen and each texel in texture space is basically a square, but the pixel footprint might significantly deviate from that, and can be much taller than wide or the other way around (especially in situations with high perspective distortion). Classic bilinear or trilinear texture filters always use a square filter footprint, but the anisotropic texture filter will use this information to actually generate a filter footprint which more closely matches that of the actual pixel footprint (to avoid mixing in texel data which shouldn't really belong to the pixel).
Instead of calculating the texture coordinates at all pixel corners, we are going to use the partial derivatives at the fragment center as an approximation for the pixel footprint.
The following diagram shows the geometric relationship:
This represents the footprint of four neighboring pixels (2x2) in texture space, so the uniform grid are the texels, and the 4 trapezoids represent the 4 pixel footprints.
Now calculating the actual derivatives would imply that we have some more or less explicit formula T(x,y) as described above. GPUs usually use another approximation:
they just look at the actual texcoords of the neighboring fragments (which are going to be calculated anyway) in each 2x2 pixel block, and approximate the footprint by finite differencing - just subtracting the actual texcoords of neighboring fragments from each other.
The result is shown as the dotted parallelogram in the diagram.
In hardware, this is implemented so that 2x2 pixel quads are always shaded in parallel in the same warp/wavefront/SIMD group. The GLSL derivative functions like dFdx and dFdy simply work by subtracting the actual values of the neighboring fragments. And the standard texture function just internally uses this mechanism on the texture coordinate argument. The textureGrad functions bypass that and allow you to specify your own values, which means you control what pixel footprint the GPU assumes when doing the actual filtering / mipmap level selection.
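A small sketch of that relationship (GLSL 1.30+, names made up): texture() implicitly uses the quad-based derivatives, while textureGrad() lets you substitute your own and thereby control the assumed footprint.

    const char* textureGradFrag = R"(
        #version 130
        uniform sampler2D tex;
        in  vec2 vTexCoord;
        out vec4 fragColor;

        void main()
        {
            // The derivatives texture() computes for you via the 2x2 pixel quad:
            vec2 dtdx = dFdx(vTexCoord);
            vec2 dtdy = dFdy(vTexCoord);

            // Same result as texture(tex, vTexCoord):
            vec4 normal = textureGrad(tex, vTexCoord, dtdx, dtdy);

            // Scaling the derivatives pretends the pixel footprint is twice as
            // large, so mipmap selection / filtering yields a blurrier result:
            vec4 blurred = textureGrad(tex, vTexCoord, 2.0 * dtdx, 2.0 * dtdy);

            fragColor = blurred;   // or 'normal' - both are shown for comparison
        }
    )";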

Texture Mapping without OpenGL

So I'm supposed to Texture Map a specific model I've loaded into a scene (with a Framebuffer and a Planar Pinhole Camera), however I'm not allowed to use OpenGL and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only function we can use).
Is anyone here able enough to give me a run-through on how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over a model, and assign each triangle "texture coordinates" (which I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
I currently have my program doing the following:
TL;DR:
1. What is model space interpolation/how do I do it?
2. What explicitly are texture coordinates?
3. How, on a high level (in layman's terms) do I texture map a model without using OpenGL.
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors redABC, greenABC and blueABC that are used to interpolate the colors across the triangle. They work one color component at a time, and each of the three vectors helps interpolate one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors: sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you just use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texel width of the texture, and ssit by the texel height, and use those coordinates to fetch the texel. Then just put that on the screen.
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.
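For illustration only, here is a generic sketch of that per-pixel lookup in a software rasterizer, using barycentric weights for the interpolation (the ABC-vector setup from the first answer is another way to obtain the same interpolated s,t values). All types and names are made up, and perspective-correct interpolation is left out for brevity.

    #include <algorithm>   // std::clamp (C++17)
    #include <cstdint>

    struct Texture {
        int width = 0, height = 0;
        const uint32_t* pixels = nullptr;            // row-major RGBA texels
    };

    // Nearest-texel lookup: map normalized [0,1] UVs to texel indices.
    uint32_t sampleNearest(const Texture& tex, float u, float v)
    {
        int x = std::clamp(static_cast<int>(u * tex.width),  0, tex.width  - 1);
        int y = std::clamp(static_cast<int>(v * tex.height), 0, tex.height - 1);
        return tex.pixels[y * tex.width + x];
    }

    // Interpolate the per-vertex (u,v) exactly like a colour, then fetch.
    uint32_t shadePixel(const Texture& tex,
                        float w0, float w1, float w2,   // barycentric weights, sum to 1
                        float u0, float v0,
                        float u1, float v1,
                        float u2, float v2)
    {
        float u = w0 * u0 + w1 * u1 + w2 * u2;
        float v = w0 * v0 + w1 * v1 + w2 * v2;
        return sampleNearest(tex, u, v);
    }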

How to generate linear RGB map from XYZ in order to colorize the mesh?

What I want is to apply colors to the mesh linearly. It could be from Vertex[0] to Vertex[n], or from Min.x to Max.x. Min.x should get a dark red color; then, as the vertex position increases, the red should transform into green, and with a further increase it should end with blue.
The second option could be: if I specify any random Vertex[any] and start the color mapping from that vertex, the color map should transform into RGB in that incremental order.
You can call it a color map from XYZ to RGB. How can I do this? Any idea?
First, how can I transform XYZ into RGB? Then, how can I make a color map, or gradient, or whatever you call it?
The attached figure can give you an idea. In this figure, the normalized vertex coordinates X=R, Y=G, Z=B are rendered as colors. It's just a test.
Give me an idea of how I can make it a perfectly linear map.
First off, you must choose a range to map color values from since the RGB components are limited whilst you may have a huge mesh. Consider using the bounding box of your mesh. Given p1 and p2 are the min and max corners of your box, respectively, such mapping could be:
color[i] = 255 * (position[i] - p1) / (p2 - p1);
You can either perform this mapping in your vertex shader or with the CPU to get an array for the color attribute of your vertices, which you may use directly with OpenGL with smooth interpolation. Note that smooth interpolation should be enabled by default. With OpenGL 2.1, use glShadeModel (GL_SMOOTH) to enable it.
If your mesh is UV mapped and you want your color mapping to get onto a texture, render your mesh using texture coordinates as position and then retrieve the rendered image using either glReadPixels() or FBOs.
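For the CPU route mentioned above, a minimal sketch (types and names are made up; it assumes a non-degenerate bounding box):

    #include <array>
    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // color = 255 * (position - p1) / (p2 - p1), applied per component, so
    // X drives red, Y drives green and Z drives blue across the bounding box.
    std::vector<std::array<std::uint8_t, 3>>
    buildColorMap(const std::vector<Vec3>& positions, Vec3 p1, Vec3 p2)
    {
        std::vector<std::array<std::uint8_t, 3>> colors;
        colors.reserve(positions.size());
        for (const Vec3& p : positions) {
            float r = 255.0f * (p.x - p1.x) / (p2.x - p1.x);
            float g = 255.0f * (p.y - p1.y) / (p2.y - p1.y);
            float b = 255.0f * (p.z - p1.z) / (p2.z - p1.z);
            colors.push_back({ static_cast<std::uint8_t>(r),
                               static_cast<std::uint8_t>(g),
                               static_cast<std::uint8_t>(b) });
        }
        return colors;   // upload as the per-vertex color attribute
    }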