Am I doing something wrong with this CG program? - c++

I am using Ogre3D as the graphics engine.
I create a mesh manually, which works fine; the UVs are correct and are set to represent grid coordinates (for this example the grid is 10 x 10).
I do nothing in the vertex program and have a very simple fragment program. I have included both programs plus the material file to explain.
My problem is that, even with filtering set to none, the colours don't come out the same as in my original image (this is just a test image I'm using, because I was having problems with creating the texture manually in Ogre). It turns out that the problem is not my Ogre code but more likely something to do with either the material file or the fragment/vertex programs.
I have also included a screenshot of the output on the left and the original image on the right. The fragment shader also draws a simple grid over the top so I could make sure that the UV coordinates were being passed across correctly, which they seem to be.
Any insight would be much appreciated, as I am really unsure what I'm doing wrong.
Material file:
// CG vertex shader definition
vertex_program PlainTexture_VS cg
{
    // Look in this source file for the shader code
    source GameObjStandard.cg
    // Use this function for the vertex shader
    entry_point main_plain_texture_vp
    // Compile the shader using the arbvp1 profile
    profiles arbvp1

    // This block saves us from manually setting parameters in code
    default_params
    {
        // Ogre will put the worldviewproj into our 'worldViewProj' parameter for us.
        param_named_auto worldViewProj worldviewproj_matrix
        // Note that 'worldViewProj' is a parameter in the Cg code.
    }
}

// CG pixel shader definition
fragment_program PlainTexture_PS cg
{
    // Look in this source file for the shader code
    source GameObjStandard.cg
    // Use this function for the pixel shader
    entry_point main_plain_texture_fp
    // Compile using the arbfp1 profile
    profiles arbfp1
}

material PlainTexture
{
    // Material has one technique
    technique
    {
        // This technique has one pass
        pass
        {
            // Make this pass use the vertex shader defined above
            vertex_program_ref PlainTexture_VS
            {
            }
            // Make this pass use the pixel shader defined above
            fragment_program_ref PlainTexture_PS
            {
            }
            texture_unit 0
            {
                filtering none
                // This pass will use this 2D texture as its input
                texture test.png 2d
            }
            texture_unit 1
            {
                texture textureatlas.png 2d
                tex_address_mode clamp
                filtering none
            }
        }
    }
}
Cg file:
void main_plain_texture_vp(
    // Vertex inputs
    float4 position  : POSITION,   // Vertex position in model space
    float2 texCoord0 : TEXCOORD0,  // Texture UV set 0
    // Outputs
    out float4 oPosition : POSITION,  // Transformed vertex position
    out float2 uv0       : TEXCOORD0, // UV0
    // Model-level inputs
    uniform float4x4 worldViewProj)
{
    // Calculate output position
    oPosition = mul(worldViewProj, position);
    // Simply copy the input vertex UV to the output
    uv0 = texCoord0;
}

void main_plain_texture_fp(
    // Pixel inputs
    float2 uv0 : TEXCOORD0,   // UV interpolated for current pixel
    // Outputs
    out float4 color : COLOR, // Output color we want to write
    // Model-level inputs
    uniform sampler2D Tex0 : TEXUNIT0,
    uniform sampler2D Tex1 : TEXUNIT1) // Textures we're going to use
{
    // Get the index position by truncating the uv coordinates
    float2 flooredIndexes = floor(uv0);

    if ((uv0.x > 0.9 && uv0.x < 1.1)
        || (uv0.x > 1.9 && uv0.x < 2.1)
        || (uv0.x > 2.9 && uv0.x < 3.1)
        || (uv0.x > 3.9 && uv0.x < 4.1)
        || (uv0.x > 4.9 && uv0.x < 5.1)
        || (uv0.x > 5.9 && uv0.x < 6.1)
        || (uv0.x > 6.9 && uv0.x < 7.1)
        || (uv0.x > 7.9 && uv0.x < 8.1)
        || (uv0.x > 8.9 && uv0.x < 9.1)) {
        float4 color1 = {1.0, 0, 0, 0};
        color = color1;
    } else if ((uv0.y > 0.9 && uv0.y < 1.1)
        || (uv0.y > 1.9 && uv0.y < 2.1)
        || (uv0.y > 2.9 && uv0.y < 3.1)
        || (uv0.y > 3.9 && uv0.y < 4.1)
        || (uv0.y > 4.9 && uv0.y < 5.1)
        || (uv0.y > 5.9 && uv0.y < 6.1)
        || (uv0.y > 6.9 && uv0.y < 7.1)
        || (uv0.y > 7.9 && uv0.y < 8.1)
        || (uv0.y > 8.9 && uv0.y < 9.1)) {
        float4 color1 = {1.0, 0, 0, 0};
        color = color1;
    } else {
        // Get the colour of the index texture Tex0 at this floored coordinate
        float4 indexColour = tex2D(Tex0, (1.0/10) * flooredIndexes);
        color = indexColour;
    }
}

OK, so it's been a while since I found the solution to my problems; unfortunately I've not been online, so I hope this helps anyone with similar issues.
When creating any texture, you should always make its size in texels 2^n x 2^m, where n and m are integers (i.e. a power-of-two width and height). This was my first mistake, although I did not realise it at the time.
The reason I had not spotted this was that my main texture atlas was based on this principle and was a 1024 x 1024 texture. What I had not taken into account was the size of the texture I was creating as the texture index. Since my map was 10 x 10, I was creating a 10 x 10 texture for the indexes; this was, I presume, being stretched behind the scenes (I'm not sure how it works in the backend) to either 16 x 16 or 8 x 8, blending the texels together as it went.
The first thing that gave me the clue was when I scaled my canvas in Photoshop and found that the blended colours it created were the same as the ones I was getting in my Ogre3D output.
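If you want to guard against this in code, rounding a requested size up to the next power of two is a tiny helper (a sketch in plain C++, not an Ogre API):

unsigned int nextPowerOfTwo(unsigned int n)
{
    // Round n up to the next power of two, e.g. 10 -> 16
    unsigned int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

This is why the index texture below is created at 16 x 16 rather than 10 x 10; the shader simply never addresses the unused texels.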
Anyway, moving on...
Once I had this figured out, I was able to create the texture in Ogre and pass it across as follows:
// Create the index texture (16 x 16: the 10 x 10 map rounded up to a power of two)
Ogre::TexturePtr indexTexture = Ogre::TextureManager::getSingleton().createManual(
    "indexTexture", "General", Ogre::TextureType::TEX_TYPE_2D,
    16, 16, 0, Ogre::PixelFormat::PF_BYTE_BGRA, Ogre::TU_DEFAULT);
Ogre::HardwarePixelBufferSharedPtr pixelBuffer = indexTexture->getBuffer();
pixelBuffer->lock(Ogre::HardwareBuffer::HBL_NORMAL);
const Ogre::PixelBox& pixelBox = pixelBuffer->getCurrentLock();
Ogre::uint8* pDest = static_cast<Ogre::uint8*>(pixelBox.data);
Ogre::uint8 counter = 0;
for (size_t j = 0; j < 16; j++) {
    for (size_t i = 0; i < 16; i++) {
        if (i == 8 || i == 7) {
            *pDest++ = 3; // B
            *pDest++ = 0; // G
            *pDest++ = 0; // R
            *pDest++ = 0; // A
        } else {
            *pDest++ = 1; // B
            *pDest++ = 0; // G
            *pDest++ = 0; // R
            *pDest++ = 0; // A
        }
        counter++;
    }
}
pixelBuffer->unlock();
So now I have a texture I can use as an index, with some values added in for testing; these values will eventually be populated at runtime by clicking on a tile.
To pass this texture across, I had to set it on the correct technique and pass of my material, as follows:
Ogre::MaterialPtr material = Ogre::MaterialPtr(Ogre::MaterialManager::getSingleton().getByName("PlainTexture"));
float mapSize = 16;
float tas = 2;
material->getTechnique(0)->getPass(0)->getFragmentProgramParameters()->setNamedConstant("mapSize",mapSize);
material->getTechnique(0)->getPass(0)->getFragmentProgramParameters()->setNamedConstant("tas",tas);
material->getTechnique(0)->getPass(0)->getTextureUnitState(0)->setTextureName("indexTexture");
This also passes two values: mapSize, the size of the map itself in tiles (assuming it's square), and tas, the texture atlas size (the number of different texture squares across the width of the atlas).
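For example, with the 2 x 2 atlas used at the end of this answer, tas = 2: each tile covers half of the atlas in both u and v, and the valid index values run from 0 to tas * tas - 1 = 3.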
To allow my material to understand what I just passed in I needed to modify my material file slightly as follows:
// CG pixel shader definition
fragment_program PlainTexture_PS cg
{
    source GameObjStandard.cg
    entry_point main_plain_texture_fp
    profiles arbfp1

    default_params
    {
        param_named tas float
        param_named mapSize float
    }
}
And my pass was redefined slightly too
pass
{
    // Make this pass use the vertex shader defined above
    vertex_program_ref PlainTexture_VS
    {
    }
    // Make this pass use the pixel shader defined above
    fragment_program_ref PlainTexture_PS
    {
    }
    texture_unit 0
    {
        filtering none
    }
    texture_unit 1
    {
        texture textureatlas.png 2d
        tex_address_mode clamp
        filtering anisotropic
    }
}
I then rewrote the Cg fragment program to take the changes I had made into account:
void main_plain_texture_fp(
    float2 uv0 : TEXCOORD0,    // UV interpolated for current pixel
    out float4 color : COLOR,  // Output color we want to write
    uniform float tas,         // Texture atlas size (tiles across the atlas)
    uniform float mapSize,     // Map size in tiles
    // Model-level inputs
    uniform sampler2D Tex0 : TEXUNIT0,  // Index texture
    uniform sampler2D Tex1 : TEXUNIT1)  // Texture atlas
{
    // Get the index position by truncating the uv coordinates
    float2 flooredIndexes = floor(uv0);
    // Get the colour of the index texture Tex0 at this floored coordinate;
    // the +0.5/mapSize offset samples the centre of a texel rather than its edge
    float4 indexColour = tex2D(Tex0, ((1.0/mapSize) * flooredIndexes) + (0.5/mapSize));
    // Calculate the uv offset required for the texture atlas (range 0 - 255);
    // the texture stores bytes, which the sampler returns as 0.0 - 1.0,
    // so multiplying by 255 recovers the original byte values
    float indexValue = (255 * indexColour.b) + (255 * indexColour.g) + (255 * indexColour.r);
    //float indexValue = (tas * tas) - indexValue0;
    if (indexValue < tas*tas) {
        float row = floor(indexValue/tas);
        float col = frac(indexValue/tas) * tas;
        float uvFraction = 1.0/tas;
        float uBase = col * uvFraction;
        float vBase = 1 - ((tas - row) * uvFraction);
        float uOffset = frac(uv0.x)/tas;
        float vOffset = frac(uv0.y)/tas;
        float uNew = uBase + uOffset;
        float vNew = vBase + vOffset;
        float2 uvNew = {uNew, vNew};
        if (frac(uv0.x) > 0.99 || frac(uv0.x) < 0.01) {
            float4 color1 = {1, 1, 1, 0};
            color = (0.2*color1) + (0.8*tex2D(Tex1, uvNew));
        } else if (frac(uv0.y) > 0.99 || frac(uv0.y) < 0.01) {
            float4 color1 = {1, 1, 1, 0};
            color = (0.2*color1) + (0.8*tex2D(Tex1, uvNew));
        } else {
            color = tex2D(Tex1, uvNew);
        }
    } else {
        float4 color2 = {0.0, 0, 0, 0};
        color = color2;
    }
}
This calculates the correct texel needed from the texture atlas, and it also overlays a faint grid by combining 80% texel colour with 20% white.
If the texture atlas does not contain the index specified by the index texture, it just outputs black. (This is mainly so it is very easy to spot.)
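As a worked example of the lookup maths above: with tas = 2, an index value of 3 gives row = floor(3/2) = 1 and col = frac(3/2) * 2 = 1, so uvFraction = 0.5, uBase = 1 * 0.5 = 0.5 and vBase = 1 - ((2 - 1) * 0.5) = 0.5; the lookup lands in the quarter of the atlas starting at (0.5, 0.5).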
Below is an example of the output using a 2 x 2 texture atlas.

Related

What could be the source of this aliasing?

I am manually raytracing a 3D image. I have noticed that the farther from the 3D image I am, the worse the aliasing.
This 3D image is basically a voxelized representation of the Stanford dragon. I have placed the volume centered at the origin (the diagonals cross at (0,0,0)), meaning that one corner is at (-cube_dim, -cube_dim, -cube_dim) and the other is at (cube_dim, cube_dim, cube_dim).
At close range the image is fine:
(The minor "aliasing" you see here is due to me doing a ray marching algorithm; this is not the aliasing I am worried about, this was expected and acceptable.)
However, if we get far away enough, some aliasing starts to be seen:
(This is a completely different kind of aliasing.)
The fragment shader used to generate the image is this:
#version 430

in vec2 f_coord;
out vec4 fragment_color;

uniform layout(binding=0, rgba8) image3D volume_data;
uniform vec3 camera_pos;
uniform float aspect_ratio;
uniform float cube_dim;
uniform int voxel_resolution;

#define EPSILON 0.01

// Check whether the position is inside of the specified box
bool inBoxBounds(vec3 corner, float size, vec3 position)
{
    bool inside = true;
    // Put the position in the coordinate frame of the box
    position -= corner;
    // The point is inside only if all of its components are inside
    for(int i = 0; i < 3; i++)
    {
        inside = inside && (position[i] > -EPSILON);
        inside = inside && (position[i] < size + EPSILON);
    }
    return inside;
}

// Calculate the distance to the intersection with a box, or infinity if the box cannot be hit
float boxIntersection(vec3 origin, vec3 dir, vec3 corner0, float size)
{
    //dir = normalize(dir);
    // Calculate the opposite corner
    vec3 corner1 = corner0 + vec3(size, size, size);
    // Set the ray plane intersections
    float coeffs[6];
    coeffs[0] = (corner0.x - origin.x)/(dir.x);
    coeffs[1] = (corner0.y - origin.y)/(dir.y);
    coeffs[2] = (corner0.z - origin.z)/(dir.z);
    coeffs[3] = (corner1.x - origin.x)/(dir.x);
    coeffs[4] = (corner1.y - origin.y)/(dir.y);
    coeffs[5] = (corner1.z - origin.z)/(dir.z);

    float t = 1.f/0.f;
    // Check for the smallest valid intersection distance.
    // We allow negative values up to -size to create correct sorting if the origin is
    // inside the box.
    for(uint i = 0; i < 6; i++)
        t = (coeffs[i] >= 0) && inBoxBounds(corner0, size, origin + dir*coeffs[i]) ?
            min(coeffs[i], t) : t;

    return t;
}

void main()
{
    float v_size = cube_dim/voxel_resolution;
    vec3 r = vec3(f_coord.xy, 1.f/tan(radians(40)));
    r.y /= aspect_ratio;
    vec3 dir = normalize(r);//*v_size*0.5;
    r += camera_pos;

    float t = boxIntersection(r, dir, -vec3(cube_dim), cube_dim*2);
    if(isinf(t))
        discard;
    if(!((r.x >= -cube_dim) && (r.x <= cube_dim) && (r.y >= -cube_dim) &&
         (r.y <= cube_dim) && (r.z >= -cube_dim) && (r.z <= cube_dim)))
        r += dir*t;

    vec4 color = vec4(0);
    int c = 0;
    while((r.x >= -cube_dim) && (r.x <= cube_dim) && (r.y >= -cube_dim) &&
          (r.y <= cube_dim) && (r.z >= -cube_dim) && (r.z <= cube_dim))
    {
        r += dir*v_size*0.5;
        vec4 val = imageLoad(volume_data, ivec3((r*0.5/cube_dim + vec3(0.5))*(voxel_resolution - 1)));
        if(val.w > 0)
        {
            color = val;
            break;
        }
        c++;
    }
    fragment_color = color;
}
Understanding the algorithm
First, we create a ray based on the screen coordinates (we use the standard raytracing technique, where the focal length is 1/tan(angle)).
We then start the ray at the camera's current position.
We check the intersection of the ray with the box containing our object (we basically assume that our 3D texture is a big cube in the scene and check whether we hit it).
If we don't hit it, we discard the fragment. If we do hit it and we're outside, we move along the ray until we are at the surface of the box. If we hit it and are inside, we stay where we are.
At this point we are guaranteed that the position of our ray is inside the box.
Now we move by small segments along the ray until we either find a non-zero value or hit the end of the box.
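As a worked example of that focal length: the shader above uses radians(40), so the focal length is 1/tan(40°) ≈ 1.19, which corresponds to an 80° vertical field of view before the aspect-ratio correction is applied.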

GLSL texture access synchronization, OpenCL vs GLSL image processing

This might be a trivial question.
I am curious about how GLSL would synchronize access to texture data in a fragment shader.
Say I have a code like below in a fragment shader.
void main() {
    vec3 texCoord = in_texCoord;
    vec4 out_voxel_intensity = texture(image, vec3(texCoord.x, texCoord.y, texCoord.z));
    out_voxel = float(out_voxel_intensity);
    if(out_voxel <= threshold)
    {
        out_voxel = 0.0;
        return;
    }
    for(int i = -int(kernalSize); i <= int(kernalSize); ++i)
        for(int j = -int(kernalSize); j <= int(kernalSize); ++j)
            for(int k = -int(kernalSize); k <= int(kernalSize); ++k)
            {
                float x_o = texCoord.x + i / (imageSize.x);
                float y_o = texCoord.y + j / (imageSize.y);
                float z_o = texCoord.z + k / (imageSize.z);
                if(x_o < 0.0 || x_o > 1.0
                   || y_o < 0.0 || y_o > 1.0
                   || z_o < 0.0 || z_o > 1.0)
                    continue;
                if(float(texture(image, vec3(x_o, y_o, z_o))) <= threshold)
                {
                    out_voxel = 0.0;
                    return;
                }
            }
}
As the code above accesses not only the current texture coordinate but also the values around it within the specified kernel size, how does GLSL ensure that no other parallel invocation accesses the same texture coordinates?
With respect to that: does the code above perform efficiently in a fragment shader, given that it accesses neighbouring texture data, or would OpenCL be better?
Thanks

vtk6.1 shaders in/attribute variable

I have a vtkPolyData filled with points and cells that I want to draw on the screen. My polydata represents brain fibers (list of lines in 3D). A cell is a fiber. It's working, but I need to add colors between all points. We decided to color the polydata using a shader because there will be a lot of coloring methods. My vertex shader is:
vtkShader2 *shader = vtkShader2::New();
shader->SetType(VTK_SHADER_TYPE_VERTEX);
shader->SetSourceCode(R"VertexShader(
    #version 120
    attribute vec3 next_point;
    varying vec3 vColor; // Pass to fragment shader

    void main() {
        float r = gl_Vertex.x - next_point.x;
        float g = gl_Vertex.y - next_point.y;
        float b = gl_Vertex.z - next_point.z;
        if (r < 0.0) { r *= -1.0; }
        if (g < 0.0) { g *= -1.0; }
        if (b < 0.0) { b *= -1.0; }
        const float norm = 1.0 / sqrt(r*r + g*g + b*b);
        vColor = vec3(r * norm, g * norm, b * norm);
        gl_Position = ftransform();
    }
)VertexShader");
shader->SetContext(shader_program->GetContext());
shader_program->GetShaders()->AddItem(shader);
The goal here is, for each point, get the next point to calculate the color of the line between them. The problem is that I can't find a way to set the value of "next_point". I'm pretty sure it's always filled with 0.0 because the output image is red, blue and green on the sides.
I tried using vtkProperty::AddShaderVariable() but I never saw any change and the method's documentation hints about a "uniform variable" so it's probably not the right way.
// Split into 3 because I'm not sure how to pass a vtkPoints object to AddShaderVariable
fibersActor->GetProperty()->AddShaderVariable("next_x", nb_points, next_x);
fibersActor->GetProperty()->AddShaderVariable("next_y", nb_points, next_y);
fibersActor->GetProperty()->AddShaderVariable("next_z", nb_points, next_z);
I also tried using a vtkFloatArray filled with my points, then setting it as a data array.
vtkFloatArray *next_point = vtkFloatArray::New();
next_point->SetName("next_point");
next_point->SetNumberOfComponents(3);
next_point->Resize(nb_points);
// Fill next_point ...
polydata->GetPointData()->AddArray(next_point);
// Tried the vtkAssignAttribute class. Did nothing.
tl;dr Can you please tell me how to pass a list of points into a GLSL attribute variable? Thanks for your time.

How do I use texture-mapping in a simple ray tracer?

I am attempting to add features to a ray tracer in C++. Namely, I am trying to add texture mapping to the spheres. For simplicity, I am using an array to store the texture data. I obtained the texture data by using a hex editor and copying the correct byte values into an array in my code. This was just for my testing purposes. When the values of this array correspond to an image that is simply red, it appears to work close to what is expected except there is no shading.
first image http://dl.dropbox.com/u/367232/Texture.jpg
The bottom right of the image shows what a correct sphere should look like; that sphere was coloured with a single set colour, not a texture map.
Another problem is that when the texture map is anything other than a single colour, it turns white. My test image is a picture of water, and when it maps, it shows only one ring of bluish pixels surrounding the white colour.
bmp http://dl.dropbox.com/u/367232/vPoolWater.bmp
When this is done, it simply appears as this:
second image http://dl.dropbox.com/u/367232/texture2.jpg
Here are a few code snippets:
Color getColor(const Object *object, const Ray *ray, float *t)
{
    if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE) {
        float distance = *t;
        Point pnt = ray->origin + ray->direction * distance;
        Point oc = object->center;
        Vector ve = Point(oc.x, oc.y, oc.z+1) - oc;
        Normalize(&ve);
        Vector vn = Point(oc.x, oc.y+1, oc.z) - oc;
        Normalize(&vn);
        Vector vp = pnt - oc;
        Normalize(&vp);
        double phi = acos(-vn.dot(vp));
        float v = phi / M_PI;
        float u;
        float num1 = (float)acos(vp.dot(ve));
        float num = (num1 / (float)sin(phi));
        float theta = num / (float)(2 * M_PI);
        if (theta < 0 || theta == NAN) { theta = 0; }
        if (vn.cross(ve).dot(vp) > 0) {
            u = theta;
        }
        else {
            u = 1 - theta;
        }
        int x = (u * IMAGE_WIDTH) - 1;
        int y = (v * IMAGE_WIDTH) - 1;
        int p = (y * IMAGE_WIDTH + x) * 3;
        return Color(TEXT_DATA[p+2], TEXT_DATA[p+1], TEXT_DATA[p]);
    }
    else {
        return object->color;
    }
}
I call the colour code here in Trace:
if (object->materialType == MATTE)
    return getColor(object, ray, &t);

Ray shadowRay;
int isInShadow = 0;
shadowRay.origin.x = pHit.x + nHit.x * bias;
shadowRay.origin.y = pHit.y + nHit.y * bias;
shadowRay.origin.z = pHit.z + nHit.z * bias;
shadowRay.direction = light->object->center - pHit;
float len = shadowRay.direction.length();
Normalize(&shadowRay.direction);
float LdotN = shadowRay.direction.dot(nHit);
if (LdotN < 0)
    return 0;
Color lightColor = light->object->color;
for (int k = 0; k < numObjects; k++) {
    if (Intersect(objects[k], &shadowRay, &t) && !objects[k]->isLight) {
        if (objects[k]->materialType == GLASS)
            lightColor *= getColor(objects[k], &shadowRay, &t); // attenuate light color by glass color
        else
            isInShadow = 1;
        break;
    }
}
lightColor *= 1.f/(len*len);
return (isInShadow) ? 0 : getColor(object, &shadowRay, &t) * lightColor * LdotN;
}
I left out the rest of the code so as not to bog down the post, but it can be seen here. Any help is greatly appreciated. The only portion not included is where I define the texture data, which, as I said, is simply taken straight from a bitmap file of the above image.
Thanks.
It could be that the texture is just washed out because the light is so bright and so close. Notice how in the solid red case, there doesn't seem to be any gradation around the sphere. The red looks like it's saturated.
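For instance, with the 1.f/(len*len) falloff in the Trace snippet above, any light closer than len = 1 scales the texel colour by a factor greater than one; a texel of (0.4, 0.5, 0.9) scaled by 4 pushes every channel past 1.0, which clips to white.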
Your u,v mapping looks right, but there could be a mistake there. I'd add some assert statements to make sure u and v are really between 0 and 1 and that the p index into your TEXT_DATA array is also within range.
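Something along these lines would do it (a minimal sketch; requires <cassert>, and TEXT_DATA_SIZE is a hypothetical constant standing in for your array's real length):

assert(u >= 0.0f && u <= 1.0f);
assert(v >= 0.0f && v <= 1.0f);
assert(p >= 0 && p + 2 < TEXT_DATA_SIZE); // p indexes three bytes (BGR)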
If you're debugging your textures, you should use a constant material whose color is determined only by the texture and not the lights. That way you can make sure you are correctly mapping your texture to your primitive and filtering it properly before doing any lighting on it. Then you know that part isn't the problem.
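One minimal way to sketch that (reusing the poster's getColor call and material types purely for illustration) is to return the texture lookup before any lighting is applied in Trace, mirroring the existing MATTE early-out:

// Debug only: show the raw texture lookup, skipping shadows, falloff and LdotN
if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE)
    return getColor(object, ray, &t);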

how to implement grayscale rendering in OpenGL?

When rendering a scene of textured polygons, I'd like to be able to switch between rendering in the original colors and a "grayscale" mode. I've been trying to achieve this using blending and color matrix operations; none of it worked (with blending I couldn't find a glBlendFunc() that achieved anything remotely resembling what I wanted, and color matrix operations ...are discussed here).
A solution that comes to mind (but also is rather expensive) is to capture the screen every frame and convert the resulting texture to a grayscale one and display that instead... (Where I said grayscale I actually meant anything with a low saturation, but I'm guessing for most of the possible solutions it won't differ all that much from grayscale).
What other options do I have?
The default OpenGL framebuffer uses the RGB colour-space, which doesn't store an explicit saturation. You need an approach for extracting the saturation, modifying it, and changing it back again.
My previous suggestion, which simply used the RGB vector length to represent luminance, was incorrect as it didn't take scaling into account; I apologize.
Credit for the new short snippet goes to the regular user "RTFM_FTW" from ##opengl and ##opengl3 on FreeNode/IRC. It lets you modify the saturation directly without computing the costly RGB->HSV->RGB conversion, which is exactly what you want. Though the HSV code is inferior with respect to your question, I've let it stay.
void main(void)
{
    vec3 R0 = texture2DRect(S, gl_TexCoord[0].st).rgb;
    gl_FragColor = vec4(mix(vec3(dot(R0, vec3(0.2125, 0.7154, 0.0721))),
                            R0, T),
                        gl_Color.a);
}
If you want more control than just the saturation, you need to convert to HSL or HSV colour-space. As shown below by using a GLSL fragment shader.
Read the OpenGL 3.0 and GLSL 1.30 specification available on http://www.opengl.org/registry to learn how to use GLSL v1.30 functionality.
#version 130

#define RED 0
#define GREEN 1
#define BLUE 2

in vec4 vertexIn;
in vec4 colorIn;
in vec2 tcoordIn;
out vec4 pixel;

uniform sampler2D tex; // was 'Sampler2D tex;' -- the uniform qualifier and lowercase type are required

vec4 texel;
const float epsilon = 1e-6;

// GLSL has no built-in swap(); the original listing was missing this helper
void swap(inout int a, inout int b)
{
    int tmp = a;
    a = b;
    b = tmp;
}

vec3 RGBtoHSV(vec3 color)
{
    /* hue, saturation and value are all in the range [0,1> here, as opposed to their
       normal ranges of: hue: [0,360>, sat: [0, 100] and value: [0, 256> */
    int sortindex[3] = int[3](RED, GREEN, BLUE); // array constructor; brace initialisers are not GLSL 1.30
    float rgbArr[3] = float[3](color.r, color.g, color.b);
    float hue, saturation, value, diff;
    float minCol, maxCol;
    int minIndex, maxIndex;

    if(color.g < color.r)
        swap(sortindex[0], sortindex[1]);
    if(color.b < color.g)
        swap(sortindex[1], sortindex[2]);
    if(color.r < color.b)
        swap(sortindex[2], sortindex[0]);

    minIndex = sortindex[0];
    maxIndex = sortindex[2];
    minCol = rgbArr[minIndex];
    maxCol = rgbArr[maxIndex];
    diff = maxCol - minCol;

    /* Hue */
    if(diff < epsilon){
        hue = 0.0;
    }
    else if(maxIndex == RED){
        hue = ((1.0/6.0) * ((color.g - color.b) / diff)) + 1.0;
        hue = fract(hue);
    }
    else if(maxIndex == GREEN){
        hue = ((1.0/6.0) * ((color.b - color.r) / diff)) + (1.0/3.0);
    }
    else if(maxIndex == BLUE){
        hue = ((1.0/6.0) * ((color.r - color.g) / diff)) + (2.0/3.0);
    }

    /* Saturation */
    if(maxCol < epsilon)
        saturation = 0.0;
    else
        saturation = (maxCol - minCol) / maxCol;

    /* Value */
    value = maxCol;

    return vec3(hue, saturation, value);
}

vec3 HSVtoRGB(vec3 color)
{
    float f, p, q, t, hueRound;
    int hueIndex;
    float hue, saturation, value;
    vec3 result;

    /* just for clarity */
    hue = color.r;
    saturation = color.g;
    value = color.b;

    hueRound = floor(hue * 6.0);
    hueIndex = int(hueRound) % 6;
    f = (hue * 6.0) - hueRound;
    p = value * (1.0 - saturation);
    q = value * (1.0 - f*saturation);
    t = value * (1.0 - (1.0 - f)*saturation);

    switch(hueIndex)
    {
        case 0:
            result = vec3(value, t, p);
            break;
        case 1:
            result = vec3(q, value, p);
            break;
        case 2:
            result = vec3(p, value, t);
            break;
        case 3:
            result = vec3(p, q, value);
            break;
        case 4:
            result = vec3(t, p, value);
            break;
        default:
            result = vec3(value, p, q);
            break;
    }
    return result;
}

void main(void)
{
    vec4 srcColor;
    vec3 hsvColor;
    vec3 rgbColor;

    texel = texture(tex, tcoordIn); // was 'Texture2D(...)', which is not a GLSL function
    srcColor = texel * colorIn;
    hsvColor = RGBtoHSV(srcColor.rgb);
    /* You can do further changes here, if you want. */
    hsvColor.g = 0.0; /* Set saturation to zero */
    rgbColor = HSVtoRGB(hsvColor);
    pixel = vec4(rgbColor.r, rgbColor.g, rgbColor.b, srcColor.a);
}
If you're working against a modern-enough OpenGL, I would say pixel shaders are a very suitable solution here: either by hooking into each polygon's shading as it renders, or by doing a single full-screen quad in a second pass that just reads each pixel, converts it to grayscale, and writes it back. Unless your resolution, graphics hardware, and target framerate are somehow extreme, that should be doable these days in most cases.
For most desktops, render-to-texture isn't that expensive anymore; Compiz, Aero, etc., and effects like bloom or depth of field seen in recent titles all depend on it.
Actually, you don't convert the screen texture per se to grayscale; you would want to draw a screen-sized quad with the texture and a fragment shader transforming the values to grayscale.
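A minimal sketch of that second pass (assuming the scene was first rendered into a texture bound as sceneTex, and that a trivial vertex shader passes the quad's texture coordinate through as tcoord; the Rec. 709 luma weights match the snippet earlier in this thread):

#version 130
uniform sampler2D sceneTex; // assumed: the colour attachment of the first pass
in vec2 tcoord;
out vec4 pixel;

void main(void)
{
    vec3 c = texture(sceneTex, tcoord).rgb;
    // A weighted sum of the channels approximates perceived luminance
    pixel = vec4(vec3(dot(c, vec3(0.2126, 0.7152, 0.0722))), 1.0);
}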
Another option is to have two sets of fragment shaders for your triangles: one that just copies the gl_FrontColor attribute as the fixed-function pipeline would, and another that writes grayscale values to the screen buffer.
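The pass-through variant is a one-liner in old-style GLSL, matching the fixed-function behaviour described:

void main(void) { gl_FragColor = gl_Color; }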
A third option might be indexed colour modes, if you set up a grayscale palette; but that mode is probably deprecated and poorly supported by now, plus you lose a lot of functionality like blending, if I remember correctly.