loading texture from bitmap - c++

It appears that I am loading the image wrong. The image comes out all scrambled. What am I doing wrong here?
int DrawGLScene(GLvoid) // Here's Where We Do All The Drawing
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
    glLoadIdentity(); // Reset The Current Modelview Matrix
    glEnable(GL_TEXTURE_2D);
    GLuint texture = 0;
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glBindTexture(GL_TEXTURE_2D, texture);
    FIBITMAP* bitmap = FreeImage_Load(FreeImage_GetFileType("C:/Untitled.bmp", 0), "C:/Untitled.bmp", BMP_DEFAULT);
    FIBITMAP* pImage = FreeImage_ConvertTo32Bits(bitmap);
    int nWidth = FreeImage_GetWidth(pImage);
    int nHeight = FreeImage_GetHeight(pImage);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, nWidth, nHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));
    FreeImage_Unload(pImage);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, 1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, -1.0f);
    glEnd();
    RECT desktop;
    const HWND hDesktop = GetDesktopWindow();
    GetWindowRect(hDesktop, &desktop);
    long horizontal = desktop.right;
    long vertical = desktop.bottom;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glDisable(GL_TEXTURE_2D);
    return TRUE; // Keep Going
}
What I see when the program runs
What I expect to see when the program runs (the bitmap I am loading)

Part of the problem here seems to be a mismatch between the format of the pixel data you're getting from FreeImage and the format expected by glTexImage2D.
FreeImage_ConvertTo32Bits is going to return an image with 32-bit pixel data. I haven't used FreeImage but usually that would mean that each pixel would have 4 8-bit components representing red, green, blue and alpha (opacity). The format of the data you get from the BMP file depends on the file and may or may not have an alpha channel in it. However it seems safe to assume that FreeImage_ConvertTo32Bits will return data with alpha set to 255 if your original image is fully opaque (either because your BMP file format lacked an alpha channel or because you set it to opaque when you created the file).
This is all fine, but the problem comes with the call to glTexImage2D. This function can take pixel data in lots of different formats. The format it uses for interpreting the data is determined by the parameters type and format as described here. The value of format in your code is GL_RGB. This says that each pixel has three components describing its red, green and blue values, which appear in that order in the pixel data. The value of type (GL_UNSIGNED_BYTE) says each component is a single byte in size. The problem is that this description leaves out the alpha channel, so glTexImage2D reads each pixel from the wrong place. Also, the order of the colours that FreeImage produces is blue-green-red, as pointed out by Roger Rowland. The fix is simple: set the format to GL_BGRA to tell glTexImage2D about the alpha components and the order of the data:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, nWidth, nHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));
To see if this problem could explain the weird outputs, think about the big blocks of blue colour in your original image. According to GIMP this is largely blue with a smaller amount of green (R=0, G=162, B=232). So we have something like this:
B G R A B G R A B G R A ... meaning of bytes from FreeImage_GetBits
1 - 0 1 1 - 0 1 1 - 0 1 ... "values" of bytes (0 -> 0, - -> 162, 1 -> 232 or 255)
R G B R G B R G B R G B ... what glTexImage2D thinks the bytes mean
The texture pixel colours here are: orange, light yellow, bright cyan and a purply colour. These repeat because reading 4 texture pixels consumes 12 bytes, which is exactly 3 of the original pixels. This explains the pattern you see running over much of your output image. (Originally I had gone through this using the RGBA order, but the BGRA version is a better fit for the image - the difference is especially clear at the start of the rows.)
However I must admit I'm not sure what's going on at the beginning and how to relate it to the pink square! So there might be other problems here, but hopefully using GL_BGRA will be a step in the right direction.

You are making a 32 bpp image, so you need to have the correct texture declarations:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, nWidth, nHeight,
             0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));
Also, the FreeImage_GetBits function returns data which is normally aligned so that each row ends on a 4 byte boundary. So, if your image is 32 bpp (as yours is), or if your image width is a power of two, it will be ok; otherwise, you will need to set the unpack alignment appropriately.
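Putting the two answers together, a minimal (untested) sketch of a one-time texture upload might look like the following. The LoadTexture helper name is mine, and the glGenTextures call is an addition the original DrawGLScene never makes; the FreeImage calls are the same ones used in the question.

// Hypothetical helper: call this once at startup, not every frame.
// Assumes FreeImage and an OpenGL 1.2+ context are already set up.
GLuint LoadTexture(const char* path)
{
    FIBITMAP* bitmap = FreeImage_Load(FreeImage_GetFileType(path, 0), path, BMP_DEFAULT);
    FIBITMAP* pImage = FreeImage_ConvertTo32Bits(bitmap);
    FreeImage_Unload(bitmap);

    int nWidth  = FreeImage_GetWidth(pImage);
    int nHeight = FreeImage_GetHeight(pImage);

    GLuint texture = 0;
    glGenTextures(1, &texture);              // the original code never generates an id
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // 32 bpp rows are always 4-byte aligned, so the default unpack alignment is fine;
    // for tightly packed 24 bpp data you would need glPixelStorei(GL_UNPACK_ALIGNMENT, 1).
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, nWidth, nHeight,
                 0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));

    FreeImage_Unload(pImage);
    return texture;
}

DrawGLScene would then only bind the returned id and draw the quad, instead of reloading the bitmap every frame.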

Related

openGL Transparent pixels unexpectedly White

I noticed a big problem in my openGL texture rendering:
Supposedly transparent pixels are rendered as solid white. According to most solutions to similar issues discussed on Stack Overflow, I need to enable blending and set the proper blend functions, but I have already set the necessary GL state and am positive that textures are loaded correctly as far as I can tell. My texture load function is below:
GLboolean GL_texture_load(Texture* texture_id, const char* const path, const GLboolean alpha, const GLint param_edge_x, const GLint param_edge_y)
{
    // load image
    SDL_Surface* img = nullptr;
    if (!(img = IMG_Load(path))) {
        fprintf(stderr, "SDL_image could not be loaded %s, SDL_image Error: %s\n",
                path, IMG_GetError());
        return GL_FALSE;
    }
    glBindTexture(GL_TEXTURE_2D, *texture_id);
    // image assignment
    GLuint format = (alpha) ? GL_RGBA : GL_RGB;
    glTexImage2D(GL_TEXTURE_2D, 0, format, img->w, img->h, 0, format, GL_UNSIGNED_BYTE, img->pixels);
    // wrapping behavior
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, param_edge_x);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, param_edge_y);
    // texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);
    // free the surface
    SDL_FreeSurface(img);
    return GL_TRUE;
}
I use Adobe Photoshop to export "for the web" 24-bit + transparency .png files -- 72 pixels/inch, 6400 x 720. I am not sure how to set the color mode (8, 16, 32), but this might have something to do with the issue. I also use the default sRGB color profile, though I tried removing the color profile at one point; that didn't do anything.
No matter what, a png exported from Photoshop displays as solid white over transparent pixels.
If I create an image in e.g. Gimp, I have correct transparency. Importing the Adobe .psd or .png does not seem to work, and in any case I prefer to use Photoshop for editing purposes.
Has anyone experienced this issue? I imagine that Photoshop must add some strange metadata or I am not using the correct color modes--or both.
(I am concerned that this goes beyond the scope of Stack Overflow, but my issue intersects image editing and programming. Regardless, please let me know if this is not the right place.)
EDIT:
In both Photoshop and Gimp I created a test case-- 8 pixels (red, green, transparent, blue) clockwise.
In Photoshop, the transparent square is read as 1, 1, 1, 0 and displays as white.
In Gimp, the transparent square is 0, 0, 0, 0.
I also checked my fragment shader to see whether transparency works at all. Varying the alpha over time does increase transparency, so the alpha isn't outright ignored. For some reason 1, 1, 1, 0 counts as solid.
In addition, setting the background color to black with glClearColor seems to prevent the alpha from increasing transparency.
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
(Note that I render a few shapes on top of each other, but I've tried just rendering one for testing purposes.)
The best I can do is post more of my setup code (with bits omitted):
// vertex array and buffers setup
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
// I think that the blend function may be wrong (GL_ONE that is).
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDepthRange(0, 1);
glDepthFunc(GL_LEQUAL);

Texture tex0;
// same function as above, but generates one texture id for me
if (GL_texture_gen_and_load_1(&tex0, "./textures/sq2.png", GL_TRUE, GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE) == GL_FALSE) {
    return EXIT_FAILURE;
}

glUseProgram(shader_2d);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex0);
glUniform1i(glGetUniformLocation(shader_2d, "tex0"), 0);

bool active = true;
while (active) {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // uniforms, game logic, etc.
    glDrawElements(GL_TRIANGLES, tri_data.i_count, GL_UNSIGNED_INT, (void*)0);
}
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
If you want to get an identical result for an alpha channel of 0.0, independent of the red, green and blue channels, then you have to change the blend function. See glBlendFunc.
Use:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This causes the red, green and blue channels to be multiplied by the alpha channel.
If the alpha channel is 0.0, the resulting RGB color is (0, 0, 0).
If the alpha channel is 1.0, the RGB channels remain unchanged.
See also Alpha Compositing, OpenGL Blending and Premultiplied Alpha.
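For reference, a minimal sketch contrasting the two blend setups; the straight-alpha variant is the one recommended above, while the GL_ONE variant only makes sense for premultiplied data:

glEnable(GL_BLEND);

// Straight (non-premultiplied) alpha, the usual case for PNGs loaded with SDL_image:
// the source RGB is scaled by the source alpha at blend time.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// Premultiplied alpha: only correct if the RGB channels were already multiplied by
// alpha when the image was created or loaded. With straight-alpha data this makes a
// pixel of (1, 1, 1, 0) show up as solid white, which is exactly the symptom above.
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);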

Immediate mode texturing weird output

I'm trying to simply draw an image with OpenGL's immediate mode functions.
However, my output is kinda weird. I tried a few texture parameters - but I get the same result, sometimes with different colors. I can't quite figure out the problem, but I figure it's either the image loading or the texture setup. I'll cut to the chase; here is how I generate my texture (in a Texture2D class):
glBindTexture(GL_TEXTURE_2D, id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
A call to glTexImage2D follows, like this: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _width, _height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);, where data is an array of unsigned char set to 255, white. I then load a PNG file with CImg, like this:
CImg<unsigned char> image(_filename.c_str());
image.resize(m_Width, m_Height);
glBindTexture(GL_TEXTURE_2D, m_ID);
if (image.spectrum() == 4)
{
    unsigned char* data = image.data();
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGBA, GL_UNSIGNED_BYTE, data);
}
else if (image.spectrum() == 3)
{
    unsigned char* data = image.data();
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGB, GL_UNSIGNED_BYTE, data);
}
glBindTexture(GL_TEXTURE_2D, 0);
But when I try to draw the texture in immediate mode like this (origin is the upper left corner; note that the red rectangle around the texture is intentional):
void SimpleRenderer::drawTexture(Texture2D* _texture, float _x, float _y, float _width, float _height)
{
    _texture->setActive(0);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f);
        glVertex2f(_x, _y);
        glTexCoord2f(1.0f, 0.0f);
        glVertex2f(_x + _width, _y);
        glTexCoord2f(1.0f, 1.0f);
        glVertex2f(_x + _width, _y + _height);
        glTexCoord2f(0.0f, 1.0f);
        glVertex2f(_x, _y + _height);
    glEnd();

    glBindTexture(GL_TEXTURE_2D, 0);

    glColor3ub(strokeR, strokeG, strokeB);
    glBegin(GL_LINE_LOOP);
        glVertex2f((GLfloat)_x, (GLfloat)_y);
        glVertex2f((GLfloat)_x + _width, (GLfloat)_y);
        glVertex2f((GLfloat)_x + _width, (GLfloat)_y + _height);
        glVertex2f((GLfloat)_x, (GLfloat)_y + _height);
    glEnd();
}
I expect an output like this within the rectangle (debugging the same PNG within the same program/thread with CImg):
But I get this:
Can anyone spot the problem with my code?
From How pixel data are stored with CImg:
The values are not interleaved, and are ordered first along the X, Y, Z and V axis respectively (corresponding to the width, height, depth, dim dimensions), starting from the upper-left pixel to the bottom-right pixel of the instance image, with a classical scanline run. So, a color image with dim=3 and depth=1 will be stored in memory as: R1R2R3R4R5R6... G1G2G3G4G5G6... B1B2B3B4B5B6... (i.e. following a 'planar' structure), and not as R1G1B1R2G2B2R3G3B3... (interleaved channels).
OpenGL does not work with planar data; it expects interleaved pixel data. The easiest solution is to use a library other than CImg.
This does not fix the scaling issues, but it's a good place to start.
Use CImg<T>::permute_axes() to transform your image buffer to interleaved format, like this :
CImg<unsigned char> img("imageRGB.png");
img.permute_axes("cxyz"); // Convert to interleaved representation
// Now, use img.data() here as a pointer for OpenGL functions.
img.permute_axes("yzcx"); // Go back to planar representation (if needed).
CImg is indeed a great library, so all these kinds of transformations have been planned for in the library.
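Tying that back to the upload code in the question, an untested sketch (reusing the _filename, m_Width, m_Height and m_ID names from above) might look like:

CImg<unsigned char> image(_filename.c_str());
image.resize(m_Width, m_Height);

const int channels = image.spectrum(); // query before permuting: permute_axes reorders the dimensions
image.permute_axes("cxyz");            // planar R..G..B.. -> interleaved RGBRGB..., as described above

// Assumes the texture was previously allocated with glTexImage2D, as in the question.
// For 3-channel data whose row size is not a multiple of 4, also set
// glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before the upload.
glBindTexture(GL_TEXTURE_2D, m_ID);
if (channels == 4)
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGBA, GL_UNSIGNED_BYTE, image.data());
else if (channels == 3)
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGB, GL_UNSIGNED_BYTE, image.data());
glBindTexture(GL_TEXTURE_2D, 0);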

Matrix extraction using OpenCV

I ran into some trouble while extracting a matrix (cropping) using OpenCV. What's funny is that if I don't execute the line that "crops" the image, everything works fine. But if I do, I see horizontal multi-coloured lines in place of the image.
This is to show that the cropping takes place correctly.
cv::imshow("before", newimg);
//the line that "crops" the image
newimg = newimg(cv::Rect(leftcol, toprow, rightcol - leftcol, bottomrow - toprow));
cv::imshow("after", newimg);
The code that follows is where I bind the image to a texture so that I can use it in OpenGL.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, newimg.cols, newimg.rows,
0, GL_BGR, GL_UNSIGNED_BYTE, newimg.ptr());
glBindTexture(GL_TEXTURE_2D, 0);
And later, to draw...
float h = size;
float w = size * aspectRatio; // of the image. aspectRatio = width / height
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(x, y, z);
glTexCoord2f(0.0f, 1.0f); glVertex3f(x, y + h, z);
glTexCoord2f(1.0f, 1.0f); glVertex3f(x + w, y + h, z);
glTexCoord2f(1.0f, 0.0f); glVertex3f(x + w, y, z);
glEnd();
glDisable(GL_TEXTURE_2D);
All of this works well and I see the proper image drawn in the OpenGL window when I comment out that line in which I had cropped the image. I have checked the image type before and after the cropping, but the only difference seems to be the reduced number of rows and columns in the final image.
Here is the image that gets drawn when cropping has been done.
After a bit of research I found a way to solve the problem. The reason the image looked distorted is that, although the image had been cropped, newimg.step / newimg.elemSize() still reported the row length of the original image: cropping with cv::Rect produces a view into the original data, so each row in memory is still as wide as the original, and the pixels that are not part of the cropped region remain between the rows. That is probably why the "after" image has a gray area on the right. I might be wrong about this theory since I don't have a deep understanding of the subject, but it all started working properly once I inserted this line before calling glTexImage2D:
glPixelStorei(GL_UNPACK_ROW_LENGTH, newimg.step / newimg.elemSize());
Note: Since we are manipulating the pixel store of OpenGL, it's best to push the state before doing so:
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT);
And pop it out after you are done:
glPopClientAttrib();
This is to keep the modified pixel store state from leaking into, and corrupting, later texture uploads. I thank genpfault for pointing me in the right direction.
See 8. Know your pixel store state for more details.
https://www.opengl.org/archives/resources/features/KilgardTechniques/oglpitfall/
This post was also very useful.
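Putting those pieces together, a rough sketch of the full upload for a possibly-cropped (and therefore non-continuous) cv::Mat; the uploadMat helper name is mine, the rest mirrors the calls above:

// Hypothetical helper: upload a BGR cv::Mat, handling the row stride of cropped (ROI) views.
void uploadMat(const cv::Mat& img, GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT); // save the pixel store state
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);         // cv::Mat rows need not be 4-byte aligned
    glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(img.step / img.elemSize())); // stride of the parent image

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img.cols, img.rows,
                 0, GL_BGR, GL_UNSIGNED_BYTE, img.ptr());

    glPopClientAttrib();                           // restore the pixel store state
    glBindTexture(GL_TEXTURE_2D, 0);
}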

Unable to render a texture on a quad

I am trying to program a simple game in C++ using OpenGL for graphics. In my game, I have objects that are rendered onscreen as a white square. I would like to be able to bind an image as a texture to these objects, so that I can render an image onscreen instead of the white square. There's no restriction on the format of the image, though I've been using .png or .bmp for testing.
One of my object classes, Character.h, stores a GLuint* member variable called _texture and an unsigned char* member variable called _data (as an image handle and pixel data, respectively). Within Character.h is this function meant to bind the image as a texture to the Character object:
void loadTexture(const char* fileName)
{
    /* read pixel data */
    FILE* file;
    file = fopen(fileName, "rb");
    if (file == NULL) { cout << "ERROR: Image file not found!" << endl; }
    _data = (unsigned char*)malloc(_dimension * _dimension * 3);
    fread(_data, _dimension * _dimension * 3, 1, file);
    fclose(file);

    /* bind texture */
    glGenTextures(1, &_texture); // _texture is a member variable of type GLuint
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _dimension, _dimension, 0, GL_RGB, GL_UNSIGNED_BYTE, _data); // _dimension = 64.0f.
}
Once the image is bound, I then attempt to render the Character object with this function (which is a static function found in a separate class):
static void drawTexturedRectangle(float x, float y, float w, float h, GLuint texture, unsigned char* data)
{
    /* fractions are calculated to determine the position onscreen that the object is drawn at */
    float xFraction = (x / GlobalConstants::SCREEN_WIDTH) * 2;
    float yFraction = (y / GlobalConstants::SCREEN_HEIGHT) * 2;
    float wFraction = (w / GlobalConstants::SCREEN_WIDTH) * 2;
    float hFraction = (h / GlobalConstants::SCREEN_HEIGHT) * 2;

    glEnable(GL_TEXTURE_2D);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    glPushMatrix();
    glEnable(GL_BLEND);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(xFraction, yFraction);                         // bottom left corner
        glTexCoord2f(0.0f, 1.0f); glVertex2f(xFraction, yFraction + hFraction);             // top left corner
        glTexCoord2f(1.0f, 1.0f); glVertex2f(xFraction + wFraction, yFraction + hFraction); // top right corner
        glTexCoord2f(1.0f, 0.0f); glVertex2f(xFraction + wFraction, yFraction);             // bottom right corner
    glEnd();

    glDisable(GL_BLEND);
    glPopMatrix();
}
No matter how much I mess around with this, I cannot seem to get my program to do anything other than render a plain white square with no texture on it. I have checked to ensure that variables file, _texture, _dimension, and _data all contain values. Can anybody spot what I might be doing wrong?
I don't think you can just feed a raw file to glTexImage2D, unless you store your texture files in that exact format (which you probably don't). glTexImage2D expects a plain array of bytes representing texel colors, but image file formats typically don't store images like that. Even BMP has some header information in it.
Use a 3rd party image loader library like DevIL or SOIL to load your textures! These libraries can convert most common image file formats (png, jpg, tga, ...) into a byte stream that you can input to OpenGL!
On a side note:
After you call glTexImage2D(..., data), the content of data is copied into GPU memory (VRAM). From that point on you only have to bind the texture using its id, as it is already in VRAM. Calling glTexImage2D() every frame re-uploads the whole texture to VRAM each time, causing a big performance penalty.
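To make that side note concrete, here is a rough sketch of the load-once / bind-every-frame pattern. It uses stb_image purely as one example of such a loader library (DevIL or SOIL work the same way); the function and variable names are placeholders.

#include "stb_image.h" // single-header loader; define STB_IMAGE_IMPLEMENTATION in one source file

// Done once, at load time:
GLuint loadTexture(const char* path)
{
    int w = 0, h = 0, channels = 0;
    unsigned char* pixels = stbi_load(path, &w, &h, &channels, 4); // decode and force RGBA
    if (!pixels) return 0;

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels); // copied into VRAM here

    stbi_image_free(pixels); // the CPU-side copy is no longer needed
    return tex;
}

// Done every frame: just bind the existing texture, no re-upload.
// glBindTexture(GL_TEXTURE_2D, tex);
// ... draw the quad ...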

Apply hue/saturation filters to image with OpenGL

I have an image in OpenGL that I am attempting to apply a simple HSB filter to. The user selects a hue value, I shade the image appropriately, display it, and everyone is happy. The problem I am running into is that the code I have inherited that worked on a previous system (Solaris, presuming OpenGL 2.1) does not work on our current system (RHEL 5, OpenGL 3.0).
Right now, the image appears in grey-scale, no matter what saturation is set to. However, brightness does seem to be acting appropriately. The relevant code has been reproduced below:
// imageData - unsigned char[3*width*height]
// (red|green|blue)Channel - unsigned char[width*height]
// brightnessBias - float in range [-1/3,1/3]
// hsMatrix - float[4][4] Described by algorithm from
// http://www.graficaobscura.com/matrix/index.html
// (see Hue Rotation While Preserving Luminance)
glDrawPixels(width, height, format, GL_UNSIGNED_BYTE, imageData);
// Split into RGB channels
glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, redChannel);
glReadPixels(0, 0, width, height, GL_GREEN, GL_UNSIGNED_BYTE, greenChannel);
glReadPixels(0, 0, width, height, GL_BLUE, GL_UNSIGNED_BYTE, blueChannel);
// Redraw and blend RGB channels with scaling and bias
glPixelZoom(1.0, 1.0);
glRasterPos2i(0, height);
glPixelTransferf(GL_RED_BIAS, brightnessBias);
glPixelTransferf(GL_GREEN_BIAS, brightnessBias);
glPixelTransferf(GL_BLUE_BIAS, brightnessBias);
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][0]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][0]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][0]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, redChannel);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][1]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][1]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][1]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, greenChannel);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][2]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][2]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][2]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, blueChannel);
// Reset pixel transfer parameters
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, 1.0f);
glPixelTransferf(GL_GREEN_SCALE, 1.0f);
glPixelTransferf(GL_BLUE_SCALE, 1.0f);
glPixelTransferf(GL_RED_BIAS, 0.0f);
glPixelTransferf(GL_GREEN_BIAS, 0.0f);
glPixelTransferf(GL_BLUE_BIAS, 0.0f);
The brightness control works as intended; however, when the glPixelTransferf(GL_*_SCALE) calls are left in, the image is displayed in greyscale. Compounding all of this is the fact that I have no prior experience with OpenGL, so I find a lot of links for what I presume are more modern techniques that I simply can't make sense of.
EDIT:
I believe the theory behind what was being done is a hack to perform the matrix multiplication through the draw calls: GL_LUMINANCE treats the one value as the value for all three components, so if you follow the components through the drawing, you expect
// After glDrawPixels(..., redChannel)
new_red = red*hsMatrix[0][0]
new_green = red*hsMatrix[1][0]
new_blue = red*hsMatrix[2][0]
// After glDrawPixels(..., greenChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1]
// After glDrawPixels(..., blueChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1] + blue*hsMatrix[0][2]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1] + blue*hsMatrix[1][2]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1] + blue*hsMatrix[2][2]
Because it was turning out greyscale anyway, and based on a similar-ish example, I had thought I might need to do the glPixelTransfer calls before calling glDrawPixels, but that was amazingly slow.
Wow, what the hell is that?!
For your question, I'd replace GL_LUMINANCE in your 3 glDrawPixels calls with GL_RED, GL_GREEN and GL_BLUE respectively.
However:
glPixelTransfer is bad
glDrawPixels is bad
Is there a single reason why you're not using a super-simple fragment shader to do the conversion? It's a simple matrix multiplication, and you're on OpenGL 3.0...
Create a texture from imageData; this needs to be done only once.
Make a shader that reads the color from the texture, multiplies it by the color conversion matrix, and displays it (a rough sketch follows below).
Bind the computed color matrix.
Draw a fullscreen quad. Even a 5-year-old card will get 500 fps out of this.
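To sketch what that fragment shader could look like (uniform and variable names are placeholders, and this assumes hsMatrix is uploaded as a mat4; it is not the original poster's code):

// Rough sketch only: apply the hue/saturation matrix per fragment instead of via glPixelTransfer.
// GLSL 1.30 is available on an OpenGL 3.0 context.
const char* hueSatFragmentShader = R"(
    #version 130
    uniform sampler2D image;   // texture created once from imageData
    uniform mat4 colorMatrix;  // hsMatrix, uploaded with glUniformMatrix4fv
    in vec2 texCoord;
    out vec4 fragColor;
    void main()
    {
        vec4 c = texture(image, texCoord);
        fragColor = vec4((colorMatrix * vec4(c.rgb, 1.0)).rgb, c.a);
    }
)";
// Compile and link this with a trivial pass-through vertex shader, bind the texture to unit 0,
// set the uniforms, and draw a fullscreen quad. The brightness bias can live in the matrix's
// fourth column, since the input color is extended with w = 1.0.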