Unable to render a texture on a quad - C++

I am trying to program a simple game in C++ using OpenGL for graphics. In my game, I have objects that are rendered onscreen as a white square. I would like to be able to bind an image as a texture to these objects, so that I can render an image onscreen instead of the white square. There's no restriction on the format of the image, though I've been using .png or .bmp for testing.
One of my object classes, Character.h, stores a GLuint member variable called _texture and an unsigned char* member variable called _data (a texture handle and the pixel data, respectively). Within Character.h is this function, meant to bind the image as a texture to the Character object:
void loadTexture(const char* fileName)
{
    /* read pixel data */
    FILE* file;
    file = fopen(fileName, "rb");
    if (file == NULL) {cout << "ERROR: Image file not found!" << endl;}
    _data = (unsigned char*)malloc(_dimension * _dimension * 3);
    fread(_data, _dimension * _dimension * 3, 1, file);
    fclose(file);
    /* bind texture */
    glGenTextures(1, &_texture); // _texture is a member variable of type GLuint
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _dimension, _dimension, 0, GL_RGB, GL_UNSIGNED_BYTE, _data); // _dimension = 64.0f.
}
Once the image is bound, I then attempt to render the Character object with this function (which is a static function found in a separate class):
static void drawTexturedRectangle(float x, float y, float w, float h, GLuint texture, unsigned char* data)
{
    /* fractions are calculated to determine the position onscreen that the object is drawn at */
    float xFraction = (x / GlobalConstants::SCREEN_WIDTH) * 2;
    float yFraction = (y / GlobalConstants::SCREEN_HEIGHT) * 2;
    float wFraction = (w / GlobalConstants::SCREEN_WIDTH) * 2;
    float hFraction = (h / GlobalConstants::SCREEN_HEIGHT) * 2;
    glEnable(GL_TEXTURE_2D);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    glPushMatrix();
    glEnable(GL_BLEND);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(xFraction, yFraction); // bottom left corner
    glTexCoord2f(0.0f, 1.0f); glVertex2f(xFraction, yFraction + hFraction); // top left corner
    glTexCoord2f(1.0f, 1.0f); glVertex2f(xFraction + wFraction, yFraction + hFraction); // top right corner
    glTexCoord2f(1.0f, 0.0f); glVertex2f(xFraction + wFraction, yFraction); // bottom right corner
    glEnd();
    glDisable(GL_BLEND);
    glPopMatrix();
}
No matter how much I mess around with this, I cannot seem to get my program to do anything other than render a plain white square with no texture on it. I have checked to ensure that variables file, _texture, _dimension, and _data all contain values. Can anybody spot what I might be doing wrong?

I don't think you can just feed a raw image file to glTexImage2D, unless you store your textures in that exact raw format (which you probably don't). glTexImage2D expects a big array of bytes (representing texel colors), but image file formats typically don't store pixels like that. Even BMP has some header information in it.
Use a 3rd-party image loading library like DevIL or SOIL to load your textures! These libraries can convert most common image file formats (PNG, JPG, TGA, ...) into a byte stream that you can feed to OpenGL.
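For example, with SOIL the loading step could look roughly like this (a sketch, assuming SOIL's header is available as SOIL.h and keeping your member variable names; error handling kept minimal):

#include <SOIL.h>

void loadTexture(const char* fileName)
{
    int width = 0, height = 0, channels = 0;
    /* SOIL decodes PNG/BMP/... into a tightly packed RGB byte array */
    unsigned char* pixels = SOIL_load_image(fileName, &width, &height, &channels, SOIL_LOAD_RGB);
    if (pixels == NULL) { cout << "ERROR: could not load " << fileName << endl; return; }
    glGenTextures(1, &_texture);
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* SOIL rows are tightly packed, no padding */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    SOIL_free_image_data(pixels); /* the pixel data now lives in VRAM */
}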
On a side note:
After you call glTexImage2D(..., data), the contents of data are copied into GPU memory (VRAM). From that point on you only have to bind the texture using its id, as it is already in VRAM. Calling glTexImage2D() in each frame re-uploads the whole texture to VRAM every frame, causing a big performance penalty.
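In your drawTexturedRectangle() that means the glTexImage2D call (and the data parameter) can simply be dropped; a sketch of the relevant part, keeping your names:

static void drawTexturedRectangle(float x, float y, float w, float h, GLuint texture)
{
    /* ... fraction calculations as before ... */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture); /* binding is enough, the pixels are already in VRAM */
    glBegin(GL_QUADS);
    /* ... glTexCoord2f / glVertex2f calls as before ... */
    glEnd();
}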

Related

Immediate mode texturing weird output

I'm trying to simply draw an image with OpenGL's immediate mode functions.
However, my output is kind of weird. I tried a few texture parameters, but I get the same result, sometimes with different colors. I can't quite figure out the problem, but I figure it's either the image loading or the texture setup. I'll cut to the chase; here is how I generate my texture (in a Texture2D class):
glBindTexture(GL_TEXTURE_2D, id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
A call to glTexImage2D follows, like this: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _width, _height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);, where data is an array of unsigned char set to 255, white. I then load a PNG file with CImg, like this:
CImg<unsigned char> image(_filename.c_str());
image.resize(m_Width, m_Height);
glBindTexture(GL_TEXTURE_2D, m_ID);
if(image.spectrum() == 4)
{
    unsigned char* data = image.data();
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGBA, GL_UNSIGNED_BYTE, data);
}
else if(image.spectrum() == 3)
{
    unsigned char* data = image.data();
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGB, GL_UNSIGNED_BYTE, data);
}
glBindTexture(GL_TEXTURE_2D, 0);
But when I try to draw the texture in immediate mode like this (the origin is the upper left corner; note that the red rectangle around the texture is intentional):
void SimpleRenderer::drawTexture(Texture2D* _texture, float _x, float _y, float _width, float _height)
{
    _texture->setActive(0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f);
    glVertex2f(_x, _y);
    glTexCoord2f(1.0f, 0.0f);
    glVertex2f(_x + _width, _y);
    glTexCoord2f(1.0f, 1.0f);
    glVertex2f(_x + _width, _y + _height);
    glTexCoord2f(0.0f, 1.0f);
    glVertex2f(_x, _y + _height);
    glEnd();
    glBindTexture(GL_TEXTURE_2D, 0);
    glColor3ub(strokeR, strokeG, strokeB);
    glBegin(GL_LINE_LOOP);
    glVertex2f((GLfloat)_x, (GLfloat)_y);
    glVertex2f((GLfloat)_x+_width, (GLfloat)_y);
    glVertex2f((GLfloat)_x+_width, (GLfloat)_y+_height);
    glVertex2f((GLfloat)_x, (GLfloat)_y+_height);
    glEnd();
}
I expect an output like this within the rectangle (debugging the same PNG within the same program/thread with CImg):
But I get this:
Can anyone spot the problem with my code?
From How pixel data are stored with CImg:
The values are not interleaved, and are ordered first along the X, Y, Z and V axes respectively (corresponding to the width, height, depth, dim dimensions), starting from the upper-left pixel to the bottom-right pixel of the instance image, with a classical scanline run. So, a color image with dim=3 and depth=1 will be stored in memory as: R1R2R3R4R5R6...G1G2G3G4G5G6...B1B2B3B4B5B6... (i.e. following a 'planar' structure) and not as R1G1B1R2G2B2R3G3B3... (interleaved channels).
OpenGL does not work with planar data, it expects interleaved pixel data. Easiest solution is to use a library other than CImg.
This does not fix the scaling issues, but it's a good place to start.
Use CImg<T>::permute_axes() to transform your image buffer into an interleaved format, like this:
CImg<unsigned char> img("imageRGB.png");
img.permute_axes("cxyz"); // Convert to interleaved representation
// Now, use img.data() here as a pointer for OpenGL functions.
img.permute_axes("yzcx"); // Go back to planar representation (if needed).
CImg is indeed a great library, so all these kinds of transformations have been planned for in the library.
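Slotted into the loading code from the question, the upload could then look roughly like this (a sketch; it assumes the texture storage was already allocated with glTexImage2D as in the question, and note that permute_axes() changes the axes in place, so spectrum() has to be checked before the call):

CImg<unsigned char> image(_filename.c_str());
image.resize(m_Width, m_Height);
const bool hasAlpha = (image.spectrum() == 4); /* check before permuting, the axes change */
image.permute_axes("cxyz");                    /* planar R..G..B.. -> interleaved RGBRGB.. */
glBindTexture(GL_TEXTURE_2D, m_ID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height,
                hasAlpha ? GL_RGBA : GL_RGB, GL_UNSIGNED_BYTE, image.data());
glBindTexture(GL_TEXTURE_2D, 0);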

loading texture from bitmap

It appears that I am loading the image wrong. The image appears all scrambled. What am I doing wrong here?
int DrawGLScene(GLvoid) // Here's Where We Do All The Drawing
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
    glLoadIdentity(); // Reset The Current Modelview Matrix
    glEnable(GL_TEXTURE_2D);
    GLuint texture = 0;
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glBindTexture(GL_TEXTURE_2D, texture);
    FIBITMAP* bitmap = FreeImage_Load(FreeImage_GetFileType("C:/Untitled.bmp", 0), "C:/Untitled.bmp", BMP_DEFAULT);
    FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);
    int nWidth = FreeImage_GetWidth(pImage);
    int nHeight = FreeImage_GetHeight(pImage);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, nWidth, nHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));
    FreeImage_Unload(pImage);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, 1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, -1.0f);
    glEnd();
    RECT desktop;
    const HWND hDesktop = GetDesktopWindow();
    GetWindowRect(hDesktop, &desktop);
    long horizontal = desktop.right;
    long vertical = desktop.bottom;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glDisable(GL_TEXTURE_2D);
    return TRUE; // Keep Going
}
What I see when the program runs:
What I expect to see when the program runs (the bitmap I am loading):
Part of the problem here seems to be a mismatch between the format of the pixel data you're getting from FreeImage and the format expected by glTexImage2D.
FreeImage_ConvertTo32Bits is going to return an image with 32-bit pixel data. I haven't used FreeImage but usually that would mean that each pixel would have 4 8-bit components representing red, green, blue and alpha (opacity). The format of the data you get from the BMP file depends on the file and may or may not have an alpha channel in it. However it seems safe to assume that FreeImage_ConvertTo32Bits will return data with alpha set to 255 if your original image is fully opaque (either because your BMP file format lacked an alpha channel or because you set it to opaque when you created the file).
This is all fine, but the problem comes with the call to glTexImage2D. This function can take pixel data in lots of different formats. The format it uses for interpreting the data is determined by the parameters type and format as described here. The value of format in your code is GL_RGB. This says that each pixel has three components describing its red, green and blue components, which appear in that order in the pixel data. The value of type (GL_UNSIGNED_BYTE) says each component is a single byte in size. The problem is that this description doesn't account for the alpha channel, so glTexImage2D reads the pixel data from the wrong places. Also, it seems that the order of the colours that FreeImage produces is blue-green-red, as pointed out by Roger Rowland. The fix is simple: set the format to GL_BGRA to tell glTexImage2D about the alpha components and the order of the data:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, nWidth, nHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));
To see if this problem could explain the weird outputs, think about the big blocks of blue colour in your original image. According to GIMP this is largely blue with a smaller amount of green (R=0, G=162, B=232). So we have something like this:
B G R A B G R A B G R A ... meaning of bytes from FreeImage_GetBits
1 - 0 1 1 - 0 1 1 - 0 1 ... "values" of bytes (0 -> 0, - -> 162, 1 -> 232 or 255)
R G B R G B R G B R G B ... what glTexImage2D thinks the bytes mean
The texture pixel colours here are: orange, light yellow, bright cyan and a purply colour. These repeat because reading 4 pixels consumes 12 bytes, which is exactly equal to 3 of the original pixels. This explains the pattern you see running over much of your output image. (Originally I had gone through this using the RGBA order, but the BGRA version is a better fit for the image - the difference is especially clear at the start of the rows.)
However I must admit I'm not sure what's going on at the beginning and how to relate it to the pink square! So there might be other problems here, but hopefully using GL_BGRA will be a step in the right direction.
You are making a 32 bpp image, so you need to have the correct texture declarations:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, nWidth, nHeight,
0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));
Also, the FreeImage_GetBits function returns data which is normally aligned so that each row ends on a 4-byte boundary. So if your image is 32 bpp (as yours is), or if your image width is a power of two, it will be OK, but otherwise you will need to set the unpack alignment appropriately.
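For reference, the unpack alignment is set with glPixelStorei before the upload; a minimal, hypothetical sketch (which value to use depends on how the rows of your source data are padded, with 1 meaning no padding at all):

/* hypothetical 24 bpp upload where the rows are tightly packed */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);          /* don't assume 4-byte padded rows */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, nWidth, nHeight, 0,
             GL_BGR, GL_UNSIGNED_BYTE, pixels); /* 'pixels' stands for your 24 bpp data */
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);          /* restore the default afterwards */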

OpenGL Texture messed up

Here is my code to load a texture. I have tried to load a file using this example; it is a GIF file. Can I ask whether GIF files can be loaded, or is it only raw files that can be loaded?
void setUpTextures()
{
    printf("Set up Textures\n");
    //This is the array that will contain the image color information.
    // 3 represents red, green and blue color info.
    // 512 is the height and width of texture.
    unsigned char earth[512 * 512 * 3];
    // This opens your image file.
    FILE* f = fopen("/Users/Raaj/Desktop/earth.gif", "r");
    if (f){
        printf("file loaded\n");
    }else{
        printf("no load\n");
        fclose(f);
        return;
    }
    fread(earth, 512 * 512 * 3, 1, f);
    fclose(f);
    glEnable(GL_TEXTURE_2D);
    //Here 1 is the texture id
    //The texture id is different for each texture (duh?)
    glBindTexture(GL_TEXTURE_2D, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    //In this line you only supply the last argument which is your color info array,
    //and the dimensions of the texture (512)
    glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE,earth);
    glDisable(GL_TEXTURE_2D);
}
void Draw()
{
    glEnable(GL_TEXTURE_2D);
    // Here you specify WHICH texture you will bind to your coordinates.
    glBindTexture(GL_TEXTURE_2D,1);
    glColor3f(1,1,1);
    double n=6;
    glBegin(GL_QUADS);
    glTexCoord2d(0,50); glVertex2f(n/2, n/2);
    glTexCoord2d(50,0); glVertex2f(n/2, -n/2);
    glTexCoord2d(50,50); glVertex2f(-n/2, -n/2);
    glTexCoord2d(0,50); glVertex2f(-n/2, n/2);
    glEnd();
    // Do not forget this line, as then the rest of the colors in your
    // Program will get messed up!!!
    glDisable(GL_TEXTURE_2D);
}
And all I get is this:
Can I know why?
Basically, no, you can't just give arbitrary texture formats to GL - it only wants pixel data, not encoded files.
Your code, as posted, clearly declares an array for 24-bit RGB data, but then you open and attempt to read that much data from a GIF file. GIF is a compressed and palettised format, complete with header information etc., so that's never going to work.
You need to use an image loader to decompress the file into raw pixels.
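For example, with the single-header stb_image library (which can decode GIF, PNG, JPEG, ...), the body of setUpTextures() could look roughly like this (a sketch; the texture id and file path are taken from your code, error handling kept minimal):

#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

int width = 0, height = 0, channels = 0;
// decode the file into a tightly packed RGB array (the final 3 forces 3 components per pixel)
unsigned char* pixels = stbi_load("/Users/Raaj/Desktop/earth.gif", &width, &height, &channels, 3);
if (!pixels) {
    printf("no load\n");
    return;
}
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // stbi rows are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
stbi_image_free(pixels);               // the data now lives in the texture
glDisable(GL_TEXTURE_2D);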
Also, your texture coordinates don't look right. There are four vertices, but only 3 distinct coordinates used, and 2 adjacent coordinates are diagonally opposite each other. Even if your texture was loaded correctly, that's unlikely to be what you want.
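With the usual mapping, each corner of the quad gets its own corner of the texture in the 0..1 range; a sketch with your vertex order (you may still need to swap the 0/1 in the second coordinate depending on whether your loader stores rows top-to-bottom or bottom-to-top):

glBegin(GL_QUADS);
glTexCoord2d(1,1); glVertex2f( n/2,  n/2); // top right
glTexCoord2d(1,0); glVertex2f( n/2, -n/2); // bottom right
glTexCoord2d(0,0); glVertex2f(-n/2, -n/2); // bottom left
glTexCoord2d(0,1); glVertex2f(-n/2,  n/2); // top left
glEnd();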

OpenGL Tiles/Blitting/Clipping

In OpenGL, how can I select an area from an image-file that was loaded using IMG_Load()?
(I am working on a tilemap for a simple 2D game)
I'm using the following principle to load an image-file into a texture:
GLuint loadTexture( const std::string &fileName ) {
    SDL_Surface *image = IMG_Load(fileName.c_str());
    unsigned object(0);
    glGenTextures(1, &object);
    glBindTexture(GL_TEXTURE_2D, object);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);
    SDL_FreeSurface(image);
    return object;
}
I then use the following to actually draw the texture in my rendering-part:
glColor4ub(255,255,255,255);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex2f(x,y);
glTexCoord2d(1,0); glVertex2f(x+w,y);
glTexCoord2d(1,1); glVertex2f(x+w,y+h);
glTexCoord2d(0,1); glVertex2f(x,y+h);
glEnd();
Now what I need is a function that allows me to select certain rectangular parts from the GLuint that I get from calling loadTexture( const std::string &fileName ), such that I can then use the above code to bind these parts to rectangles and then draw them to the screen. Something like:
GLuint getTileTexture( GLuint spritesheet, int x, int y, int w, int h )
Go ahead and load the entire collage into a texture. Then select a subset of it using glTexCoord when you render your geometry.
glTexSubImage2D will not help in any way. It allows you to add more than one file to a single texture, not create multiple textures from a single file.
Example code:
void RenderSprite( GLuint spritesheet, unsigned spritex, unsigned spritey, unsigned texturew, unsigned textureh, int x, int y, int w, int h )
{
    glColor4ub(255,255,255,255);
    glBindTexture(GL_TEXTURE_2D, spritesheet);
    glBegin(GL_QUADS);
    glTexCoord2d(spritex/(double)texturew,spritey/(double)textureh);
    glVertex2f(x,y);
    glTexCoord2d((spritex+w)/(double)texturew,spritey/(double)textureh);
    glVertex2f(x+w,y);
    glTexCoord2d((spritex+w)/(double)texturew,(spritey+h)/(double)textureh);
    glVertex2f(x+w,y+h);
    glTexCoord2d(spritex/(double)texturew,(spritey+h)/(double)textureh);
    glVertex2f(x,y+h);
    glEnd();
}
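Hypothetical usage, drawing the 32x32 tile that starts at (64, 0) in a 256x256 sheet at screen position (100, 100):

RenderSprite(spritesheet, 64, 0, 256, 256, 100, 100, 32, 32);

Note that this code uses the on-screen width/height for the tile size in the sheet as well, so it assumes sprites are drawn at their native size.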
Although Ben Voigt's answer is the usual way to go, if you really want an extra texture for the tiles (which may help with filtering at the edges) you can use glGetTexImage and play a bit with the glPixelStore parameters:
GLuint getTileTexture(GLuint spritesheet, int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, spritesheet);
    // first we fetch the complete texture
    GLint width, height;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    GLubyte *data = new GLubyte[width*height*4];
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // now we take only a sub-rectangle from this data
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, /*filter+wrapping*/, /*whatever*/);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, data+4*(y*width+x));
    // clean up
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    delete[] data;
    return texture;
}
But keep in mind that this function always reads the whole texture atlas into CPU memory and then copies a sub-part into the new, smaller texture. So it would be a good idea to create all the needed sprite textures in one go and only read the data back once. In that case you can also just drop the atlas texture completely and only read the image into system memory with IMG_Load, distributing it into the individual sprite textures. Or, if you really need the large texture, then at least use a PBO to copy its data into (with GL_DYNAMIC_COPY usage or the like), so that the data need not leave GPU memory.
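A sketch of that last IMG_Load idea, cutting sprites straight out of the loaded surface without ever creating an atlas texture (a hypothetical helper; it assumes a 32-bit RGBA surface, and the GL_UNPACK_SKIP_* parameters select the sub-rectangle):

GLuint tileFromSurface(SDL_Surface* image, int x, int y, int w, int h)
{
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, image->w); // row pitch of the full source image
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, x);       // left edge of the sub-rectangle
    glPixelStorei(GL_UNPACK_SKIP_ROWS, y);         // top edge of the sub-rectangle
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);        // restore the defaults
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    return texture;
}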

Apply hue/saturation filters to image with OpenGL

I have an image in OpenGL that I am attempting to apply a simple HSB filter to. The user selects a hue value, I shade the image appropriately, display it, and everyone is happy. The problem I am running into is that the code I have inherited that worked on a previous system (Solaris, presuming OpenGL 2.1) does not work on our current system (RHEL 5, OpenGL 3.0).
Right now, the image appears in grey-scale, no matter what saturation is set to. However, brightness does seem to be acting appropriately. The relevant code has been reproduced below:
// imageData - unsigned char[3*width*height]
// (red|green|blue)Channel - unsigned char[width*height]
// brightnessBias - float in range [-1/3,1/3]
// hsMatrix - float[4][4] Described by algorithm from
// http://www.graficaobscura.com/matrix/index.html
// (see Hue Rotation While Preserving Luminance)
glDrawPixels(width, height, format, GL_UNSIGNED_BYTE, imageData);
// Split into RGB channels
glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, redChannel);
glReadPixels(0, 0, width, height, GL_GREEN, GL_UNSIGNED_BYTE, greenChannel);
glReadPixels(0, 0, width, height, GL_BLUE, GL_UNSIGNED_BYTE, blueChannel);
// Redraw and blend RGB channels with scaling and bias
glPixelZoom(1.0, 1.0);
glRasterPos2i(0, height);
glPixelTransferf(GL_RED_BIAS, brightnessBias);
glPixelTransferf(GL_GREEN_BIAS, brightnessBias);
glPixelTransferf(GL_BLUE_BIAS, brightnessBias);
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][0]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][0]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][0]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, redChannel);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][1]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][1]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][1]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, greenChannel);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][2]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][2]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][2]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, blueChannel);
// Reset pixel transfer parameters
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, 1.0f);
glPixelTransferf(GL_GREEN_SCALE, 1.0f);
glPixelTransferf(GL_BLUE_SCALE, 1.0f);
glPixelTransferf(GL_RED_BIAS, 0.0f);
glPixelTransferf(GL_GREEN_BIAS, 0.0f);
glPixelTransferf(GL_BLUE_BIAS, 0.0f);
The brightness control works as intended, however, when the glPixelTransferf(GL_*_SCALE) calls are left in, the image is displayed in greyscale. Compounding all of this is the fact that I have no prior experience with OpenGL, so I find a lot of links for what I presume are more modern techniques that I simply can't make sense of.
EDIT:
I believe the theory behind what was being done was a hack to do the matrix multiplication through the draw calls: GL_LUMINANCE treats the single value as the value for all three components, so if you follow the components through the drawing, you expect:
// After glDrawPixels(..., redChannel)
new_red = red*hsMatrix[0][0]
new_green = red*hsMatrix[1][0]
new_blue = red*hsMatrix[2][0]
// After glDrawPixels(..., greenChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1]
// After glDrawPixels(..., blueChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1] + blue*hsMatrix[0][2]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1] + blue*hsMatrix[1][2]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1] + blue*hsMatrix[2][2]
Because it was turning out greyscale anyway and from a similar-ish example, I had thought that I might have needed to do the glPixelTransfer calls before calling glDrawPixels, but that was amazingly slow.
Wow, what the hell is that?!
For your question, I'd replace GL_LUMINANCE in your 3 glDrawPixels by GL_RED, GL_GREEN and GL_BLUE respectively.
However :
glPixelTransfer is bad
glDrawPixels is bad
Is there a single reason why you're not using a super-simple fragment shader to do the conversion? It's a simple matrix multiplication, and you're on OpenGL 3.0...
Create a texture from imageData; this needs to be done only once.
Make a shader that reads the color from the texture, multiplies it by the color conversion matrix, and displays it (a minimal sketch follows below).
Bind the computed color matrix.
Draw a fullscreen quad. Even a 5-year-old card will get 500 fps out of this.
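A minimal sketch of such a fragment shader, written in GLSL 1.20 so it also runs on a compatibility context (the uniform names image and hsMatrix are placeholders; the host-side glCreateShader/glUniformMatrix4fv setup is the usual boilerplate):

const char* hueSatFragmentShader =
    "#version 120\n"
    "uniform sampler2D image;\n"
    "uniform mat4 hsMatrix; /* 4x4 hue/saturation matrix from the graficaobscura algorithm */\n"
    "void main()\n"
    "{\n"
    "    vec4 color = texture2D(image, gl_TexCoord[0].st);\n"
    "    gl_FragColor = vec4((hsMatrix * vec4(color.rgb, 1.0)).rgb, color.a);\n"
    "}\n";

The vertex shader only needs to pass the position and texture coordinate through; drawing a fullscreen textured quad with this program replaces the whole glPixelTransfer/glDrawPixels pipeline, and changing hue, saturation or brightness is just a glUniformMatrix4fv call per frame.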