I'm making a game using pygame + PyOpenGL, and right now I'm trying to build a video player in this context. To do so I use ffmpeg to load different video formats, convert each frame to an OpenGL texture, as designed below, and then play the video.
class Texture(object):
    def __init__(self, data, w=0, h=0):
        """
        Initialize the texture from 3 different types of data:
        filename  = open the image, get its string and produce the texture
        surface   = get its string and produce the texture
        raw bytes = use the data directly with the w and h provided
        """
        if type(data) == str:
            texture_data = self.load_image(data)
        elif type(data) == pygame.Surface:
            texture_data = pygame.image.tostring(data, "RGBA", True)
            self.w, self.h = data.get_size()
        elif type(data) == bytes:
            self.w, self.h = w, h
            texture_data = data
        self.texID = 0
        self.load_texture(texture_data)

    def load_image(self, data):
        texture_surface = pygame.image.load(data).convert_alpha()
        texture_data = pygame.image.tostring(texture_surface, "RGBA", True)
        self.w, self.h = texture_surface.get_size()
        return texture_data

    def load_texture(self, texture_data):
        self.texID = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, self.texID)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, self.w,
                     self.h, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                     texture_data)
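For context, here is a minimal sketch of how raw frames from ffmpeg might feed the Texture class above. The ffmpeg command-line invocation, the file name video.mp4 and the frame size are assumptions, not taken from my actual code:

import subprocess

# Ask the ffmpeg CLI for raw RGBA frames on stdout (assumed video size below).
W, H = 640, 360
proc = subprocess.Popen(
    ["ffmpeg", "-i", "video.mp4", "-f", "rawvideo", "-pix_fmt", "rgba", "-"],
    stdout=subprocess.PIPE)

frame_size = W * H * 4
frames = []
frame = proc.stdout.read(frame_size)
while len(frame) == frame_size:
    # one Texture per frame, as in the class above
    # (rows may need flipping depending on the orientation you want)
    frames.append(Texture(frame, W, H))
    frame = proc.stdout.read(frame_size)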
The problem is that when I load all the textures of a given video, my RAM usage goes through the ceiling, about 800 MB. But it's possible to work around this by blitting each texture as it loads, as shown below.
def render():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    glDisable(GL_LIGHTING)
    glEnable(GL_TEXTURE_2D)
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
    glClearColor(0, 0, 0, 1.0)

def Draw(texture, top, left, bottom, right):
    """
    Draw the image on the OpenGL screen
    """
    # Make sure the camera is looking at the position (0, 0, 0)
    glBindTexture(GL_TEXTURE_2D, texture.texID)
    glBegin(GL_QUADS)
    # The top left of the image must be at the indicated position
    glTexCoord2f(0.0, 1.0)
    glVertex2f(left, top)
    glTexCoord2f(1.0, 1.0)
    glVertex2f(right, top)
    glTexCoord2f(1.0, 0.0)
    glVertex2f(right, bottom)
    glTexCoord2f(0.0, 0.0)
    glVertex2f(left, bottom)
    glEnd()

def update(t):
    render()
    Draw(t, -0.5, -0.5, 0.5, 0.5)
    # Check for basic events on the pygame interface
    for event in pygame.event.get():
        BASIC_Game.QUIT_Event(event)
    pygame.display.flip()
Although this reduces the RAM consumption to an acceptable value, it makes the loading time longer than the video itself.
I really don't understand why OpenGL works this way, but is there a way to create the textures efficiently without blitting each one first?
I can't tell for sure based on the code in your question right now, but I'm going to guess it's because you're creating a new Texture instance for each frame, which means you're calling glGenTextures(1) for every frame of your video. This allocates a new buffer in memory for every frame and then stores a full, uncompressed copy of that frame.
When you blit the image, you're not generating a new texture, just overwriting the old one. That is the approach you want, but the way you're implementing it is inefficient.
There are a number of ways to change the data in a texture without blitting on the CPU (assuming pygame blitting) to make things go faster; some are listed in this answer:
https://stackoverflow.com/a/13248668/1122135
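The gist of that approach, as a minimal sketch (the class and method names here are mine, not from your code): generate one texture, allocate its storage once, and then overwrite that storage with each new frame via glTexSubImage2D instead of calling glGenTextures/glTexImage2D per frame.

from OpenGL.GL import *

class VideoTexture(object):
    """Hypothetical reusable texture: allocate storage once, update it per frame."""
    def __init__(self, w, h):
        self.w, self.h = w, h
        self.texID = glGenTextures(1)                 # one texture for the whole video
        glBindTexture(GL_TEXTURE_2D, self.texID)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
        # reserve w*h RGBA storage without uploading any data yet
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, None)

    def update_frame(self, frame_bytes):
        # overwrite the existing storage with the next frame's RGBA bytes
        glBindTexture(GL_TEXTURE_2D, self.texID)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, self.w, self.h,
                        GL_RGBA, GL_UNSIGNED_BYTE, frame_bytes)

That keeps both the RAM footprint (one decoded frame at a time) and the GPU memory (one texture) constant regardless of the video length.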
Related
I would like to make a game that is internally 320x240 but renders to the screen at whole-number multiples of this (640x480, 960x720, etc.). I am going for retro 2D pixel graphics.
I have achieved this by setting the internal resolution via glOrtho():
glOrtho(0, 320, 240, 0, 0, 1);
And then I scale up the output resolution by a factor of 3, like this:
glViewport(0,0,960,720);
window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 960, 720, SDL_WINDOW_OPENGL);
I draw rectangles like this:
glBegin(GL_LINE_LOOP);
glVertex2f(rect_x, rect_y);
glVertex2f(rect_x + rect_w, rect_y);
glVertex2f(rect_x + rect_w, rect_y + rect_h);
glVertex2f(rect_x, rect_y + rect_h);
glEnd();
It works perfectly at 320x240 (not scaled).
When I scale up to 960x720, the pixel rendering all works just fine! However, it seems the GL_LINE_LOOP is not drawn on a 320x240 canvas and scaled up, but drawn directly on the final 960x720 canvas. The result is 1px lines in a 3px world :(
How do I draw lines to the 320x240 glOrtho canvas, instead of the 960x720 output canvas?
There is no "320x240 glOrtho canvas". There is only the window's actual resolution: 960x720.
All you are doing is scaling up the coordinates of the primitives you render. So your code says to render a line from, for example, (20, 20) to (40, 40), and OpenGL (eventually) scales those coordinates by 3 in each dimension: (60, 60) and (120, 120).
But that's only dealing with the end points. What happens in the middle is still based on the fact that you're rendering at the window's actual resolution.
Even if you employed glLineWidth to change the width of your lines, that would only fix the line widths. It would not fix the fact that the rasterization of lines is based on the actual resolution you're rendering at. So diagonal lines won't have the pixelated appearance you likely want.
The only way to do this properly is to, well, do it properly. Render to an image that is actually 320x240, then draw that image at the window's actual resolution.
You'll have to create a texture of that size, then attach it to a framebuffer object. Bind the FBO for rendering and render to it (with the viewport set to the image's size). Then unbind the FBO, and draw that texture to the window (with the viewport set to the window's resolution).
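Roughly, those steps look like this. It's written as a PyOpenGL-style sketch, since that is what most of this page uses; the GL calls are identical in C, and the helper names (draw_scene, draw_fullscreen_quad) and the 320x240 / 960x720 sizes are just placeholders:

from OpenGL.GL import *

def make_lowres_target(w=320, h=240):
    # colour texture that will hold the 320x240 image
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, None)
    # framebuffer object with that texture as its colour attachment
    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0)
    assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
    glBindFramebuffer(GL_FRAMEBUFFER, 0)
    return fbo, tex

def draw_frame(fbo, tex, draw_scene, draw_fullscreen_quad):
    # 1) render the scene into the 320x240 texture
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glViewport(0, 0, 320, 240)
    draw_scene()
    # 2) draw that texture to the real 960x720 window
    glBindFramebuffer(GL_FRAMEBUFFER, 0)
    glViewport(0, 0, 960, 720)
    glBindTexture(GL_TEXTURE_2D, tex)
    draw_fullscreen_quad()

Because the lines are rasterized into the 320x240 texture and the texture is magnified with GL_NEAREST, diagonal lines keep the chunky 3px-pixel look.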
As I mentioned in my comment, Intel OpenGL drivers have problems with direct rendering to texture, and I do not know of any workaround that works. In that case the only way around this is to use glReadPixels to copy the screen content into CPU memory and then copy it back to the GPU as a texture. Of course that is much, much slower than direct rendering to texture. So here is the deal:
1. Set the low-res view. Do not change the resolution of your window, just the glViewport values, then render your scene in the low resolution (in just a fraction of the screen space).
2. Copy the rendered screen into a texture.
3. Set the target (multiplied) resolution view.
4. Render the texture. Do not forget to use the GL_NEAREST filter. The most important thing is that you swap buffers only after this, not before, otherwise you would get flickering.
Here is the C++ source for this:
void gl_draw()
{
    // render resolution and multiplier
    const int xs=320,ys=200,m=2;
    // [low res render pass]
    glViewport(0,0,xs,ys);
    glClearColor(0.0,0.0,0.0,1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);
    // 50 random lines
    RandSeed=0x12345678;
    glColor3f(1.0,1.0,1.0);
    glBegin(GL_LINES);
    for (int i=0;i<100;i++)
        glVertex2f(2.0*Random()-1.0,2.0*Random()-1.0);
    glEnd();
    // [multiplied resolution render pass]
    static bool _init=true;
    static GLuint txrid=0;  // texture id (static so it persists between calls)
    BYTE map[xs*ys*3];      // RGB
    // init texture
    if (_init)              // you should also delete the texture on exit of the app ...
    {
        // create texture
        _init=false;
        glGenTextures(1,&txrid);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D,txrid);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST); // must be nearest !!!
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_REPLACE); // use the texture colour directly
        glDisable(GL_TEXTURE_2D);
    }
    // copy low res screen to CPU memory
    glReadPixels(0,0,xs,ys,GL_RGB,GL_UNSIGNED_BYTE,map);
    // and then to GPU texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, xs, ys, 0, GL_RGB, GL_UNSIGNED_BYTE, map);
    // set multiplied resolution view
    glViewport(0,0,m*xs,m*ys);
    glClear(GL_COLOR_BUFFER_BIT);
    // render low res screen as texture
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glFlush();
    SwapBuffers(hdc); // swap buffers only here !!!
}
I tested this on some Intel HD graphics (god knows which version) I had at my disposal and it works (while the standard render-to-texture approaches do not).
I want to get depth buffer images captured from different views of a 3D object. To do this with PyOpenGL I use the following code:
def get_depth(LookAt_x, LookAt_y, LookAt_z):
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    gluLookAt(LookAt_x, LookAt_y, LookAt_z, 0, 0, 0, 0, 1, 0)
    draw_object()   # glBegin() ... glVertex3f() ... glEnd()
    glReadBuffer(GL_FRONT)
    depth_image = glReadPixels(0, 0, image_size, image_size, GL_DEPTH_COMPONENT, GL_FLOAT)
    return depth_image

def draw_object(self):
    glBegin(GL_TRIANGLES)
    for tri in self.get_triangles():
        glNormal3f(tri.normal.x, tri.normal.y, tri.normal.z)
        glVertex3f(tri.points[0].x, tri.points[0].y, tri.points[0].z)
        glVertex3f(tri.points[1].x, tri.points[1].y, tri.points[1].z)
        glVertex3f(tri.points[2].x, tri.points[2].y, tri.points[2].z)
    glEnd()
code link: https://www.linux.com/blog/python-stl-model-loading-and-display-opengl
I call this function with different viewpoints (LookAt_x, LookAt_y, LookAt_z). However, redrawing the object from scratch every time costs too much time.
Is there a way, once the object has been drawn, to just change the viewpoint and get the depth images?
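One common way to avoid re-submitting every triangle in this kind of legacy fixed-function setup is a display list: compile the geometry once, then each viewpoint only replays the compiled list and reads the depth buffer, so the per-triangle Python loop runs a single time. A minimal sketch under that assumption (the helper name build_object_list is illustrative, not from the code above):

from OpenGL.GL import *
from OpenGL.GLU import gluLookAt

def build_object_list(obj):
    # compile the existing glBegin/glVertex3f/glEnd calls into a display list once
    list_id = glGenLists(1)
    glNewList(list_id, GL_COMPILE)
    obj.draw_object()
    glEndList()
    return list_id

def get_depth(list_id, LookAt_x, LookAt_y, LookAt_z, image_size):
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    gluLookAt(LookAt_x, LookAt_y, LookAt_z, 0, 0, 0, 0, 1, 0)
    glCallList(list_id)   # replay the precompiled geometry, no Python loop
    return glReadPixels(0, 0, image_size, image_size,
                        GL_DEPTH_COMPONENT, GL_FLOAT)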
I would like to know how to make OpenGL not "blur" an upscaled texture, as it seems that blurring is the default behaviour for transformations. The texture is a POT PNG file. The code used to define a texture and put it on the screen is this:
class Texture():
    # simple texture class
    # designed for 32 bit png images (with alpha channel)
    def __init__(self, fileName):
        self.texID = 0
        self.LoadTexture(fileName)

    def LoadTexture(self, fileName):
        try:
            textureSurface = pygame.image.load(fileName).convert_alpha()
            textureData = pygame.image.tostring(textureSurface, "RGBA", True)
            self.w, self.h = textureSurface.get_size()
            self.texID = glGenTextures(1)
            glBindTexture(GL_TEXTURE_2D, self.texID)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureSurface.get_width(),
                         textureSurface.get_height(), 0, GL_RGBA, GL_UNSIGNED_BYTE,
                         textureData)
        except Exception as E:
            print(E)
            print("can't open the texture: %s" % (fileName))

    def __del__(self):
        glDeleteTextures(self.texID)

    def get_width(self):
        return self.w

    def get_height(self):
        return self.h

def blit(texture, x, y):
    """
    Function that blits a given texture on the screen
    """
    # We put the texture onto the screen
    glBindTexture(GL_TEXTURE_2D, texture.texID)
    # Now we must position the image
    glBegin(GL_QUADS)
    # We calculate each of the points relative to the center of the screen
    top = -y / (HEIGHT // 2) + 1.0
    left = x / (WIDTH // 2) - 1.0
    right = left + texture.w / (WIDTH // 2)
    down = top - texture.h / (HEIGHT // 2)
    # We position each point of the image
    glTexCoord2f(0.0, 1.0)
    glVertex2f(left, top)
    glTexCoord2f(1.0, 1.0)
    glVertex2f(right, top)
    glTexCoord2f(1.0, 0.0)
    glVertex2f(right, down)
    glTexCoord2f(0.0, 0.0)
    glVertex2f(left, down)
    glEnd()
I configured OpenGL as follows:
def ConfigureOpenGL(w, h):
    #glShadeModel(GL_SMOOTH)
    #glClearColor(0.0, 0.0, 0.0, 1.0)
    #glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glViewport(0, 0, w, h)
    glMatrixMode(GL_PROJECTION)
    #glLoadIdentity()
    #gluOrtho2D(-8.0, 8.0, -6.0, 6.0)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    glShadeModel(GL_SMOOTH)
    glClearColor(0.0, 0.0, 0.0, 0.0)
    glClearDepth(1.0)
    glDisable(GL_DEPTH_TEST)
    glDisable(GL_LIGHTING)
    glDepthFunc(GL_LEQUAL)
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST)
    glEnable(GL_BLEND)

Surface = pygame.display.set_mode((WIDTH, HEIGHT), OPENGL | DOUBLEBUF)  # |FULLSCREEN
ConfigureOpenGL(WIDTH, HEIGHT)
Before putting anything on the screen I also call this method:
def OpenGLRender(self):
    """
    Used to prepare the screen to render
    """
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    glDisable(GL_LIGHTING)
    glEnable(GL_TEXTURE_2D)
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
    glClearColor(1.0, 1.0, 1.0, 1.0)
I'm using PyOpenGL 3.0.2
Use GL_NEAREST in your glTexParameteri() calls:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST)
I have an image in OpenGL that I am attempting to apply a simple HSB filter to. The user selects a hue value, I shade the image appropriately, display it, and everyone is happy. The problem I am running into is that the code I have inherited that worked on a previous system (Solaris, presuming OpenGL 2.1) does not work on our current system (RHEL 5, OpenGL 3.0).
Right now, the image appears in grey-scale, no matter what saturation is set to. However, brightness does seem to be acting appropriately. The relevant code has been reproduced below:
// imageData - unsigned char[3*width*height]
// (red|green|blue)Channel - unsigned char[width*height]
// brightnessBias - float in range [-1/3,1/3]
// hsMatrix - float[4][4] Described by algorithm from
// http://www.graficaobscura.com/matrix/index.html
// (see Hue Rotation While Preserving Luminance)
glDrawPixels(width, height, format, GL_UNSIGNED_BYTE, imageData);
// Split into RGB channels
glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, redChannel);
glReadPixels(0, 0, width, height, GL_GREEN, GL_UNSIGNED_BYTE, greenChannel);
glReadPixels(0, 0, width, height, GL_BLUE, GL_UNSIGNED_BYTE, blueChannel);
// Redraw and blend RGB channels with scaling and bias
glPixelZoom(1.0, 1.0);
glRasterPos2i(0, height);
glPixelTransferf(GL_RED_BIAS, brightnessBias);
glPixelTransferf(GL_GREEN_BIAS, brightnessBias);
glPixelTransferf(GL_BLUE_BIAS, brightnessBias);
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][0]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][0]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][0]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, redChannel);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][1]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][1]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][1]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, greenChannel);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][2]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][2]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][2]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, blueChannel);
// Reset pixel transfer parameters
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, 1.0f);
glPixelTransferf(GL_GREEN_SCALE, 1.0f);
glPixelTransferf(GL_BLUE_SCALE, 1.0f);
glPixelTransferf(GL_RED_BIAS, 0.0f);
glPixelTransferf(GL_GREEN_BIAS, 0.0f);
glPixelTransferf(GL_BLUE_BIAS, 0.0f);
The brightness control works as intended; however, when the glPixelTransferf(GL_*_SCALE) calls are left in, the image is displayed in greyscale. Compounding all of this is the fact that I have no prior experience with OpenGL, so I find a lot of links to what I presume are more modern techniques that I simply can't make sense of.
EDIT:
I believe the theory behind what was being done was a hack to perform the matrix multiplication through the draw calls: GL_LUMINANCE treats the single value as the value for all three components, so if you follow the components through the drawing you expect:
// After glDrawPixels(..., redChannel)
new_red = red*hsMatrix[0][0]
new_green = red*hsMatrix[1][0]
new_blue = red*hsMatrix[2][0]
// After glDrawPixels(..., greenChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1]
// After glDrawPixels(..., blueChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1] + blue*hsMatrix[0][2]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1] + blue*hsMatrix[1][2]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1] + blue*hsMatrix[2][2]
Because it was turning out greyscale anyway, and based on a similar-ish example, I had thought that I might need to make the glPixelTransfer calls before calling glDrawPixels, but that was amazingly slow.
Wow, what the hell is that?!
For your question: I'd replace GL_LUMINANCE in your 3 glDrawPixels calls with GL_RED, GL_GREEN and GL_BLUE respectively.
However :
glPixelTransfer is bad
glDrawPixels is bad
Is there a single reason why you're not using a super-simple fragment shader to do the conversion? It's a simple matrix multiplication, and you're on OpenGL 3.0...
Create a texture from imageData; this needs to be done only once.
Make a shader that reads the color from the texture, multiplies it by the color conversion matrix, and displays it (see the sketch after this list).
Bind the computed color matrix.
Draw a fullscreen quad. Even a 5-year-old card will get 500 fps out of this.
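A minimal sketch of that shader approach, written with PyOpenGL (as used elsewhere on this page) since the GLSL and GL calls translate directly to C. The uniform names and the hs_matrix_3x3 / brightness_bias placeholders stand in for your existing hsMatrix and brightnessBias data:

from OpenGL.GL import *
from OpenGL.GL.shaders import compileProgram, compileShader

VERT = """
#version 120
void main() {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
"""

FRAG = """
#version 120
uniform sampler2D image;
uniform mat3 hsMatrix;          // hue/saturation rotation matrix
uniform float brightnessBias;
void main() {
    vec3 rgb = texture2D(image, gl_TexCoord[0].st).rgb;
    gl_FragColor = vec4(hsMatrix * rgb + vec3(brightnessBias), 1.0);
}
"""

# placeholders: identity matrix and zero bias; plug in the graficaobscura matrix here
hs_matrix_3x3 = [1, 0, 0, 0, 1, 0, 0, 0, 1]   # column-major 3x3
brightness_bias = 0.0

program = compileProgram(compileShader(VERT, GL_VERTEX_SHADER),
                         compileShader(FRAG, GL_FRAGMENT_SHADER))
glUseProgram(program)
glUniformMatrix3fv(glGetUniformLocation(program, "hsMatrix"),
                   1, GL_FALSE, hs_matrix_3x3)
glUniform1f(glGetUniformLocation(program, "brightnessBias"), brightness_bias)
glUniform1i(glGetUniformLocation(program, "image"), 0)   # texture unit 0

# bind the texture created once from imageData, then draw one quad covering the view
glBegin(GL_QUADS)
glTexCoord2f(0, 0); glVertex2f(-1, -1)
glTexCoord2f(1, 0); glVertex2f(+1, -1)
glTexCoord2f(1, 1); glVertex2f(+1, +1)
glTexCoord2f(0, 1); glVertex2f(-1, +1)
glEnd()

The whole per-channel glReadPixels/glDrawPixels dance collapses into one texture upload and one draw, with the matrix applied per fragment on the GPU.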