I'm trying to draw an image with OpenGL's immediate mode functions, but my output looks wrong. I've tried a few texture parameters and get the same result, sometimes with different colors. I can't pin down the problem, but I figure it's either the image loading or the texture setup. To cut to the chase, here is how I generate my texture (in a Texture2D class):
glBindTexture(GL_TEXTURE_2D, id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
A call to glTexImage2D follows: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _width, _height, 0, GL_RGB, GL_UNSIGNED_BYTE, data); where data is an array of unsigned char, all set to 255 (white). I then load a PNG file with CImg, like this:
CImg<unsigned char> image(_filename.c_str());
image.resize(m_Width, m_Height);
glBindTexture(GL_TEXTURE_2D, m_ID);
if(image.spectrum() == 4)
{
unsigned char* data = image.data();
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGBA, GL_UNSIGNED_BYTE, data);
}
else if(image.spectrum() == 3)
{
unsigned char* data = image.data();
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGB, GL_UNSIGNED_BYTE, data);
}
glBindTexture(GL_TEXTURE_2D, 0);
But then I try to draw the texture in immediate mode like this (the origin is the upper-left corner; the red rectangle around the texture is intentional):
void SimpleRenderer::drawTexture(Texture2D* _texture, float _x, float _y, float _width, float _height)
{
_texture->setActive(0);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(_x, _y);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(_x + _width, _y);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(_x + _width, _y + _height);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(_x, _y + _height);
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
glColor3ub(strokeR, strokeG, strokeB);
glBegin(GL_LINE_LOOP);
glVertex2f((GLfloat)_x, (GLfloat)_y);
glVertex2f((GLfloat)_x+_width, (GLfloat)_y);
glVertex2f((GLfloat)_x+_width, (GLfloat)_y+_height);
glVertex2f((GLfloat)_x, (GLfloat)_y+_height);
glEnd();
}
I expect output like this within the rectangle (the same PNG displayed for debugging with CImg, in the same program/thread):
But I get this:
Can anyone spot the problem with my code?
From How pixel data are stored with CImg:
The values are not interleaved, and are ordered first along the X, Y, Z and V axes respectively (corresponding to the width, height, depth, dim dimensions), starting from the upper-left pixel to the bottom-right pixel of the instance image, with a classical scanline run. So, a color image with dim=3 and depth=1 will be stored in memory as: R1R2R3R4R5R6......G1G2G3G4G5G6.......B1B2B3B4B5B6.... (i.e. following a 'planar' structure) and not as R1G1B1R2G2B2R3G3B3... (interleaved channels).
OpenGL does not work with planar data; it expects interleaved pixel data. The easiest solution is to use a library other than CImg.
This does not fix the scaling issues, but it's a good place to start.
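If switching libraries isn't an option, you can also repack the planar buffer by hand before uploading. A minimal sketch, assuming a 3-channel image and the variables from the question:
// Repack CImg's planar R...G...B... layout into interleaved RGBRGB...
std::vector<unsigned char> interleaved(m_Width * m_Height * 3);
const unsigned char* r = image.data(0, 0, 0, 0); // start of the R plane
const unsigned char* g = image.data(0, 0, 0, 1); // start of the G plane
const unsigned char* b = image.data(0, 0, 0, 2); // start of the B plane
for (int i = 0; i < m_Width * m_Height; ++i)
{
    interleaved[3 * i + 0] = r[i];
    interleaved[3 * i + 1] = g[i];
    interleaved[3 * i + 2] = b[i];
}
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGB, GL_UNSIGNED_BYTE, interleaved.data());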
Use CImg<T>::permute_axes() to transform your image buffer to interleaved format, like this:
CImg<unsigned char> img("imageRGB.png");
img.permute_axes("cxyz"); // Convert to interleaved representation
// Now, use img.data() here as a pointer for OpenGL functions.
img.permute_axes("yzcx"); // Go back to planar representation (if needed).
CImg is indeed a great library, so these kinds of transformations have all been anticipated by the library.
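Applied to the upload code from the question, the load path would look roughly like this (a sketch reusing the question's member variables):
CImg<unsigned char> image(_filename.c_str());
image.resize(m_Width, m_Height);
const GLenum format = (image.spectrum() == 4) ? GL_RGBA : GL_RGB; // decide before permuting
image.permute_axes("cxyz"); // planar -> interleaved
glBindTexture(GL_TEXTURE_2D, m_ID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, format, GL_UNSIGNED_BYTE, image.data());
glBindTexture(GL_TEXTURE_2D, 0);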
Related
I ran into some trouble while extracting a sub-matrix (cropping) using OpenCV. What's odd is that if I don't execute the line that "crops" the image, everything works fine. But if I do, I see horizontal multicolored lines in place of the image.
This is to show that the cropping takes place correctly.
cv::imshow("before", newimg);
//the line that "crops" the image
newimg = newimg(cv::Rect(leftcol, toprow, rightcol - leftcol, bottomrow - toprow));
cv::imshow("after", newimg);
The code that follows is where I bind the image to a texture so that I can use it in OpenGL.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, newimg.cols, newimg.rows,
0, GL_BGR, GL_UNSIGNED_BYTE, newimg.ptr());
glBindTexture(GL_TEXTURE_2D, 0);
And later, to draw...
float h = size;
float w = size * aspectRatio; // of the image. aspectRatio = width / height
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(x, y, z);
glTexCoord2f(0.0f, 1.0f); glVertex3f(x, y + h, z);
glTexCoord2f(1.0f, 1.0f); glVertex3f(x + w, y + h, z);
glTexCoord2f(1.0f, 0.0f); glVertex3f(x + w, y, z);
glEnd();
glDisable(GL_TEXTURE_2D);
All of this works well and I see the proper image drawn in the OpenGL window when I comment out that line in which I had cropped the image. I have checked the image type before and after the cropping, but the only difference seems to be the reduced number of rows and columns in the final image.
Here is the image that gets drawn when cropping has been done.
After a bit of research I found a way to solve the problem. The image looked distorted because, even though the image had been cropped, newimg.step / newimg.elemSize() still reported the original width. In other words, only the row and column counts had changed in the output image; each row of the underlying buffer still contained the pixels that were cropped out. That is possibly why the "after" image has a gray area on the right. I might be wrong about this theory, since I don't have a deep understanding of the subject, but it all started working properly once I inserted this line before calling glTexImage2D:
glPixelStorei(GL_UNPACK_ROW_LENGTH, newimg.step / newimg.elemSize());
Note: Since we are manipulating the pixel store of OpenGL, it's best to push the state before doing so:
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT);
And pop it out after you are done:
glPopClientAttrib();
This keeps the modified pixel store state from corrupting later texture uploads. I thank genpfault for pointing me in the right direction.
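Putting the pieces together, the upload ends up looking roughly like this (a sketch assembled from the snippets above):
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT); // save the pixel store state
glPixelStorei(GL_UNPACK_ROW_LENGTH, newimg.step / newimg.elemSize()); // stride of the original image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, newimg.cols, newimg.rows,
             0, GL_BGR, GL_UNSIGNED_BYTE, newimg.ptr());
glPopClientAttrib(); // restore it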
See "8. Know your pixel store state" at https://www.opengl.org/archives/resources/features/KilgardTechniques/oglpitfall/ for more details.
This post was also very useful.
I am trying to program a simple game in C++ using OpenGL for graphics. In my game, I have objects that are rendered onscreen as a white square. I would like to be able to bind an image as a texture to these objects, so that I can render an image onscreen instead of the white square. There's no restriction on the format of the image, though I've been using .png or .bmp for testing.
One of my object classes, Character.h, stores a GLuint* member variable called _texture and an unsigned char* member variable called _data (as an image handle and pixel data, respectively). Within Character.h is this function meant to bind the image as a texture to the Character object:
void loadTexture(const char* fileName)
{
/* read pixel data */
FILE* file;
file = fopen(fileName, "rb");
if (file == NULL) {cout << "ERROR: Image file not found!" << endl;}
_data = (unsigned char*)malloc(_dimension * _dimension * 3);
fread(_data, _dimension * _dimension * 3, 1, file);
fclose(file);
/* bind texture */
glGenTextures(1, &_texture); // _texture is a member variable of type GLuint
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _dimension, _dimension, 0, GL_RGB, GL_UNSIGNED_BYTE, _data); // _dimension = 64.0f.
}
Once the image is bound, I then attempt to render the Character object with this function (which is a static function found in a separate class):
static void drawTexturedRectangle(float x, float y, float w, float h, GLuint texture, unsigned char* data)
{
/* fractions are calculated to determine the position onscreen that the object is drawn at */
float xFraction = (x / GlobalConstants::SCREEN_WIDTH) * 2;
float yFraction = (y / GlobalConstants::SCREEN_HEIGHT) * 2;
float wFraction = (w / GlobalConstants::SCREEN_WIDTH) * 2;
float hFraction = (h / GlobalConstants::SCREEN_HEIGHT) * 2;
glEnable(GL_TEXTURE_2D);
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glPushMatrix();
glEnable(GL_BLEND);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(xFraction, yFraction); // bottom left corner
glTexCoord2f(0.0f, 1.0f); glVertex2f(xFraction, yFraction + hFraction); // top left corner
glTexCoord2f(1.0f, 1.0f); glVertex2f(xFraction + wFraction, yFraction + hFraction); // top right corner
glTexCoord2f(1.0f, 0.0f); glVertex2f(xFraction + wFraction, yFraction); // bottom right corner
glEnd();
glDisable(GL_BLEND);
glPopMatrix();
}
No matter how much I mess around with this, I cannot seem to get my program to do anything other than render a plain white square with no texture on it. I have checked to ensure that variables file, _texture, _dimension, and _data all contain values. Can anybody spot what I might be doing wrong?
I don't think you can just feed a raw file to glTexImage2D, unless you store your texture files in exactly that format (which you probably don't). glTexImage2D expects a flat array of bytes (representing texel colors), but image file formats typically don't store images like that. Even BMP has some header information in it.
Use a 3rd party image loader library like DevIL or SOIL to load your textures! These libraries can convert most common image file formats (png, jpg, tga, ...) into a byte stream that you can input to OpenGL!
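For example, with SOIL the whole load-and-upload step collapses into a single call. A minimal sketch (the file name is hypothetical; error handling kept short):
#include "SOIL.h"

GLuint tex = SOIL_load_OGL_texture(
    "character.png",     // hypothetical file name
    SOIL_LOAD_AUTO,      // keep the image's own channel count
    SOIL_CREATE_NEW_ID,  // let SOIL generate a texture id for us
    SOIL_FLAG_INVERT_Y); // flip to match OpenGL's bottom-left origin
if (tex == 0)
{
    // SOIL_last_result() describes what went wrong
}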
On a side note:
After you call glTexImage2D(..., data), the contents of data are copied into GPU memory (VRAM). From that point on you only have to bind the texture by its id, as the data is already in VRAM. Calling glTexImage2D() every frame re-uploads the whole texture to VRAM each time, causing a big performance penalty.
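In other words, upload once at load time and only bind per frame. A sketch using the question's own variables:
// At load time (once):
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _dimension, _dimension, 0, GL_RGB, GL_UNSIGNED_BYTE, _data);

// Every frame: no glTexImage2D, just bind and draw.
glBindTexture(GL_TEXTURE_2D, _texture);
// ... glBegin(GL_QUADS) / glTexCoord2f / glVertex2f ... glEnd();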
So I have looked around and I haven't found the best way to actually implement this in SDL and OpenGL. At the moment I can display text to the screen with a TTF, but it's not displaying the way it should. Attached is a screenshot of how the engine looks when displaying text. From what I can see in my code, I think what is happening is that I am not blitting my text texture onto the main SDL_Surface that I am using as my window. I say that because in the picture, my character (who has a red collision box around him) is being covered up by the text texture I'm rendering the text with. Any ideas of what I can do?
In my game loop I call beginDraw() and endDraw() before and after I draw everything to the screen.
Picture: http://public.gamedev.net/uploads/monthly_07_2012/post-200874-0-25909300-1342404845_thumb.png
All the engine code can be found here: https://github.com/Jevi/SDL_GL_ENGINE
P.S.: I'm going to make another post for this, but I might as well ask here too if you are already looking at my code. I have been noticing some memory issues with the engine: in Task Manager, the memory allocated to my program climbs constantly from a starting point of around 11,000 K.
void graphics::SDL_GL_RenderText(float x1, float y1, int width, int height, const char* text, int ptsize, const char* ttfLoc, int r, int g, int b){
SDL_Surface* temp;
SDL_Surface* temp2;
SDL_Rect rect;
TTF_Font* font;
SDL_Color textColor;
unsigned int texture;
font = TTF_OpenFont( ttfLoc , ptsize );
textColor.r = r;
textColor.g = g;
textColor.b = b;
temp = TTF_RenderText_Blended( font, text, textColor );
// width = nextpoweroftwo(width);
// height = nextpoweroftwo(height);
temp2 = SDL_CreateRGBSurface(0, width, height, 32, r, g, b, 0);
SDL_BlitSurface(temp, 0, temp2, 0);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, temp2->w, temp2->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, temp2->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* prepare to render our texture */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glColor3f(1.0f, 1.0f, 1.0f);
/* Draw a quad at location */
glBegin(GL_QUADS);
glTexCoord2d(0,0);
glVertex2f(x1, y1);
glTexCoord2d(1,0);
glVertex2f(x1 + temp2->w, y1);
glTexCoord2d(1,1);
glVertex2f(x1 + temp2->w, y1 + temp2->h);
glTexCoord2d(0,1);
glVertex2f(x1, y1 + temp2->h);
glEnd();
glFinish();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
SDL_FreeSurface(temp);
SDL_FreeSurface(temp2);
TTF_CloseFont(font);
glDeleteTextures(1, &texture);
}
void graphics::GL_BeginDraw(){
glClear(GL_COLOR_BUFFER_BIT);
glPushMatrix(); //Start rendering phase
glOrtho(0,width,height,0,-1,1); //Set the matrix
}
void graphics::GL_EndDraw(){
glPopMatrix(); //End rendering phase
SDL_GL_SwapBuffers();
glFinish();
}
In OpenGL, how can I select an area from an image-file that was loaded using IMG_Load()?
(I am working on a tilemap for a simple 2D game)
I'm using the following principle to load an image-file into a texture:
GLuint loadTexture( const std::string &fileName ) {
SDL_Surface *image = IMG_Load(fileName.c_str());
unsigned object(0);
glGenTextures(1, &object);
glBindTexture(GL_TEXTURE_2D, object);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);
SDL_FreeSurface(image);
return object;
}
I then use the following to actually draw the texture in my rendering-part:
glColor4ub(255,255,255,255);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex2f(x,y);
glTexCoord2d(1,0); glVertex2f(x+w,y);
glTexCoord2d(1,1); glVertex2f(x+w,y+h);
glTexCoord2d(0,1); glVertex2f(x,y+h);
glEnd();
Now what I need is a function that allows me to select certain rectangular parts from the GLuint that I get from calling loadTexture( const std::string &fileName ), such that I can then use the above code to bind these parts to rectangles and then draw them to the screen. Something like:
GLuint getTileTexture( GLuint spritesheet, int x, int y, int w, int h )
Go ahead and load the entire collage into a texture. Then select a subset of it using glTexCoord when you render your geometry.
glTexSubImage2D will not help in any way. It allows you to add more than one file to a single texture, not create multiple textures from a single file.
Example code:
void RenderSprite( GLuint spritesheet, unsigned spritex, unsigned spritey, unsigned texturew, unsigned textureh, int x, int y, int w, int h )
{
glColor4ub(255,255,255,255);
glBindTexture(GL_TEXTURE_2D, spritesheet);
glBegin(GL_QUADS);
glTexCoord2d(spritex/(double)texturew,spritey/(double)textureh);
glVertex2f(x,y);
glTexCoord2d((spritex+w)/(double)texturew,spritey/(double)textureh);
glVertex2f(x+w,y);
glTexCoord2d((spritex+w)/(double)texturew,(spritey+h)/(double)textureh);
glVertex2f(x+w,y+h);
glTexCoord2d(spritex/(double)texturew,(spritey+h)/(double)textureh);
glVertex2f(x,y+h);
glEnd();
}
Although Ben Voigt's answer is the usual way to go, if you really want an extra texture for the tiles (which may help with filtering at the edges) you can use glGetTexImage and play a bit with the glPixelStore parameters:
GLuint getTileTexture(GLuint spritesheet, int x, int y, int w, int h)
{
glBindTexture(GL_TEXTURE_2D, spritesheet);
// first we fetch the complete texture
GLint width, height;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
GLubyte *data = new GLubyte[width*height*4];
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// now we take only a sub-rectangle from this data
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // plus wrap modes as needed
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
GL_RGBA, GL_UNSIGNED_BYTE, data+4*(y*width+x));
// clean up
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
delete[] data;
return texture;
}
But keep in mind that this function always reads the whole texture atlas into CPU memory and then copies a sub-rectangle into the new, smaller texture. So it would be a good idea to create all the needed sprite textures in one go and read the data back only once. In that case you can also just drop the atlas texture completely and read the image into system memory with IMG_Load, distributing it into the individual sprite textures from there. Or, if you really need the large texture, then at least use a PBO to copy its data into (with GL_DYNAMIC_COPY usage or similar), so that it need not leave GPU memory.
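For instance, here is a hedged sketch of slicing a sprite straight out of the IMG_Load surface (it assumes a tightly packed 32-bit RGBA surface; the function name is made up):
GLuint tileFromSurface(SDL_Surface* atlas, int x, int y, int w, int h)
{
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Tell GL how long a full atlas row is, then point it at the tile's
    // first pixel; no CPU-side repacking needed.
    glPixelStorei(GL_UNPACK_ROW_LENGTH, atlas->w);
    const GLubyte* base = static_cast<const GLubyte*>(atlas->pixels);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, base + 4 * (y * atlas->w + x));
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    return texture;
}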
I have an image in OpenGL that I am attempting to apply a simple HSB filter to. The user selects a hue value, I shade the image appropriately, display it, and everyone is happy. The problem I am running into is that the code I have inherited that worked on a previous system (Solaris, presuming OpenGL 2.1) does not work on our current system (RHEL 5, OpenGL 3.0).
Right now, the image appears in grey-scale, no matter what saturation is set to. However, brightness does seem to be acting appropriately. The relevant code has been reproduced below:
// imageData - unsigned char[3*width*height]
// (red|green|blue)Channel - unsigned char[width*height]
// brightnessBias - float in range [-1/3,1/3]
// hsMatrix - float[4][4] Described by algorithm from
// http://www.graficaobscura.com/matrix/index.html
// (see Hue Rotation While Preserving Luminance)
glDrawPixels(width, height, format, GL_UNSIGNED_BYTE, imageData);
// Split into RGB channels
glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, redChannel);
glReadPixels(0, 0, width, height, GL_GREEN, GL_UNSIGNED_BYTE, greenChannel);
glReadPixels(0, 0, width, height, GL_BLUE, GL_UNSIGNED_BYTE, blueChannel);
// Redraw and blend RGB channels with scaling and bias
glPixelZoom(1.0, 1.0);
glRasterPos2i(0, height);
glPixelTransferf(GL_RED_BIAS, brightnessBias);
glPixelTransferf(GL_GREEN_BIAS, brightnessBias);
glPixelTransferf(GL_BLUE_BIAS, brightnessBias);
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][0]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][0]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][0]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, redChannel);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][1]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][1]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][1]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, greenChannel);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][2]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][2]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][2]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, blueChannel);
// Reset pixel transfer parameters
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, 1.0f);
glPixelTransferf(GL_GREEN_SCALE, 1.0f);
glPixelTransferf(GL_BLUE_SCALE, 1.0f);
glPixelTransferf(GL_RED_BIAS, 0.0f);
glPixelTransferf(GL_GREEN_BIAS, 0.0f);
glPixelTransferf(GL_BLUE_BIAS, 0.0f);
The brightness control works as intended; however, when the glPixelTransferf(GL_*_SCALE) calls are left in, the image is displayed in greyscale. Compounding all of this is the fact that I have no prior experience with OpenGL, so I keep finding links to what I presume are more modern techniques that I simply can't make sense of.
EDIT:
I believe the theory behind what was being done was to perform the matrix multiplication through the draw calls, exploiting the fact that GL_LUMINANCE replicates the single value across all three components. Following the components through the three draws, you expect:
// After glDrawPixels(..., redChannel)
new_red = red*hsMatrix[0][0]
new_green = red*hsMatrix[1][0]
new_blue = red*hsMatrix[2][0]
// After glDrawPixels(..., greenChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1]
// After glDrawPixels(..., blueChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1] + blue*hsMatrix[0][2]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1] + blue*hsMatrix[1][2]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1] + blue*hsMatrix[2][2]
Because it was coming out greyscale anyway, and going from a similar example, I thought I might need to make the glPixelTransfer calls before calling glDrawPixels, but that was amazingly slow.
Wow, what the hell is that?!
For your question, I'd replace GL_LUMINANCE in your three glDrawPixels calls with GL_RED, GL_GREEN and GL_BLUE respectively.
However:
glPixelTransfer is bad
glDrawPixels is bad
Is there a single reason why you're not using a super-simple fragment shader to do the conversion? It's a simple matrix multiplication, and you're on OpenGL 3.0...
Create a texture from imageData, this needs to be done only once.
Make a shader that reads the color from the texture, multiplies it by the color-conversion matrix, and outputs the result (a sketch follows this list)
Bind the computed color matrix
Draw a fullscreen quad. Even a 5-year-old card will get 500 fps out of this.
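A sketch of what that fragment shader could look like (GLSL 1.30, which OpenGL 3.0 ships with; the uniform names are made up):
// Hypothetical fragment shader source, e.g. kept in a C++ string:
#version 130
uniform sampler2D image;   // texture created once from imageData
uniform mat4 colorMatrix;  // hue/saturation matrix plus brightness bias
in vec2 texCoord;          // passed through from the vertex shader
out vec4 fragColor;
void main()
{
    vec4 c = texture(image, texCoord);
    fragColor = vec4((colorMatrix * vec4(c.rgb, 1.0)).rgb, c.a);
}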