I have a sprite_sheet (example sheet) that I loaded as vector data, and I need to create a new texture from a specified area of it. Something like this:
const std::vector <char> image_data = LoadPNG("sprite_sheet.png"); // This works.
glMakeTexture([0, 0], [50, 50], configs, image_data) // to display only one 50x50 square.
But I don't know how, because most GL functions only work on the full image, like glTexImage2D:
// It only takes a width and height; the bottom-left coordinate is fixed at (0, 0) -- I want to change this.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &(_sprite_sheet.data[0]));
Is there a way to do that without loading the full sprite_sheet as a texture?
Note: I'm using picoPNG to decode the PNG, and I CAN load the PNG; I just can't make a texture from a specified area.
Since not all of the code is shown, I'll assume the following:
char *data; // data of 8-bit per sample RGBA image
int w, h; // size of the image
int sx, sy, sw, sh; // position and size of area you want to crop.
glTexImage2D can upload just a region of interest if you set the pixel-unpack state first. You do it as follows:
glPixelStorei(GL_UNPACK_ROW_LENGTH, w);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, sx);
glPixelStorei(GL_UNPACK_SKIP_ROWS, sy);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // picoPNG/LodePNG tightly packs everything, no row padding
glTexImage2D(GL_TEXTURE_2D, level, internalFormat, sw, sh, border, format, type, data);
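Remember to reset the unpack state afterwards so later uploads aren't affected; a minimal follow-up under the same assumptions:
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // back to the default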
I am trying to load a height map for terrain using SOIL. I use the following code:
unsigned char* image = SOIL_load_image(fname.c_str(), &width, &height, 0, SOIL_LOAD_L);
glBindTexture(GL_TEXTURE_2D, name);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, image);
However, my terrain looks stepped because the height is quantized to only 8 bits. How can I load a 16-bit height map with SOIL? Or should I use another image library for this task?
As Andon M. Coleman recommended, I used a raw binary format to store the 16-bit height data and got the required result.
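For reference, here is a minimal sketch of that approach, assuming a headerless little-endian 16-bit raw file whose dimensions are known up front (the file name and the width/height/name variables are placeholders):
#include <fstream>
#include <vector>

// Read width*height 16-bit samples from the headerless raw file.
std::ifstream file("heightmap.raw", std::ios::binary);
std::vector<unsigned short> samples(width * height);
file.read(reinterpret_cast<char*>(samples.data()), samples.size() * sizeof(unsigned short));

// Upload as a 16-bit luminance texture instead of the 8-bit one SOIL produced.
glBindTexture(GL_TEXTURE_2D, name);
glPixelStorei(GL_UNPACK_ALIGNMENT, 2); // rows are tightly packed 16-bit samples
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT, samples.data());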
I have created objects in the scene using a texture image. But there is a problem: instead of blue, the colour comes out green. What could be the reason for that bug?
Try using GL_BGR as the source pixel format instead; many image loaders store pixels in BGR order:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
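If your data also has an alpha channel, the analogous change would be GL_BGRA as the source format:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);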
In OpenGL, how can I select an area from an image-file that was loaded using IMG_Load()?
(I am working on a tilemap for a simple 2D game)
I'm using the following principle to load an image-file into a texture:
GLuint loadTexture( const std::string &fileName ) {
    SDL_Surface *image = IMG_Load(fileName.c_str());
    unsigned object(0);
    glGenTextures(1, &object);
    glBindTexture(GL_TEXTURE_2D, object);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);
    SDL_FreeSurface(image);
    return object;
}
I then use the following to actually draw the texture in my rendering-part:
glColor4ub(255,255,255,255);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex2f(x,y);
glTexCoord2d(1,0); glVertex2f(x+w,y);
glTexCoord2d(1,1); glVertex2f(x+w,y+h);
glTexCoord2d(0,1); glVertex2f(x,y+h);
glEnd();
Now what I need is a function that lets me select rectangular parts of the GLuint returned by loadTexture( const std::string &fileName ), so that I can bind those parts with the code above and draw them to the screen. Something like:
GLuint getTileTexture( GLuint spritesheet, int x, int y, int w, int h )
Go ahead and load the entire collage into a texture. Then select a subset of it using glTexCoord when you render your geometry.
glTexSubImage2D will not help in any way. It allows you to add more than one file to a single texture, not create multiple textures from a single file.
Example code:
void RenderSprite( GLuint spritesheet, unsigned spritex, unsigned spritey, unsigned texturew, unsigned textureh, int x, int y, int w, int h )
{
    glColor4ub(255,255,255,255);
    glBindTexture(GL_TEXTURE_2D, spritesheet);
    glBegin(GL_QUADS);
        glTexCoord2d(spritex/(double)texturew,     spritey/(double)textureh);     glVertex2f(x,   y);
        glTexCoord2d((spritex+w)/(double)texturew, spritey/(double)textureh);     glVertex2f(x+w, y);
        glTexCoord2d((spritex+w)/(double)texturew, (spritey+h)/(double)textureh); glVertex2f(x+w, y+h);
        glTexCoord2d(spritex/(double)texturew,     (spritey+h)/(double)textureh); glVertex2f(x,   y+h);
    glEnd();
}
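For example, to draw a (hypothetical) 32x32 tile whose corner sits at (64, 0) in a 256x256 sprite sheet at screen position (100, 100):
RenderSprite(spritesheet, 64, 0, 256, 256, 100, 100, 32, 32);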
Although Ben Voigt's answer is the usual way to go, if you really want an extra texture for the tiles (which may help with filtering at the edges) you can use glGetTexImage and play a bit with the glPixelStore parameters:
GLuint getTileTexture(GLuint spritesheet, int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, spritesheet);

    // first we fetch the complete texture
    GLint width, height;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    GLubyte *data = new GLubyte[width*height*4];
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // now we take only a sub-rectangle from this data
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // or whatever filtering you want
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // plus any wrapping modes
    glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, data+4*(y*width+x));

    // clean up
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    delete[] data;
    return texture;
}
But keep in mind that this function always reads the whole texture atlas into CPU memory and then copies a sub-part into the new, smaller texture. So it would be a good idea to create all the needed sprite textures in one go and only read the data back once. In that case you could also drop the atlas texture completely and just read the image into system memory with IMG_Load, distributing it into the individual sprite textures. Or, if you really need the large texture, then at least use a PBO to copy its data into (with GL_DYNAMIC_COPY usage or similar), so the data need not leave GPU memory.
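A rough sketch of that last PBO idea (assuming an RGBA8 atlas; the buffer and variable names are made up for illustration):
// Copy the whole atlas into a pixel buffer object once, staying on the GPU.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindTexture(GL_TEXTURE_2D, spritesheet);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_DYNAMIC_COPY);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // writes into the bound PBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// Then, for each sprite, source the upload from the PBO instead of client memory.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE,
             (const GLvoid*)(size_t)(4 * (y * width + x))); // byte offset into the PBO
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);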
I have an OpenGL window and a wxWidgets dialog. I want to mirror the OpenGL output to the dialog. So what I intend to do is:
Capture a screenshot of the OpenGL window
Display it in the wxWidgets dialog.
Any idea?
Update: This is how I currently use glReadPixels (I also temporarily use FreeImage to save to a BMP file, but I expect the file saving to be removed if there's a way to channel the pixels directly into the wxImage):
// Make the BYTE array, factor of 3 because it's RGB.
BYTE* pixels = new BYTE[ 3 * width * height];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
// Convert to FreeImage format & save to file
FIBITMAP* image = FreeImage_ConvertFromRawBits(pixels, width, height, 3 * width, 24, 0x0000FF, 0xFF0000, 0x00FF00, false);
FreeImage_Save(FIF_BMP, image, "C:/test.bmp", 0);
// Free memory
FreeImage_Unload(image); // FIBITMAPs must be released with FreeImage_Unload, not delete
delete[] pixels;         // array new needs array delete
// Add Image Support for all types
wxInitAllImageHandlers();
BYTE* pixels = new BYTE[ 3 * width * height];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
// width height pixels alpha
wxImage img(width, height, pixels, NULL); // I am not sure if NULL is permitted for the alpha channel, but you can test that yourself :).
// Second method:
wxImage img(width, height, true);
img.SetData(pixels); // note: SetData expects the buffer to have been allocated with malloc()
You can now use the image for displaying, or save it as JPG, PNG, BMP or whatever you like. For just displaying it in a dialog you don't need to save it to the hard disk, though of course you can.
Just create the image on the heap then.
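A rough sketch of the display part, assuming dialog is your wxDialog* and pixels/width/height come from the glReadPixels call above (wxStaticBitmap is just one way to show it):
wxImage img(width, height, pixels, true); // static_data = true: wxImage won't try to free the buffer
img = img.Mirror(false);                  // glReadPixels rows are bottom-up, so flip vertically
wxBitmap bmp(img);
new wxStaticBitmap(dialog, wxID_ANY, bmp);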
http://docs.wxwidgets.org/stable/wx_wximage.html#wximagector
Hope it helps
I have an image in OpenGL that I am attempting to apply a simple HSB filter to. The user selects a hue value, I shade the image appropriately, display it, and everyone is happy. The problem I am running into is that the code I have inherited that worked on a previous system (Solaris, presuming OpenGL 2.1) does not work on our current system (RHEL 5, OpenGL 3.0).
Right now, the image appears in grey-scale, no matter what saturation is set to. However, brightness does seem to be acting appropriately. The relevant code has been reproduced below:
// imageData - unsigned char[3*width*height]
// (red|green|blue)Channel - unsigned char[width*height]
// brightnessBias - float in range [-1/3,1/3]
// hsMatrix - float[4][4] Described by algorithm from
// http://www.graficaobscura.com/matrix/index.html
// (see Hue Rotation While Preserving Luminance)
glDrawPixels(width, height, format, GL_UNSIGNED_BYTE, imageData);
// Split into RGB channels
glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, redChannel);
glReadPixels(0, 0, width, height, GL_GREEN, GL_UNSIGNED_BYTE, greenChannel);
glReadPixels(0, 0, width, height, GL_BLUE, GL_UNSIGNED_BYTE, blueChannel);
// Redraw and blend RGB channels with scaling and bias
glPixelZoom(1.0, 1.0);
glRasterPos2i(0, height);
glPixelTransferf(GL_RED_BIAS, brightnessBias);
glPixelTransferf(GL_GREEN_BIAS, brightnessBias);
glPixelTransferf(GL_BLUE_BIAS, brightnessBias);
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][0]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][0]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][0]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, redChannel);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][1]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][1]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][1]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, greenChannel);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][2]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][2]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][2]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, blueChannel);
// Reset pixel transfer parameters
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, 1.0f);
glPixelTransferf(GL_GREEN_SCALE, 1.0f);
glPixelTransferf(GL_BLUE_SCALE, 1.0f);
glPixelTransferf(GL_RED_BIAS, 0.0f);
glPixelTransferf(GL_GREEN_BIAS, 0.0f);
glPixelTransferf(GL_BLUE_BIAS, 0.0f);
The brightness control works as intended; however, when the glPixelTransferf(GL_*_SCALE) calls are left in, the image is displayed in greyscale. Compounding all of this is the fact that I have no prior experience with OpenGL, so I keep finding links to what I presume are more modern techniques that I simply can't make sense of.
EDIT:
I believe the theory behind the original code was to perform the matrix multiplication through the draw calls, because GL_LUMINANCE uses the single value for all three components. If you follow the components through the drawing, you expect:
// After glDrawPixels(..., redChannel)
new_red = red*hsMatrix[0][0]
new_green = red*hsMatrix[1][0]
new_blue = red*hsMatrix[2][0]
// After glDrawPixels(..., greenChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1]
// After glDrawPixels(..., blueChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1] + blue*hsMatrix[0][2]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1] + blue*hsMatrix[1][2]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1] + blue*hsMatrix[2][2]
Because it was turning out greyscale anyway, and based on a similar example, I thought I might need to make the glPixelTransfer calls before calling glDrawPixels, but that was amazingly slow.
Wow, what the hell is that?!
For your question: I'd replace GL_LUMINANCE in your three glDrawPixels calls with GL_RED, GL_GREEN and GL_BLUE respectively.
However:
glPixelTransfer is bad
glDrawPixels is bad
Is there a single reason why you're not using a super-simple fragment shader to do the conversion? It's a simple matrix multiplication, and you're on OpenGL 3.0...
Create a texture from imageData; this needs to be done only once.
Make a shader that reads the color from the texture, multiplies it by the color conversion matrix, and outputs it (see the sketch below).
Bind the computed color matrix as a uniform.
Draw a fullscreen quad. Even a 5-year-old card will get 500 fps out of this.
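For instance, a minimal fragment shader for that step could look like this, embedded as a C++ string (the uniform names tex, hsMatrix and brightnessBias are just placeholders mirroring the variables above, not a fixed API):
// GLSL 1.30 matches OpenGL 3.0.
static const char *kHsbFragmentShaderSrc =
    "#version 130\n"
    "uniform sampler2D tex;\n"
    "uniform mat3 hsMatrix;\n"        // upper-left 3x3 of the hue/saturation matrix
    "uniform float brightnessBias;\n"
    "in vec2 uv;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    vec3 rgb = texture(tex, uv).rgb;\n"
    "    fragColor = vec4(hsMatrix * rgb + vec3(brightnessBias), 1.0);\n"
    "}\n";
The vertex shader only needs to pass the texture coordinate through; the matrix itself comes from the same graficaobscura algorithm already referenced above.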