I am trying to load a height map for terrain using SOIL. I use the following code:
unsigned char* image = SOIL_load_image(fname.c_str(), &width, &height, 0, SOIL_LOAD_L);
glBindTexture(GL_TEXTURE_2D, name);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, image);
However, my terrain looks stair-stepped because the height is expressed with only 8 bits. How can I load a 16-bit height map with SOIL? Or should I use another image library for this task?
As Andon M. Coleman recommended, I switched to a raw binary format for the 16-bit height data and got the required result.
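For reference, here is a minimal sketch of that approach (the file name, dimensions and the tightly packed, little-endian 16-bit layout are assumptions; the GL_R16/GL_RED upload path requires desktop OpenGL 3.0 or later):
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Load a raw, tightly packed, little-endian 16-bit height map (assumed layout).
std::vector<uint16_t> LoadRawHeights(const std::string& fname, int width, int height)
{
    std::vector<uint16_t> heights(static_cast<size_t>(width) * height);
    std::ifstream file(fname, std::ios::binary);
    file.read(reinterpret_cast<char*>(heights.data()), heights.size() * sizeof(uint16_t));
    return heights;
}

// Upload as a single-channel 16-bit texture instead of GL_LUMINANCE:
// glBindTexture(GL_TEXTURE_2D, name);
// glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, width, height, 0,
//              GL_RED, GL_UNSIGNED_SHORT, heights.data());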
I am trying to create a texture to display. I have a w×h array in which each pixel is 1 byte. I have looked at Can I use a grayscale image with the OpenGL glTexImage2D function?, but I am not sure how to implement it with current OpenGL. It looks like GL_LUMINANCE is deprecated and I need to handle the single channel myself. I am not sure how I should approach this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
I tried changing GL_RGBA to other formats like GL_R from https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml, but I still cannot get the image to display. Does anyone have any suggestions?
If you have a source texture with 1 color channel, then you can use the format GL_RED and the base internal format GL_RED:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, image_width, image_height,
0, GL_RED, GL_UNSIGNED_BYTE, image_data);
Set the texture parameters GL_TEXTURE_SWIZZLE_G and GL_TEXTURE_SWIZZLE_B (see glTexParameteri) so that the green and blue components are also read from the red color channel:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
Note that GL_UNPACK_ALIGNMENT possibly has to be set to 1 when the image is loaded into the texture object:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, ...);
By default the parameter is 4, which means each row of the image is assumed to be aligned to a size that is a multiple of 4 bytes. If the image data is tightly packed, the alignment has to be changed.
If you use a shader program, the same can be achieved by swizzling in the shader, e.g.:
vec3 color = texture(u_texture, uv).rrr;
I have a 3D graphics application that is exhibiting bad texturing behavior (specifically, a particular texture shows up as black when it shouldn't). I have isolated the texture data to the following call:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, fmt->gl_type, data)
I've inspected all of the values in the call and have verified they aren't NULL. Is there a way to use all of this data to save a bitmap/PNG/some viewable format to the (Linux) filesystem, so that I can inspect the texture and verify it isn't black or some sort of garbage? In case it matters, I'm using OpenGL ES 2.0 (GLES2).
If you want to read the pixels from a texture image in OpenGL ES, then you have to attach the texture to a framebuffer and read the color plane from the framebuffer with glReadPixels:
GLuint textureObj = ...; // the texture object - glGenTextures
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureObj, 0);
int data_size = width * height * 4; // 4 bytes per RGBA pixel
GLubyte* pixels = new GLubyte[data_size];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
All functions used in this code snippet are supported by OpenGL ES 2.0.
Note that in desktop OpenGL there is glGetTexImage, which can be used to read pixel data from a texture. This function doesn't exist in OpenGL ES.
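For completeness, a minimal desktop-OpenGL sketch of that alternative (not valid in OpenGL ES):
// Desktop OpenGL only: read the texture image back directly, without a framebuffer.
glBindTexture(GL_TEXTURE_2D, textureObj);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);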
To write an image to a file (in C++), I recommend using a library such as stb, which can be found at GitHub - nothings/stb.
To use the stb image writer it is sufficient to include the header file (it is not necessary to link anything):
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include <stb_image_write.h>
Use stbi_write_bmp to write a BMP file:
stbi_write_bmp( "myfile.bmp", width, height, 4, pixels );
Note that it is also possible to write other file formats with stbi_write_png, stbi_write_tga or stbi_write_jpg.
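For reference, a short sketch of those writers; the extra stride and quality arguments are the only differences from stbi_write_bmp:
// PNG additionally takes the row stride in bytes (width * 4 for tightly packed RGBA).
stbi_write_png("myfile.png", width, height, 4, pixels, width * 4);
// TGA has the same signature as the BMP writer.
stbi_write_tga("myfile.tga", width, height, 4, pixels);
// JPG takes a quality setting (1-100) instead of a stride.
stbi_write_jpg("myfile.jpg", width, height, 4, pixels, 90);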
I am trying to display a JPG texture using OpenGL, but I have some problems. This is the important part of my code:
unsigned char* data = stbi_load("resources/triangle.jpg", &width, &height, &nrChannels, 0);
if (data)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D);
}
The JPG file that I am trying to load can be downloaded here. It works with certain JPG files but not this one, so it is clearly something regarding the formatting - but what and why?
This is how the texture is displayed
It works with certain JPG files but not this one, so it is clearly something regarding the formatting - but what and why?
By default OpenGL assumes that the size of each row of an image is aligned to 4 bytes.
This is because the GL_UNPACK_ALIGNMENT parameter is 4 by default.
Since the image has 3 color channels (because it is a JPG) and is tightly packed, the size of a row may not be a multiple of 4 bytes. Note that if the width of the image were 4, each row would be aligned to 4 bytes, because 4 * 3 = 12 bytes. But if the width were 5, it wouldn't be, because 5 * 3 = 15 bytes.
This causes the rows of the image to appear misplaced. Set GL_UNPACK_ALIGNMENT to 1 to solve the issue:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
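To make the row arithmetic above concrete, here is a hypothetical helper (PaddedRowSize is not part of any library) that computes the row size OpenGL assumes for a given unpack alignment:
// Row size OpenGL expects for a given GL_UNPACK_ALIGNMENT.
int PaddedRowSize(int width, int channels, int alignment)
{
    int rowBytes = width * channels;                              // tightly packed row
    return ((rowBytes + alignment - 1) / alignment) * alignment;  // round up to the alignment
}
// PaddedRowSize(5, 3, 4) == 16, but stb_image supplies only 15 bytes per row,
// which is why the rows drift unless GL_UNPACK_ALIGNMENT is set to 1.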
Further note that you are assuming the image has 3 color channels, because of the GL_RGB format parameter in glTexImage2D. In this case that works, because of the JPG file format.
stbi_load returns the number of color channels contained in the image buffer (nrChannels).
Respect it by using either GL_RGB or GL_RGBA for the format parameter, for example like this:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
nrChannels == 3 ? GL_RGB : GL_RGBA,
GL_UNSIGNED_BYTE, data);
I have a sprite_sheet (example sheet):
I loaded it as vector data and I need to make a new texture from a specified area, like this:
const std::vector <char> image_data = LoadPNG("sprite_sheet.png"); // This works.
glMakeTexture([0, 0], [50, 50], configs, image_data) //to display only one sqm.
But I don't know how, because most GL functions only work on the full image area, like glTexImage2D:
// Only have width and height; the bottom-left coords are 0, 0 -- I want to change this.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &(_sprite_sheet.data[0]));
Is there a way to do that without loading the full sprite_sheet as a texture?
Note: I'm using picoPNG to decode the PNG, and I CAN load the PNG, but not make a texture from a specified area.
Because you show no loading code, I assume the following:
char *data; // data of 8-bit per sample RGBA image
int w, h; // size of the image
int sx, sy, sw, sh; // position and size of area you want to crop.
glTexImage2D does support uploading just a region of interest of the client-side image, via the pixel store parameters. You do it as follows:
glPixelStorei(GL_UNPACK_ROW_LENGTH, w);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, sx);
glPixelStorei(GL_UNPACK_SKIP_ROWS, sy);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // picoPNG output is tightly packed
glTexImage2D(GL_TEXTURE_2D, level, internalFormat, sw, sh, border, format, type, data);
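One caveat worth adding: these unpack parameters are global state, so it is advisable to reset them to their defaults after the upload, otherwise later texture uploads would also be cropped:
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);   // 0 means "use the width passed to glTexImage2D"
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);    // restore the default alignment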
I have an OpenGL application that creates a texture from an unsigned char* buffer, and I need to save this texture to an image file, but I don't know how to do it. Can someone help me?
This is the declaration of the texture buffer:
static unsigned char* pDepthTexBuf;
and this is the code that uses it:
glBindTexture(GL_TEXTURE_2D, depthTexID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texWidth, texHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, pDepthTexBuf);
But how can I save this texture buffer "pDepthTexBuf" to an image file?
This is a very complicated question... I suggest referring to other public examples, like this one: http://www.andrewewhite.net/wordpress/2008/09/02/very-simple-jpeg-writer-in-c-c/
Basically, you need to integrate an image library, and then use whatever hooks it supports to save your data.
The simplest approach is probably to use a library like OpenCV, which has some very easy-to-use mechanisms for turning byte arrays of RGB data into image files.
You can see an example of reading the OpenGL image buffer and storing it as a PNG file here. Saving a JPG may be as simple as changing the extension of the output file.
// Create an OpenCV matrix of the appropriate size and depth
cv::Mat img(windowSize.y, windowSize.x, CV_8UC3);
glPixelStorei(GL_PACK_ALIGNMENT, (img.step & 3) ? 1 : 4);
glPixelStorei(GL_PACK_ROW_LENGTH, img.step / img.elemSize());
// Fetch the pixels as BGR byte values
glReadPixels(0, 0, img.cols, img.rows, GL_BGR, GL_UNSIGNED_BYTE, img.data);
// Image files use Y = down, so we need to flip the image on the X axis
cv::flip(img, img, 0);
static int counter = 0;
static char buffer[128];
sprintf(buffer, "screenshot%05i.png", counter++);
// write the image file
bool success = cv::imwrite(buffer, img);
if (!success) {
    throw std::runtime_error("Failed to write image");
}
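In this particular case the pixel data (pDepthTexBuf) is already in client memory, so glReadPixels is not strictly needed; a hedged sketch that writes the buffer directly with OpenCV (assuming tightly packed RGB data of size texWidth * texHeight):
// Wrap the existing buffer without copying; cv::imwrite expects BGR channel order.
cv::Mat img(texHeight, texWidth, CV_8UC3, pDepthTexBuf);
cv::Mat bgr;
cv::cvtColor(img, bgr, cv::COLOR_RGB2BGR);
cv::imwrite("depth_texture.png", bgr);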