OK, so I need to create my own texture/image data and then display it on a quad in OpenGL. I have the quad working, and I can display a TGA file onto it with my own texture loader; it maps to the quad perfectly.
But how do I create my own "homemade image" that is 1000x1000 with 3 channels (RGB values) per pixel? What is the format of the texture array, and how do I, for example, set pixel (100,100) to black?
This is how I would imagine it for a completely white image/texture:
#define SCREEN_WIDTH 1000
#define SCREEN_HEIGHT 1000
unsigned int* texdata = new unsigned int[SCREEN_HEIGHT * SCREEN_WIDTH * 3];
for(int i=0; i<SCREEN_HEIGHT * SCREEN_WIDTH * 3; i++)
texdata[i] = 255;
GLuint t = 0;
glEnable(GL_TEXTURE_2D);
glGenTextures( 1, &t );
glBindTexture(GL_TEXTURE_2D, t);
// Set parameters to determine how the texture is resized
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_MIN_FILTER , GL_LINEAR_MIPMAP_LINEAR );
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_MAG_FILTER , GL_LINEAR );
// Set parameters to determine how the texture wraps at edges
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_WRAP_S , GL_REPEAT );
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_WRAP_T , GL_REPEAT );
// Upload the texture data to the GPU
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0,
GL_RGB, GL_UNSIGNED_BYTE, texdata);
glGenerateMipmap(GL_TEXTURE_2D);
EDIT: The answers below are correct, but I also found that OpenGL doesn't handle the plain ints I was using; it works fine with uint8_t. I assume that's because I upload with GL_RGB and GL_UNSIGNED_BYTE, which expects 8 bits per channel, and a plain int is not 8 bits.
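For reference, a minimal corrected sketch of the allocation above (everything else in the setup stays the same); one byte per channel is what GL_RGB + GL_UNSIGNED_BYTE expects:
#include <cstdint>
std::uint8_t* texdata = new std::uint8_t[SCREEN_HEIGHT * SCREEN_WIDTH * 3];
for(int i=0; i<SCREEN_HEIGHT * SCREEN_WIDTH * 3; i++)
texdata[i] = 255; // fully white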
But how do I create my own "homemade image", that is 1000x1000 and 3 channels (RGB values) for each pixel?
std::vector< unsigned char > image( 1000 * 1000 * 3 /* bytes per pixel */ );
What is the format of the texture array
Red byte, then green byte, then blue byte. Repeat.
how do I for example set pixel (100,100) to black?
unsigned int width = 1000;
unsigned int x = 100;
unsigned int y = 100;
unsigned int location = ( x + ( y * width ) ) * 3;
image[ location + 0 ] = 0; // R
image[ location + 1 ] = 0; // G
image[ location + 2 ] = 0; // B
Upload via:
// the rows in the image array don't have any padding
// so set GL_UNPACK_ALIGNMENT to 1 (instead of the default of 4)
// https://www.khronos.org/opengl/wiki/Pixel_Transfer#Pixel_layout
glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
glTexImage2D
(
GL_TEXTURE_2D, 0,
GL_RGB, 1000, 1000, 0,
GL_RGB, GL_UNSIGNED_BYTE, &image[0]
);
By default, OpenGL assumes that each row of a texture is aligned to 4 bytes.
The texture here is an RGB texture, which needs 24 bits (3 bytes) per texel, and its rows are tightly packed.
This means the 4-byte alignment for the start of each row is violated (unless 3 times the width of the texture happens to be divisible by 4 without a remainder).
To deal with that, the alignment has to be changed to 1.
This means the GL_UNPACK_ALIGNMENT parameter has to be set before uploading a tightly packed texture to the GPU (glTexImage2D):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
Otherwise each row is read with an extra offset of 0-3 bytes, which shows up at texture lookup as a continuously skewed or tilted texture.
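A quick worked example of that rule (the 1001-pixel width is just for illustration):
bytes per row = width * 3 (tightly packed RGB)
width = 1000 -> 3000 bytes, a multiple of 4, so the default alignment of 4 happens to work
width = 1001 -> 3003 bytes, but with the default alignment OpenGL assumes each row starts 3004 bytes after the previous one, so the rows are read with a growing offset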
Since you use the source format GL_RGB with type GL_UNSIGNED_BYTE, each pixel consists of 3 color channels (red, green and blue), and each color channel is stored in one byte in the range [0, 255].
If you want to set the pixel at (x, y) to the color (R, G, B), then this is done like this:
texdata[(y*WIDTH+x)*3+0] = R;
texdata[(y*WIDTH+x)*3+1] = G;
texdata[(y*WIDTH+x)*3+2] = B;
Related
When I rasterize out a font, my code gives me a single channel of visibility for a texture. Currently, I just duplicate this out to 4 different channels and send that as a texture. This works, but I want to avoid unnecessary memory allocations and de-allocations on the CPU.
unsigned char *bitmap = new unsigned char[width*height]; // How this is populated is not the point.
bitmap now contains a 2D graphic.
It seems this guy also has the same problem: Opengl: Use single channel texture as alpha channel to display text
I do the same thing as a workaround for now: I just multiply the array size by 4 and copy the data into it 4 times.
unsigned char* colormap = new unsigned char[width * height * 4];
int offset = 0;
for (int d = 0; d < width * height;d++)
{
for (int i = 0;i < 4;i++)
{
colormap[offset++] = bitmap[d];
}
}
When I multiply it out, I use:
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(gltype, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, colormap);
And get:
Which is what I want.
When I use only the single channel:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
And get:
It has no transparency and is only red, etc., which makes it hard to colorize and so on later.
Instead of having to do what I feel are unnecessary allocations on the CPU side, I'd like to tell OpenGL: "Hey, you're getting just one channel. Multiply it out for all 4 color channels."
Is there a command for that?
In your shader, it's trivial enough to just broadcast the r component to all four channels:
vec4 vals = texture(tex, coords).rrrr;
If you don't want to modify your shader (perhaps because you need to use the same shader for 4-channel textures too), then you can apply a texture swizzle mask to the texture:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Anything that reads from that texture will then get the value of its red component in all four channels.
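Putting the two together, a sketch of the single-channel upload with the swizzle applied (using the bitmap, width and height variables from the question; GL_TEXTURE_SWIZZLE_RGBA requires OpenGL 3.3 or ARB_texture_swizzle):
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// single-channel rows are not necessarily 4-byte aligned
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// replicate the red channel into G, B and A at sampling time
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
// upload the one-channel bitmap unchanged
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);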
For the gui for my game, I have a custom texture object that stores the rgba data for a texture. Each GUI element registered by my game adds to the final GUI texture, and then that texture is overlayed onto the framebuffer after post-processing.
I'm having trouble converting my Texture object to an OpenGL texture.
First I create a 1D int array that goes rgbargbargba... etc.
public int[] toIntArray(){
int[] colors = new int[(width*height)*4];
int i = 0;
for(int y = 0; y < height; ++y){
for(int x = 0; x < width; ++x){
colors[i] = r[x][y];
colors[i+1] = g[x][y];
colors[i+2] = b[x][y];
colors[i+3] = a[x][y];
i += 4;
}
}
return colors;
}
Where r, g, b, and a are jagged int arrays holding values from 0 to 255. Next I create the int buffer and the texture.
id = glGenTextures();
glBindTexture(GL_TEXTURE_2D, id);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
IntBuffer iBuffer = BufferUtils.createIntBuffer(((width * height)*4));
int[] data = toIntArray();
iBuffer.put(data);
iBuffer.rewind();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_INT, iBuffer);
glBindTexture(GL_TEXTURE_2D, 0);
After that I add a 50x50 red square into the upper left of the texture, and bind the texture to the framebuffer shader and render the fullscreen rect that displays my framebuffer.
frameBuffer.unbind(window.getWidth(), window.getHeight());
postShaderProgram.bind();
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, guiManager.texture()); // this gets the texture id that was created
postShaderProgram.setUniform("gui_texture", 1);
mesh.render();
postShaderProgram.unbind();
And then in my fragment shader, I try displaying the GUI:
#version 330
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D texFramebuffer;
uniform sampler2D gui_texture;
void main()
{
outColor = texture(gui_texture, Texcoord);
}
But all it outputs is a black window!
I added a red 50x50 rectangle into the upper left corner and verified that it exists, but for some reason it isn't showing in the final output.
That gives me reason to believe that I'm not converting my texture into an OpenGL texture with glTexImage2D correctly.
Can you see anything I'm doing wrong?
Update 1:
Here I saw them doing a similar thing using a float array, so I tried converting my 0-255 values to 0-1 floats and passing them as the image data like so:
public float[] toFloatArray(){
float[] colors = new float[(width*height)*4];
int i = 0;
for(int y = 0; y < height; ++y){
for(int x = 0; x < width; ++x){
colors[i] = (( r[x][y] * 1.0f) / 255);
colors[i+1] = (( g[x][y] * 1.0f) / 255);
colors[i+2] = (( b[x][y] * 1.0f) / 255);
colors[i+3] = (( a[x][y] * 1.0f) / 255);
i += 4;
}
}
return colors;
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, toFloatArray());
And it works!
I'm going to leave the question open however as I want to learn why the int buffer wasn't working :)
When you specify GL_UNSIGNED_INT as the type of the "host" data, OpenGL expects 32 bits for each color channel. Since OpenGL maps the output colors in the default framebuffer to the range [0.0f, 1.0f], it takes your input color values (which are in the range [0, 255]) and divides them by the maximum value of an unsigned 32-bit integer (about 4.2 billion) to get the final color displayed on screen. As an exercise, using your original code, set the "clear" color of the screen to white, and see that a black rectangle is drawn on screen.
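To put a number on that: even the brightest channel value in the array, 255, normalizes to 255 / 4294967295 ≈ 0.00000006, which is indistinguishable from 0.0, hence the black output.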
You have two options. The first is to scale the color values up to the range of GL_UNSIGNED_INT, which means multiplying each color value by Math.pow(2, 24), and trusting that the integer overflow of multiplying by that value will behave correctly (since Java doesn't have unsigned integer types).
The other, far safer option, is to store each 0-255 value in a byte[] object (do not use char. char is 1 byte in C/C++/OpenGL, but is 2 bytes in Java) and specify the type of the elements as GL_UNSIGNED_BYTE.
I have textured a sphere with a .bmp image. The problem is that when the image is mapped onto the sphere, the colors look swapped: RED becomes BLUE and BLUE becomes RED.
I have tried using GL_BGR instead of GL_RGB, but it's no use.
Do I have to change the code for loading the image? It produces a warning for the use of the fopen() function, but I don't think that's relevant to what I am asking.
This is the image I get after mapping: (textured sphere with inverted colors)
This is what I have tried for loading the image, plus some texture setup:
GLuint LoadTexture( const char * filename, int width, int height )
{
GLuint texture;
unsigned char * data;
FILE * file;
//The following code will read in our RAW file
file = fopen( filename, "rb" );
if ( file == NULL ) return 0;
data = (unsigned char *)malloc( width * height * 3 );
fread( data, width * height * 3, 1, file );
fclose( file );
glGenTextures( 1, &texture ); // generate the texture with the loaded data
glBindTexture( GL_TEXTURE_2D, texture ); // bind the texture to its handle
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE ); // set texture environment parameters
// better quality
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR ); // the mag filter only accepts GL_NEAREST or GL_LINEAR
// Here we are setting the parameters to repeat the texture instead of clamping it
// to the edge of our shape.
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
// Generate the texture with mipmaps
gluBuild2DMipmaps( GL_TEXTURE_2D, 3, width, height, GL_RGB, GL_UNSIGNED_BYTE, data );
free( data ); // free the CPU-side image data
return texture; // return the texture handle
}
void FreeTexture( GLuint texture )
{
glDeleteTextures( 1, &texture );
}
A BMP file starts with a BITMAPFILEHEADER struct, which contains (amongst other things) the offset to the actual start of the bits in the file.
So you could do something like this to get to the bits of the BMP file.
BITMAPFILEHEADER bmpFileHeader;
fread(&bmpFileHeader, sizeof(bmpFileHeader), 1, file);
fseek(file, bmpFileHeader.bfOffBits, SEEK_SET);
size_t bufferSize = width * height * 3;
fread(data, bufferSize, 1, file);
Of course, this is dangerous as you are expecting a properly sized and formatted BMP file. So you really need to read the BITMAPINFO too.
BITMAPFILEHEADER bmpFileHeader;
fread(&bmpFileHeader, sizeof(bmpFileHeader), 1, file);
BITMAPINFO bmpInfo;
fread(&bmpInfo, sizeof(bmpInfo), 1, file);
fseek(file, bmpFileHeader.bfOffBits, SEEK_SET);
int width = bmpInfo.bmiHeader.biWidth;
int height = bmpInfo.bmiHeader.biHeight;
assert(bmpInfo.bmiHeader.biCompression == BI_RGB);
assert(bmpInfo.bmiHeader.biBitCount == 24);
size_t bufferSize = width * height * 3;
fread(data, bufferSize, 1, file);
You can obviously make this increasingly sophisticated in mapping BMP formats to the formats OpenGL accepts.
Additional complications will be:
the BMP data is stored bottom-up, so the image will appear upside down unless you correct for that.
the pixel data in a BMP file is padded so that each row is a multiple of 4 bytes wide, which can cause stride issues if your image is not either 32 bits per pixel or a multiple of 4 pixels wide (which in practice is usually true); see the sketch below.
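A hedged sketch of handling both points (it assumes a 24-bit BI_RGB bitmap and reuses the file, width, height variables from above; <vector> and <cstring> provide std::vector and std::memcpy):
// each BMP row is padded up to a multiple of 4 bytes
size_t rowSize = ((width * 3 + 3) / 4) * 4;
std::vector<unsigned char> fileRows(rowSize * height);
fread(fileRows.data(), fileRows.size(), 1, file);
// repack into a tightly packed, top-down buffer for gluBuild2DMipmaps / glTexImage2D
std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 3);
for (int y = 0; y < height; ++y)
{
    // BMP rows are stored bottom-up, so flip vertically while copying
    const unsigned char* src = &fileRows[static_cast<size_t>(height - 1 - y) * rowSize];
    std::memcpy(&pixels[static_cast<size_t>(y) * width * 3], src, static_cast<size_t>(width) * 3);
}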
Just because you can feed a constant to OpenGL does not mean the format is actually supported. Sometimes you just have to get in there and re-order the bytes yourself.
Finally I got what I needed. I was using the wrong argument, GL_RGB; I replaced it with GL_BGR_EXT in this call:
gluBuild2DMipmaps( GL_TEXTURE_2D, 3, width, height, GL_BGR_EXT, GL_UNSIGNED_BYTE, data );
Previously I was getting this: (textured sphere with inverted colors)
Now, after the change above, I get this: (textured sphere with true colors)
I have a set of X,Y,Z values on a regularly spaced grid from which I need to create a color-filled contour plot using C++. I've been googling this for days, and the consensus appears to be that this is achievable using a 1D texture map in OpenGL. However, I have not found a single example of how to actually do this, and I'm not getting anywhere just reading the OpenGL documentation. My confusion comes down to one core question:
My data does not contain an X,Y value for every pixel - it's a regularly spaced grid with data every 4 units on the X and Y axis, with a positive integer Z value.
For example: (0, 0, 1), (4, 0, 1), (8, 0, 2), (0, 4, 2), (0, 8, 4), (4, 4, 3), etc.
Since the contours would be based on the Z value and there are gaps between data points, how does applying a 1D texture achieve contouring this data (i.e. how does applying a 1D texture interpolate between grid points?)
The closest I've come to finding an example of this is in the online version of the Redbook (http://fly.cc.fer.hr/~unreal/theredbook/chapter09.html) in the teapot example but I'm assuming that teapot model has data for every pixel and therefore no interpolation between data points is needed.
If anyone can shed light on my question or better yet point to a concrete example of working with a 1D texture map in this way I'd be forever grateful as I've burned 2 days on this project with little to show for it.
EDIT:
The following code is what I'm using and while it does display the points in the correct location there is no interpolation or contouring happening - the points are just displayed as, well, points.
//Create a 1D image - for this example it's just a red line
const int stripeImageWidth = 32;
GLubyte stripeImage[3*stripeImageWidth];
for (int j = 0; j < stripeImageWidth; j++) {
stripeImage[3*j] = j < 2 ? 0 : 255;
stripeImage[3*j+1] = 255;
stripeImage[3*j+2] = 255;
}
glDisable(GL_TEXTURE_2D);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage1D(GL_TEXTURE_1D, 0, 3, stripeImageWidth, 0, GL_RGB, GL_UNSIGNED_BYTE, stripeImage);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR );
float s[4] = { 0,1,0,0 };
glTexGenfv( GL_S, GL_OBJECT_PLANE, s );
glEnable( GL_TEXTURE_GEN_S );
glEnable( GL_TEXTURE_1D );
glBegin(GL_POINTS);
//_coords contains X,Y,Z data - Z is the value that I'm trying to contour
for (int x = 0; x < _coords.size(); ++x)
{
glTexCoord1f(static_cast<ValueCoord*>(_coords[x])->GetValue());
glVertex3f(_coords[x]->GetX(), _coords[x]->GetY(), zIndex);
}
glEnd();
The idea is to use the Z coordinate as the S coordinate into the texture. The linear interpolation over the texture coordinate then creates the contour. Note that with a shader you can put the XY->Z data into a 2D texture and have the shader use the value sampled from the 2D texture as an index into the color ramp of the 1D texture.
Update: Code example
First we need to change the way you use textures a bit.
Do this to prepare the texture:
// Create a 1D image - for this example it's a color gradient
const int stripeImageWidth = 32;
GLubyte stripeImage[3*stripeImageWidth];
for (int j = 0; j < stripeImageWidth; j++) {
stripeImage[3*j] = j*255/32; // use a gradient instead of a line
stripeImage[3*j+1] = 255;
stripeImage[3*j+2] = 255;
}
GLuint texID;
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_1D, texID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage1D(GL_TEXTURE_1D, 0, 3, stripeImageWidth, 0, GL_RGB, GL_UNSIGNED_BYTE, stripeImage);
// We want the texture to wrap, so that values outside the range [0, 1]
// are mapped into a gradient sawtooth
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
And this to bind it for usage.
// The texture coordinate comes from the data; it is not
// generated from the vertex position!
glDisable( GL_TEXTURE_GEN_S );
glDisable(GL_TEXTURE_2D);
glEnable( GL_TEXTURE_1D );
glBindTexture(GL_TEXTURE_1D, texID);
Now to your conceptual problem: you cannot directly make a contour plot from XYZ data. The XYZ triples are just sparse sampling points. You need to fill the gaps, for example by putting them into a 2D histogram first. For this, create a grid with a certain number of bins in each direction, initialized to all NaN (pseudocode):
float hist2D[bins_x][bins_y] = {NaN, NaN, ...}
then, for each XYZ, add the Z value to the corresponding bin of the grid if the bin is not NaN, otherwise replace the NaN with the Z value. Afterwards, run a Laplace filter over the histogram to smooth out the bins still containing a NaN. Finally you can render the grid as a contour plot using:
glBegin(GL_QUADS);
// step by 1 so that adjacent quads share edges and the grid is covered without gaps
for(int y=0; y<grid_height-1; y++) for(int x=0; x<grid_width-1; x++) {
    glTexCoord1f(hist2D[x  ][y  ]); glVertex2i(x  , y  );
    glTexCoord1f(hist2D[x+1][y  ]); glVertex2i(x+1, y  );
    glTexCoord1f(hist2D[x+1][y+1]); glVertex2i(x+1, y+1);
    glTexCoord1f(hist2D[x  ][y+1]); glVertex2i(x  , y+1);
}
glEnd();
or you could upload the grid as a 2D texture and use a fragment shader to indirect into the color ramp.
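A rough sketch of the binning step described above, reusing the _coords container and ValueCoord type from the question (the bin counts and the min_x/max_x/min_y/max_y extents are assumptions you would compute from your own data):
#include <cmath>
#include <limits>
#include <vector>
const int bins_x = 64, bins_y = 64;
std::vector<float> hist2D(bins_x * bins_y, std::numeric_limits<float>::quiet_NaN());
for (size_t i = 0; i < _coords.size(); ++i)
{
    // map the XY position to a bin index
    int bx = static_cast<int>((_coords[i]->GetX() - min_x) / (max_x - min_x) * (bins_x - 1));
    int by = static_cast<int>((_coords[i]->GetY() - min_y) / (max_y - min_y) * (bins_y - 1));
    float z = static_cast<ValueCoord*>(_coords[i])->GetValue();
    float& bin = hist2D[by * bins_x + bx];
    bin = std::isnan(bin) ? z : bin + z; // replace the NaN, otherwise accumulate
}
// ...then smooth the bins that are still NaN (the Laplace filter mentioned above)
// and feed the bin values to glTexCoord1f when drawing the grid.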
Another way to fill the gaps in sparse XYZ data is to compute the 2D Voronoi diagram of the XY set and use it to create the sampling geometry. The Z value for each vertex would then be the distance-weighted average of the XYZ samples whose Voronoi cells meet there.
I'm loading custom data into a 2D texture with internal format GL_RGBA16F:
glActiveTexture(GL_TEXTURE0);
int Gx = 128;
int Gy = 128;
GLuint grammar;
glGenTextures(1, &grammar);
glBindTexture(GL_TEXTURE_2D, grammar);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16F, Gx, Gy);
float* grammardata = new float[Gx * Gy * 4](); // set default to zero
*(grammardata) = 1;
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,Gx,Gy,GL_RGBA,GL_FLOAT,grammardata);
int grammarloc = glGetUniformLocation(p_myGLSL->getProgramID(), "grammar");
if (grammarloc < 0) {
printf("grammar missing!\n");
exit(0);
}
glUniform1i(grammarloc, 0);
When I read the value of uniform sampler2D grammar in GLSL, it returns 0.25 instead of 1. How do I fix the scaling problem?
if (texture(grammar, vec2(0,0)).r == 0.25) {
FragColor = vec4(0,1,0,1);
} else
{
FragColor = vec4(1,0,0,1);
}
By default texture interpolation is set to the following values:
GL_TEXTURE_MIN_FILTER = GL_NEAREST_MIPMAP_LINEAR,
GL_TEXTURE_MAG_FILTER = GL_LINEAR
GL_WRAP[R|S|T] = GL_REPEAT
This means that in cases where the mapping between texels of the texture and pixels on the screen does not fit exactly, the hardware will interpolate for you. There can be two cases:
The texture is displayed smaller than it actually is: in this case interpolation is performed between two mipmap levels. If no mipmaps are generated, these are treated as being 0, which could lead to 0.25.
The texture is displayed larger than it actually is (and I think this is the case here): here the hardware does not interpolate between mipmap levels, but between adjacent texels of the texture. The problem now comes from the fact that (0,0) in texture coordinates is NOT the center of texel [0,0], but its lower-left corner.
Have a look at the following drawing, which illustrates how texture coordinates are defined (here with 4 texels)
tex-coord: 0 0.25 0.5 0.75 1
texels |-----0-----|-----1-----|-----2-----|-----3-----|
As you can see, 0 lies on the boundary of a texel, while the first texel's center is at 1/(2 * |texels|).
This means for you that, with the wrap mode set to GL_REPEAT, texture coordinate (0,0) will interpolate uniformly between the texels [0,0], [-1,0], [-1,-1] and [0,-1]. Since -1 == 127 (due to repeat) and everything except [0,0] is 0, this results in
([0,0] + [-1,0] + [-1,-1] + [0,-1]) / 4 = (1 + 0 + 0 + 0) / 4 = 0.25
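If the goal is to read back exactly the stored texel values rather than a filtered average, one option (a sketch applied to the grammar texture from the question) is to turn off filtering and wrapping; alternatively, texelFetch(grammar, ivec2(0,0), 0) in the shader fetches texel [0,0] directly without any filtering:
glBindTexture(GL_TEXTURE_2D, grammar);
// only one level was allocated by glTexStorage2D, so don't minify across mipmap levels
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// don't average border texels with their wrapped-around neighbours
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);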