OpenGL Texture messed up

Here is my code to load a texture. I have tried to load a GIF file using this example. Can GIF files be loaded, or can only raw files be loaded?
void setUpTextures()
{
    printf("Set up Textures\n");
    // This is the array that will contain the image color information.
    // 3 represents red, green and blue color info.
    // 512 is the height and width of texture.
    unsigned char earth[512 * 512 * 3];
    // This opens your image file.
    FILE* f = fopen("/Users/Raaj/Desktop/earth.gif", "r");
    if (f){
        printf("file loaded\n");
    }else{
        printf("no load\n");
        fclose(f);
        return;
    }
    fread(earth, 512 * 512 * 3, 1, f);
    fclose(f);
    glEnable(GL_TEXTURE_2D);
    // Here 1 is the texture id
    // The texture id is different for each texture (duh?)
    glBindTexture(GL_TEXTURE_2D, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    // In this line you only supply the last argument which is your color info array,
    // and the dimensions of the texture (512)
    glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, earth);
    glDisable(GL_TEXTURE_2D);
}

void Draw()
{
    glEnable(GL_TEXTURE_2D);
    // Here you specify WHICH texture you will bind to your coordinates.
    glBindTexture(GL_TEXTURE_2D, 1);
    glColor3f(1, 1, 1);
    double n = 6;
    glBegin(GL_QUADS);
    glTexCoord2d(0, 50);  glVertex2f( n/2,  n/2);
    glTexCoord2d(50, 0);  glVertex2f( n/2, -n/2);
    glTexCoord2d(50, 50); glVertex2f(-n/2, -n/2);
    glTexCoord2d(0, 50);  glVertex2f(-n/2,  n/2);
    glEnd();
    // Do not forget this line, as then the rest of the colors in your
    // program will get messed up!!!
    glDisable(GL_TEXTURE_2D);
}
And all I get is this:
Can I know why?

Basically, no, you can't just hand arbitrary image file formats to GL - it only wants raw pixel data, not encoded files.
Your code, as posted, clearly declares an array for 24-bit RGB data, but then you open a GIF file and attempt to read that much data from it. GIF is a compressed, palettised format, complete with header information etc., so that's never going to work.
You need to use an image loader to decompress the file into raw pixels.
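For example, a minimal sketch using the stb_image single-header library (this library is my suggestion, not something your code already uses; any decoder that hands you raw RGB bytes will do):
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

// stbi_load decodes the GIF header and palette for you and returns
// width * height * 3 bytes of tightly packed RGB data.
int width, height, channels;
unsigned char* pixels = stbi_load("/Users/Raaj/Desktop/earth.gif",
                                  &width, &height, &channels, 3); // force 3 channels (RGB)
if (!pixels) {
    printf("no load\n");
    return;
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
stbi_image_free(pixels);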
Also, your texture coordinates don't look right. There are four vertices, but only 3 distinct coordinates are used, and 2 adjacent coordinates are diagonally opposite each other. On top of that, values of 0 and 50 will tile the texture dozens of times across the quad instead of mapping it once. Even if your texture were loaded correctly, that's unlikely to be what you want.
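For a quad that shows the texture exactly once, a sketch of the usual mapping (coordinates in the 0..1 range, matching the vertex order in the question's Draw()):
glBegin(GL_QUADS);
glTexCoord2d(1, 1); glVertex2f( n/2,  n/2);
glTexCoord2d(1, 0); glVertex2f( n/2, -n/2);
glTexCoord2d(0, 0); glVertex2f(-n/2, -n/2);
glTexCoord2d(0, 1); glVertex2f(-n/2,  n/2);
glEnd();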

Related

Saving a single color OpenGL texture to a file results in a 64x64 black box

This code fragment is meant to create a GL texture with a single color, then save the raw pixel data to disk. I then convert that to PNG using ffmpeg. I have tried multiple ways of generating the texture, and multiple ways of saving the texture data, but the result is always the same - a 1920x1080 image with a 64x64 black box in the corner. What I expected was a 1920x1080 image of a single color.
What am I doing wrong?
Conversion command:
ffmpeg -pix_fmt rgba -s 1920x1080 -i texture.raw -f image2 output.png
Code:
gpu::gles2::GLES2Interface* gl = GetContextProvider()->ContextGL();
GLuint texture;
gl->GenTextures(1, &texture);
gl->BindTexture(GL_TEXTURE_2D, texture);
int width = 1920;
int height = 1080;
std::vector<unsigned char> data(width * height * 4, 0);
for (size_t i = 2; i < data.size(); i += 4) {
    data[i] = 255; // blue channel
}
gl->TexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data.data());
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
std::vector<unsigned char> buffer(width * height * 4);
gl->ReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.data());
std::ofstream file("texture.raw", std::ios::binary);
file.write(reinterpret_cast<char*>(buffer.data()), buffer.size());
file.close();
Take a look at the description for glReadPixels from the main documentation here: https://registry.khronos.org/OpenGL-Refpages/gl4/html/glReadPixels.xhtml.
Essentially, glReadPixels reads pixels from the current framebuffer, not from the currently bound GL_TEXTURE_2D. I can't answer with 100% confidence without more code and context, but it looks like the code you have runs before anything is actually rendered to the framebuffer; you're only setting things up. You most likely get a black box because the data that ends up in the buffer isn't valid.
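If the goal is to read back the texture itself rather than a rendered frame, one common approach in GL ES is to attach the texture to a framebuffer object and read from that. A rough sketch, assuming the GLES2Interface used above exposes the standard GL ES 2.0 framebuffer entry points:
// Attach the texture to a framebuffer so ReadPixels has something to read.
GLuint fbo;
gl->GenFramebuffers(1, &fbo);
gl->BindFramebuffer(GL_FRAMEBUFFER, fbo);
gl->FramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                         GL_TEXTURE_2D, texture, 0);
// ReadPixels now pulls from the texture via the bound framebuffer,
// not from whatever the default framebuffer happens to contain.
gl->ReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.data());
gl->BindFramebuffer(GL_FRAMEBUFFER, 0);
gl->DeleteFramebuffers(1, &fbo);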
Hope that helps.

CUDA/OpenGL Interop: Writing to surface object does not erase previous contents

I am attempting to use a CUDA kernel to modify an OpenGL texture, but am having a strange issue where my calls to surf2Dwrite() seem to blend with the previous contents of the texture, as you can see in the image below. The wooden texture in the back is what's in the texture before modifying it with my CUDA kernel. The expected output would include ONLY the color gradients, not the wood texture behind it. I don't understand why this blending is happening.
Possible Problems / Misunderstandings
I'm new to both CUDA and OpenGL. Here I'll try to explain the thought process that led me to this code:
I'm using a cudaArray to access the texture (rather than e.g. an array of floats) because I read that it's better for cache locality when reading/writing a texture.
I'm using surfaces because I read somewhere that it's the only way to modify a cudaArray
I wanted to use surface objects, which I understand to be the newer way of doing things. The old way is to use surface references.
Some possible problems with my code that I don't know how to check/test:
Am I being inconsistent with image formats? Maybe I didn't specify the correct number of bits/channel somewhere? Maybe I should use floats instead of unsigned chars?
Code Summary
You can find a full minimum working example in this GitHub Gist. It's quite long because of all the moving parts, but I'll try to summarize. I welcome suggestions on how to shorten the MWE. The overall structure is as follows:
create an OpenGL texture from a file stored locally
register the texture with CUDA using cudaGraphicsGLRegisterImage()
call cudaGraphicsSubResourceGetMappedArray() to get a cudaArray that represents the texture
create a cudaSurfaceObject_t that I can use to write to the cudaArray
pass the surface object to a kernel that writes to the texture with surf2Dwrite()
use the texture to draw a rectangle on-screen
OpenGL Texture Creation
I am new to OpenGL, so I'm using the "Textures" section of the LearnOpenGL tutorials as a starting point. Here's how I set up the texture (using the image library stb_image.h)
GLuint initTexturesGL(){
    // load texture from file
    int numChannels;
    unsigned char *data = stbi_load("img/container.jpg", &g_imageWidth, &g_imageHeight, &numChannels, 4);
    if(!data){
        std::cerr << "Error: Failed to load texture image!" << std::endl;
        exit(1);
    }

    // opengl texture
    GLuint textureId;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);

    // wrapping
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
    // filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // set texture image
    glTexImage2D(
        GL_TEXTURE_2D,    // target
        0,                // mipmap level
        GL_RGBA8,         // internal format (#channels, #bits/channel, ...)
        g_imageWidth,     // width
        g_imageHeight,    // height
        0,                // border (must be zero)
        GL_RGBA,          // format of input image
        GL_UNSIGNED_BYTE, // type
        data              // data
    );
    glGenerateMipmap(GL_TEXTURE_2D);

    // unbind and free image
    glBindTexture(GL_TEXTURE_2D, 0);
    stbi_image_free(data);
    return textureId;
}
CUDA Graphics Interop
After calling the function above, I register the texture with CUDA:
void initTexturesCuda(GLuint textureId){
    // register texture
    HANDLE(cudaGraphicsGLRegisterImage(
        &g_textureResource,                       // resource
        textureId,                                // image
        GL_TEXTURE_2D,                            // target
        cudaGraphicsRegisterFlagsSurfaceLoadStore // flags
    ));

    // resource description for surface
    memset(&g_resourceDesc, 0, sizeof(g_resourceDesc));
    g_resourceDesc.resType = cudaResourceTypeArray;
}
Render Loop
Every frame, I run the following to modify the texture and render the image:
while(!glfwWindowShouldClose(window)){
    // --- CUDA ---
    // map
    HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
    HANDLE(cudaGraphicsSubResourceGetMappedArray(
        &g_textureArray,   // array through which to access subresource
        g_textureResource, // mapped resource to access
        0,                 // array index
        0                  // mipLevel
    ));

    // create surface object (compute >= 3.0)
    g_resourceDesc.res.array.array = g_textureArray;
    HANDLE(cudaCreateSurfaceObject(&g_surfaceObj, &g_resourceDesc));

    // run kernel
    kernel<<<gridDim, blockDim>>>(g_surfaceObj, g_imageWidth, g_imageHeight);

    // unmap
    HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));

    // --- OpenGL ---
    // clear
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // use program
    shader.use();

    // triangle
    glBindVertexArray(vao);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);

    // glfw: swap buffers and poll i/o events
    glfwSwapBuffers(window);
    glfwPollEvents();
}
CUDA Kernel
The actual CUDA kernel is as follows:
__global__ void kernel(cudaSurfaceObject_t surface, int nx, int ny){
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if(x < nx && y < ny){
        uchar4 data = make_uchar4(x % 255,
                                  y % 255,
                                  0, 255);
        surf2Dwrite(data, surface, x * sizeof(uchar4), y);
    }
}
If I understand correctly, you initially register the texture, map it once, create a surface object for the array representing the mapped texture, and then unmap the texture. Every frame, you then map the resource again, ask for the array representing the mapped texture, and then completely ignore that one and use the surface object created for the array you got back when you first mapped the resource. From the documentation:
[…] The value set in array may change every time that resource is mapped.
You have to create a new surface object every time you map the resource because you might get a different array every time. And, in my experience, you will actually get a different one every so often. Creating a new surface object only when the array actually changes may also be valid; the documentation seems to allow for that, but I never tried it, so I can't say for sure whether it works…
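To illustrate, a hedged sketch of the per-frame map/create/destroy pattern (reusing the question's globals and HANDLE macro; cudaDestroySurfaceObject is the matching cleanup call, and the surface object here is local to the frame):
HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
HANDLE(cudaGraphicsSubResourceGetMappedArray(&g_textureArray,
                                             g_textureResource, 0, 0));
// Recreate the surface object from the array returned by *this* map,
// since it may differ from the array returned last frame.
g_resourceDesc.res.array.array = g_textureArray;
cudaSurfaceObject_t surfaceObj;
HANDLE(cudaCreateSurfaceObject(&surfaceObj, &g_resourceDesc));
kernel<<<gridDim, blockDim>>>(surfaceObj, g_imageWidth, g_imageHeight);
// Destroy it before unmapping; it is only valid while the resource is mapped.
HANDLE(cudaDestroySurfaceObject(surfaceObj));
HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));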
Apart from that: You generate mipmaps for your texture. You only overwrite mip level 0. You then render the texture using mipmapping with trilinear interpolation. So my guess would be that you just happen to render the texture at a resolution that does not match the resolution of mip level 0 exactly and, thus, you will end up interpolating between level 0 (in which you wrote) and level 1 (which was generated from the original texture)…
It turns out the problem is that I had mistakenly generated mipmaps for the original wood texture, and my CUDA kernel was only modifying the level-0 mipmap. The blending I noticed was the result of OpenGL interpolating between my modified level-0 mipmap and a lower-resolution version of the wood texture.
Here's the correct output, obtained by disabling mipmap interpolation. Lesson learned!
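For reference, a minimal sketch of what that fix can look like at texture-creation time (sample only mip level 0 and skip mipmap generation; an alternative would be to call glGenerateMipmap again after each kernel launch):
// Sample only level 0, which is the level the CUDA kernel writes to.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, g_imageWidth, g_imageHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);
// note: no glGenerateMipmap(GL_TEXTURE_2D) here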

How to copy part of texture to an image in opengl

I'm working on a project in OpenGL.
I have a polygon that is filled with a BMP image file.
I can rotate the camera to look at the image from different places, and I want to copy part of the image and put it inside a new BMP file.
I have a lot of unnecessary code, so I will copy only the important parts.
_textureId = LoadBMP("file.bmp");
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glColor3f(1, 1, 0.7);
float BOX_SIZE = -12.0f;
glBegin(GL_QUADS);
glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -5);
glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -5);
glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, 5);
glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, 5);
glEnd();
The rotation is pretty basic, so does anyone have any suggestions?
Thanks a lot.
If you want to save the output of OpenGL to a file, you will have to read back the contents of the color buffer from the GL to client memory. Then, you can do whatever you want with it. The command
glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *data)
will read back the pixel data in a rectangle of width * height pixels beginning at x,y into the memory buffer located at data. Since you said you want to save it as a BMP file, you probably want GL_UNSIGNED_BYTE as the type, because BMP only supports up to 8 bits per channel. You probably also want GL_BGRA or GL_BGR as the format, as this is the native channel layout for BMP.
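For example, a rough sketch of such a readback into a tightly packed BGR buffer that a BMP writer could consume (the function name and parameters here are illustrative, not from the question):
#include <vector>

void readColorBufferBGR(int x, int y, int width, int height, std::vector<unsigned char>& out)
{
    out.resize(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // tightly pack rows in the output buffer
    glReadPixels(x, y, width, height, GL_BGR, GL_UNSIGNED_BYTE, out.data());
    // Rows come back bottom-up, which conveniently matches BMP's row order.
}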

Unable to render a texture on a quad

I am trying to program a simple game in C++ using OpenGL for graphics. In my game, I have objects that are rendered onscreen as a white square. I would like to be able to bind an image as a texture to these objects, so that I can render an image onscreen instead of the white square. There's no restriction on the format of the image, though I've been using .png or .bmp for testing.
One of my object classes, Character.h, stores a GLuint* member variable called _texture and an unsigned char* member variable called _data (as an image handle and pixel data, respectively). Within Character.h is this function meant to bind the image as a texture to the Character object:
void loadTexture(const char* fileName)
{
    /* read pixel data */
    FILE* file;
    file = fopen(fileName, "rb");
    if (file == NULL) {cout << "ERROR: Image file not found!" << endl;}
    _data = (unsigned char*)malloc(_dimension * _dimension * 3);
    fread(_data, _dimension * _dimension * 3, 1, file);
    fclose(file);

    /* bind texture */
    glGenTextures(1, &_texture); // _texture is a member variable of type GLuint
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _dimension, _dimension, 0, GL_RGB, GL_UNSIGNED_BYTE, _data); // _dimension = 64.0f.
}
Once the image is bound, I then attempt to render the Character object with this function (which is a static function found in a separate class):
static void drawTexturedRectangle(float x, float y, float w, float h, GLuint texture, unsigned char* data)
{
    /* fractions are calculated to determine the position onscreen that the object is drawn at */
    float xFraction = (x / GlobalConstants::SCREEN_WIDTH) * 2;
    float yFraction = (y / GlobalConstants::SCREEN_HEIGHT) * 2;
    float wFraction = (w / GlobalConstants::SCREEN_WIDTH) * 2;
    float hFraction = (h / GlobalConstants::SCREEN_HEIGHT) * 2;

    glEnable(GL_TEXTURE_2D);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    glPushMatrix();
    glEnable(GL_BLEND);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(xFraction, yFraction);                         // bottom left corner
    glTexCoord2f(0.0f, 1.0f); glVertex2f(xFraction, yFraction + hFraction);             // top left corner
    glTexCoord2f(1.0f, 1.0f); glVertex2f(xFraction + wFraction, yFraction + hFraction); // top right corner
    glTexCoord2f(1.0f, 0.0f); glVertex2f(xFraction + wFraction, yFraction);             // bottom right corner
    glEnd();

    glDisable(GL_BLEND);
    glPopMatrix();
}
No matter how much I mess around with this, I cannot seem to get my program to do anything other than render a plain white square with no texture on it. I have checked to ensure that variables file, _texture, _dimension, and _data all contain values. Can anybody spot what I might be doing wrong?
I don't think you can just feed a raw file to glTexImage2D, unless you store your texture files in exactly that format (which you probably don't). glTexImage2D expects a huge array of bytes (representing texel colors), but image file formats typically don't store images like that. Even BMP has some header information in it.
Use a 3rd-party image loader library like DevIL or SOIL to load your textures! These libraries can convert most common image file formats (png, jpg, tga, ...) into a byte stream that you can pass to OpenGL!
On a side note:
After you call glTexImage2D(..., data), the content of data is copied into GPU memory (VRAM). From that point on you only have to bind the texture using its id, as it is already in VRAM. Calling glTexImage2D() every frame re-uploads the whole texture into VRAM each time, causing a big performance penalty.
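So a typical split looks roughly like this (a sketch reusing the question's _texture member; width, height and decodedPixels stand in for whatever your image loader returns):
// At load time (once): upload the decoded pixels.
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, decodedPixels);

// Every frame: only bind and draw; no glTexImage2D here.
glBindTexture(GL_TEXTURE_2D, _texture);
glBegin(GL_QUADS);
/* glTexCoord2f / glVertex2f pairs as in drawTexturedRectangle */
glEnd();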

DevIL image not rendering correctly

I am using OpenGL. I can load TGA files properly, but for some reason when I render JPG files, I do not see them correctly.
This is what the image is supposed to look like--
And this is what it looks like… why is it stretched? Is it because of the coordinates?
Here is the code I am using for drawing.
void Renderer::DrawJpg(GLuint tex, int xi, int yq, int width, int height) const
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(0 + xi, 0 + xi);
    glTexCoord2i(0, 1); glVertex2i(0 + xi, height + xi);
    glTexCoord2i(1, 1); glVertex2i(width + xi, height + xi);
    glTexCoord2i(1, 0); glVertex2i(width + xi, 0 + xi);
    glEnd();
}
This is how I am loading the image...
imagename = s;
ILboolean success;
ilInit();
ilGenImages(1, &id);
ilBindImage(id);
success = ilLoadImage((const ILstring)imagename.c_str());
if (success)
{
    success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE); /* Convert every colour component into
    unsigned byte. If your image contains alpha channel you can replace IL_RGB with IL_RGBA */
    if (!success)
    {
        printf("image conversion failed.");
    }
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    width = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);
    glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
                 ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
                 ilGetData());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // Linear Filtered
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); // Linear Filtered
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
I should probably mention that some images did get rendered properly. I thought it was because width != height, but that is not the case, since some images with width != height also load fine.
For other images I still get this problem.
You probably need to call
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before uploading the texture data with glTexImage2D.
From the reference pages:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row
in memory. The allowable values are 1 (byte-alignment), 2 (rows
aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start
on double-word boundaries).
The default value for the alignment is 4 and your image loading library probably returns pixel data with byte-aligned rows, which explains why some of your images look OK (when the width is a multiple of four).
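Concretely, a 3-channel RGB image that is 123 pixels wide has rows of 123 * 3 = 369 bytes; with the default alignment of 4, OpenGL expects each row to start on a 4-byte boundary (372 bytes apart), so tightly packed data drifts by a few bytes per row and the image comes out skewed or stretched. Setting the alignment before the upload avoids that (a sketch using the question's DevIL calls):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows in ilGetData() are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP),
             ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT), 0,
             ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData());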
Always try to make your image's width and height powers of two, because some GPUs only support power-of-two (POT) texture resolutions (for example 128x128 or 512x512, but not 123x533 or 128x532).
And I think that here, instead of GL_REPEAT, you should use GL_CLAMP_TO_EDGE :)
GL_REPEAT is used when your texture coordinates go beyond 1.0f; GL_CLAMP_TO_EDGE handles that case too, but guarantees the image will fill the polygon without unwanted lines at the edges (it stops linear filtering from sampling across the border).
Also remember to try the version of the code that uses floats (the sample from the comments) :)
Here is a good explanation: http://open.gl/textures :)
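If you do switch the wrap mode, the change is just the two texture parameters (a sketch):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);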