Loading an OpenGL texture using Boost.GIL - C++

I wrote a simple app that loads a model using OpenGL, Assimp and Boost.GIL.
My model contains a PNG texture. When I load it with GIL and render it through OpenGL, I get a wrong result. Thanks to the power of CodeXL, I found that the texture loaded into OpenGL is completely different from the image itself.
Here is a similar question; I followed its steps but still got the same result.
Here is my code:
// --------- image loading
std::experimental::filesystem::path path(pathstr);
gil::rgb8_image_t img;
if (path.extension() == ".jpg" || path.extension() == ".jpeg" || path.extension() == ".png")
{
    if (path.extension() == ".png")
        gil::png_read_and_convert_image(path.string(), img);
    else
        gil::jpeg_read_and_convert_image(path.string(), img);

    _width = static_cast<int>(img.width());
    _height = static_cast<int>(img.height());

    typedef decltype(img)::value_type pixel;
    auto srcView = gil::view(img);
    //auto view = gil::interleaved_view(
    //    img.width(), img.height(), &*gil::view(img).pixels(), img.width() * sizeof(pixel));
    auto pixeldata = new pixel[_width * _height];
    auto dstView = gil::interleaved_view(
        img.width(), img.height(), pixeldata, img.width() * sizeof(pixel));
    gil::copy_pixels(srcView, dstView);
}
// ---------- texture loading
{
    glBindTexture(GL_TEXTURE_2D, handle());
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 image.width(), image.height(),
                 0, GL_RGB, GL_UNSIGNED_BYTE,
                 reinterpret_cast<const void*>(image.data()));
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}
And my texture is:
When it runs, the CodeXL debugger reports that the texture became:
All other textures of this model went wrong too.

Technically this is a FAQ that has been asked several times already. Essentially you're running into an alignment issue. By default (you can change it) OpenGL expects image rows to be aligned on 4-byte boundaries. If your image data doesn't match this, you get this skewed result. Adding a call to glPixelStorei(GL_UNPACK_ALIGNMENT, 1); right before the call to glTexImage… will do the trick for you. Of course you should retrieve the actual alignment from the image metadata.
The image being "upside down" is caused by OpenGL putting the origin of textures into the lower left (if all transformation matrices are left at default or have positive determinant). That is unlike most image file formats (but not all) which have it in the upper left. Just flip the vertical texture coordinate and you're golden.
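To make that concrete, here is a minimal sketch of the corrected upload, reusing the img/pixeldata/dstView variables from the question (the GIL flipped_up_down_view call is just one way to handle the origin; flipping the V texture coordinate in your geometry works just as well):
// Rows in the GIL buffer are tightly packed, so tell OpenGL not to expect
// the default 4-byte row alignment.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

// Optional: flip the image vertically on the CPU so the data matches
// OpenGL's lower-left texture origin.
gil::copy_pixels(gil::flipped_up_down_view(gil::const_view(img)), dstView);

glBindTexture(GL_TEXTURE_2D, handle());
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
             img.width(), img.height(),
             0, GL_RGB, GL_UNSIGNED_BYTE,
             reinterpret_cast<const void*>(pixeldata));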

Related

CUDA/OpenGL Interop: Writing to surface object does not erase previous contents

I am attempting to use a CUDA kernel to modify an OpenGL texture, but am having a strange issue where my calls to surf2Dwrite() seem to blend with the previous contents of the texture, as you can see in the image below. The wooden texture in the back is what's in the texture before modifying it with my CUDA kernel. The expected output would include ONLY the color gradients, not the wood texture behind it. I don't understand why this blending is happening.
Possible Problems / Misunderstandings
I'm new to both CUDA and OpenGL. Here I'll try to explain the thought process that led me to this code:
I'm using a cudaArray to access the texture (rather than e.g. an array of floats) because I read that it's better for cache locality when reading/writing a texture.
I'm using surfaces because I read somewhere that it's the only way to modify a cudaArray.
I wanted to use surface objects, which I understand to be the newer way of doing things. The old way is to use surface references.
Some possible problems with my code that I don't know how to check/test:
Am I being inconsistent with image formats? Maybe I didn't specify the correct number of bits/channel somewhere? Maybe I should use floats instead of unsigned chars?
Code Summary
You can find a full minimum working example in this GitHub Gist. It's quite long because of all the moving parts, but I'll try to summarize. I welcome suggestions on how to shorten the MWE. The overall structure is as follows:
create an OpenGL texture from a file stored locally
register the texture with CUDA using cudaGraphicsGLRegisterImage()
call cudaGraphicsSubResourceGetMappedArray() to get a cudaArray that represents the texture
create a cudaSurfaceObject_t that I can use to write to the cudaArray
pass the surface object to a kernel that writes to the texture with surf2Dwrite()
use the texture to draw a rectangle on-screen
OpenGL Texture Creation
I am new to OpenGL, so I'm using the "Textures" section of the LearnOpenGL tutorials as a starting point. Here's how I set up the texture (using the image library stb_image.h)
GLuint initTexturesGL(){
    // load texture from file
    int numChannels;
    unsigned char *data = stbi_load("img/container.jpg", &g_imageWidth, &g_imageHeight, &numChannels, 4);
    if(!data){
        std::cerr << "Error: Failed to load texture image!" << std::endl;
        exit(1);
    }

    // opengl texture
    GLuint textureId;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);

    // wrapping
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
    // filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // set texture image
    glTexImage2D(
        GL_TEXTURE_2D,    // target
        0,                // mipmap level
        GL_RGBA8,         // internal format (#channels, #bits/channel, ...)
        g_imageWidth,     // width
        g_imageHeight,    // height
        0,                // border (must be zero)
        GL_RGBA,          // format of input image
        GL_UNSIGNED_BYTE, // type
        data              // data
    );
    glGenerateMipmap(GL_TEXTURE_2D);

    // unbind and free image
    glBindTexture(GL_TEXTURE_2D, 0);
    stbi_image_free(data);
    return textureId;
}
CUDA Graphics Interop
After calling the function above, I register the texture with CUDA:
void initTexturesCuda(GLuint textureId){
    // register texture
    HANDLE(cudaGraphicsGLRegisterImage(
        &g_textureResource,                       // resource
        textureId,                                // image
        GL_TEXTURE_2D,                            // target
        cudaGraphicsRegisterFlagsSurfaceLoadStore // flags
    ));

    // resource description for surface
    memset(&g_resourceDesc, 0, sizeof(g_resourceDesc));
    g_resourceDesc.resType = cudaResourceTypeArray;
}
Render Loop
Every frame, I run the following to modify the texture and render the image:
while(!glfwWindowShouldClose(window)){
    // -- CUDA --

    // map
    HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
    HANDLE(cudaGraphicsSubResourceGetMappedArray(
        &g_textureArray,   // array through which to access subresource
        g_textureResource, // mapped resource to access
        0,                 // array index
        0                  // mipLevel
    ));

    // create surface object (compute >= 3.0)
    g_resourceDesc.res.array.array = g_textureArray;
    HANDLE(cudaCreateSurfaceObject(&g_surfaceObj, &g_resourceDesc));

    // run kernel
    kernel<<<gridDim, blockDim>>>(g_surfaceObj, g_imageWidth, g_imageHeight);

    // unmap
    HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));

    // --- OpenGL ---

    // clear
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // use program
    shader.use();

    // triangle
    glBindVertexArray(vao);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);

    // glfw: swap buffers and poll i/o events
    glfwSwapBuffers(window);
    glfwPollEvents();
}
CUDA Kernel
The actual CUDA kernel is as follows:
__global__ void kernel(cudaSurfaceObject_t surface, int nx, int ny){
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if(x < nx && y < ny){
        uchar4 data = make_uchar4(x % 255,
                                  y % 255,
                                  0, 255);
        surf2Dwrite(data, surface, x * sizeof(uchar4), y);
    }
}
If I understand correctly, you initially register the texture, map it once, create a surface object for the array representing the mapped texture, and then unmap the texture. Every frame, you then map the resource again, ask for the array representing the mapped texture, and then completely ignore that one and use the surface object created for the array you got back when you first mapped the resource. From the documentation:
[…] The value set in array may change every time that resource is mapped.
You have to create a new surface object every time you map the resource because you might get a different array every time. And, in my experience, you will actually get a different one every so often. It may be a valid thing to do to only create a new surface object whenever the array actually changes. The documentation seems to allow for that, but I never tried, so I can't tell whether that works for sure…
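In other words, the per-frame pattern would look roughly like this (a sketch using the globals from the question; the surface object is built from the freshly mapped array and destroyed before unmapping):
// Map the GL texture for CUDA access; the backing array may differ per map.
HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
HANDLE(cudaGraphicsSubResourceGetMappedArray(&g_textureArray, g_textureResource, 0, 0));

// Build the surface object from *this* mapping's array.
cudaResourceDesc desc;
memset(&desc, 0, sizeof(desc));
desc.resType = cudaResourceTypeArray;
desc.res.array.array = g_textureArray;

cudaSurfaceObject_t surfObj;
HANDLE(cudaCreateSurfaceObject(&surfObj, &desc));

kernel<<<gridDim, blockDim>>>(surfObj, g_imageWidth, g_imageHeight);

// Tear down before unmapping so nothing references a possibly stale array.
HANDLE(cudaDestroySurfaceObject(surfObj));
HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));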
Apart from that: You generate mipmaps for your texture. You only overwrite mip level 0. You then render the texture using mipmapping with trilinear interpolation. So my guess would be that you just happen to render the texture at a resolution that does not match the resolution of mip level 0 exactly and, thus, you will end up interpolating between level 0 (in which you wrote) and level 1 (which was generated from the original texture)…
It turns out the problem is that I had mistakenly generated mipmaps for the original wood texture, and my CUDA kernel was only modifying the level-0 mipmap. The blending I noticed was the result of OpenGL interpolating between my modified level-0 mipmap and a lower-resolution version of the wood texture.
Here's the correct output, obtained by disabling mipmap interpolation. Lesson learned!
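For completeness, the change boils down to something like this in the texture setup (a sketch; the alternative would be to call glGenerateMipmap again after each kernel launch):
// Sample only the base level, which is the level the CUDA kernel writes to.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// ...and skip the glGenerateMipmap(GL_TEXTURE_2D) call after glTexImage2D.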

How can I convert an unsigned char* to an image file (like JPG) in C++?

I have an OpenGL application that creates a texture as an unsigned char* buffer, and I need to save this texture to an image file, but I don't know how to do it. Can someone help me?
This is how I create the texture:
static unsigned char* pDepthTexBuf;
And this is the code that uses the texture:
glBindTexture(GL_TEXTURE_2D, depthTexID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texWidth, texHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, pDepthTexBuf);
But how can I save this texture buffer "pDepthTexBuf" to an image file?
This is a very complicated question... I suggest referring to other public examples, like this one: http://www.andrewewhite.net/wordpress/2008/09/02/very-simple-jpeg-writer-in-c-c/
Basically, you need to integrate an image library, and then use whatever hooks it supports to save your data.
The simplest approach is probably to use a library like OpenCV, which has some very easy to use mechanisms for turning byte arrays of RGB data into image files.
You can see an example of reading the OpenGL image buffer and storing it as a PNG file here. Saving a JPG may be as simple as changing the extension of the output file.
// Create an OpenCV matrix of the appropriate size and depth
cv::Mat img(windowSize.y, windowSize.x, CV_8UC3);
glPixelStorei(GL_PACK_ALIGNMENT, (img.step & 3) ? 1 : 4);
glPixelStorei(GL_PACK_ROW_LENGTH, img.step / img.elemSize());
// Fetch the pixels as BGR byte values
glReadPixels(0, 0, img.cols, img.rows, GL_BGR, GL_UNSIGNED_BYTE, img.data);
// Image files use Y = down, so we need to flip the image on the X axis
cv::flip(img, img, 0);
static int counter = 0;
static char buffer[128];
sprintf(buffer, "screenshot%05i.png", counter++);
// write the image file
bool success = cv::imwrite(buffer, img);
if (!success) {
    throw std::runtime_error("Failed to write image");
}
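If you already have the pixels on the CPU in pDepthTexBuf (rather than reading them back with glReadPixels), a minimal sketch of the same idea, assuming the buffer really is texWidth x texHeight of tightly packed RGB as in the glTexImage2D call above:
// Wrap the existing RGB byte buffer in a cv::Mat header (no copy).
cv::Mat img(texHeight, texWidth, CV_8UC3, pDepthTexBuf);

// OpenCV writes BGR, so swap the channels first.
cv::Mat bgr;
cv::cvtColor(img, bgr, cv::COLOR_RGB2BGR);

// The file extension selects the encoder, so .jpg gives a JPEG.
cv::imwrite("depth_texture.jpg", bgr);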

TTF_RenderUTF8_Blended Rendering Colored Text

I'm trying to render text using SDL_ttf and openGL.
I open the font, render the text to an SDL_Surface, then attach that surface to a texture and bind it for OpenGL to render.
I have googled this issue a bunch and am not getting many hits, which leads me to believe I'm misunderstanding something.
Only two functions really matter, since I've pretty much made a temp variable to troubleshoot this issue. They are:
SDL_Surface* CFont::BlendedUTF8Surface() {
    SDL_Surface* Surf_Text;
    SDL_Color Blah;
    Blah.r = 0;
    Blah.b = 255;
    Blah.g = 0;
    if(!(Surf_Text = TTF_RenderUTF8_Blended(pFont, TxtMsg, Blah))) {
        char str[256];
        sprintf_s(str, "? %s \n", TTF_GetError());
        OutputDebugString(str);
    }
    return Surf_Text;
}
This uses SDL_ttf to render the text to the Surf_Text surface. You can see I've maxed the blue channel; I'll talk about this in a minute. Here is the rendering code:
void CLabel::OnRender(int xOff, int yOff) {
    if(Visible) {
        glColor4f(1.0, 1.0, 1.0, 1.0);
        Font.Color(FontColors.r, FontColors.g, FontColors.b); // useless: I overwrote the variable with Blah to test this
        SDL_Surface* Surf_Text;
        Surf_Text = Font.BlendedUTF8Surface();
        Text_Font.OnLoad(Surf_Text);
        Text_Font.RenderQuad(_X + xOff, _Y + yOff);
        SDL_FreeSurface(Surf_Text);
        glColor4f(0.0, 0.0, 0.0, 1.0);
    }
}
Alright, so far from what I can tell, the problem is probably coming from the current color state and the texture environment mode.
When I render the text in this fashion, the text will change colors, but it's like the R and B channels have switched. If I make red 255, the text is blue, and if I make blue 255, the text is red. Green stays green (RGB vs BGR?).
If I remove the glColor4f call in the rendering function, the text refuses to render colored at all. It's always black (I habitually set the color back to (0,0,0) every time I render something), which is plausible since the mode is modulate (R = 0 * texture (font) R, etc.), so it will be black. Makes sense.
If I set the Texture environment to DECAL then the text renders black and a box behind the text renders the color I am trying to render the text.
I think I just don't know the correct way to do this. Anyone have any experience with SDL_ttf and openGL texture environments that could give some ideas?
Edit:
I've done some rewriting of the functions and testing of the surface and have finally figured out a few things. If I use GL_DECAL, the text renders the correct color, and the pixel value is 0 everywhere on the surface where it's not the red color I tried rendering (which renders with a value of 255, which is strange since red is the first channel; I would have expected at least 255^3, or FF0000 in hex). With DECAL, the alpha space (the white space around the text that has 0 for a pixel value) shows up as the color of the current glColor() call. If I use Blended, the alpha zone disappears, but my text renders as blended as well (of course), so it blends with the underlying background texture.
I guess the more appropriate question is: how do I blend only the white space and not the text? My guess is that I could call a new glBlendFunc(), but I have tested parameters and I'm like a child in the woods. No clue how to get the desired result.
The solution isn't completely verified, but the format of the surface is indeed BGRA; however, I cannot implement this correction. I'm going to attempt to create a color-swap function for this, I guess.
This fix did not work. Instead of setting BGR, I thought I would just create a new RGB surface:
if (Surface->format->Rmask == 0x00ff0000) {
    Surface = SDL_CreateRGBSurfaceFrom(Surface->pixels, Surface->w, Surface->h, 32, Surface->pitch,
                                       Surface->format->Rmask, Surface->format->Gmask,
                                       Surface->format->Bmask, Surface->format->Amask);
}
After that failed to work, I tried swapping Surface->format->Bmask and Surface->format->Rmask, but that had no effect either.
In order to handle BGR and RGB differences, you can try this code to create a texture from an SDL_Surface:
GLuint createTextureFromSurface(SDL_Surface *surface)
{
    GLuint texture;

    // get the number of channels in the SDL surface
    GLint nbOfColors = surface->format->BytesPerPixel;
    GLenum textureFormat = 0;
    switch (nbOfColors) {
    case 1:
        textureFormat = GL_ALPHA;
        break;
    case 3: // no alpha channel
        if (surface->format->Rmask == 0x000000ff)
            textureFormat = GL_RGB;
        else
            textureFormat = GL_BGR;
        break;
    case 4: // contains an alpha channel
        if (surface->format->Rmask == 0x000000ff)
            textureFormat = GL_RGBA;
        else
            textureFormat = GL_BGRA;
        break;
    default:
        qDebug() << "Warning: the image is not truecolor...";
        break;
    }

    glEnable( GL_TEXTURE_2D );

    // Have OpenGL generate a texture object handle for us
    glGenTextures( 1, &texture );

    // Bind the texture object
    glBindTexture( GL_TEXTURE_2D, texture );

    // Edit the texture object's image data using the information SDL_Surface gives us
    glTexImage2D( GL_TEXTURE_2D, 0, nbOfColors, surface->w, surface->h, 0,
                  textureFormat, GL_UNSIGNED_BYTE, surface->pixels );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    return texture;
}
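A hypothetical usage with the TTF surface from the question (the variable names here are only illustrative):
SDL_Color color = { 255, 0, 0, 255 }; // red text
SDL_Surface* surf = TTF_RenderUTF8_Blended(pFont, "Hello", color);
GLuint tex = createTextureFromSurface(surf); // picks GL_BGRA vs GL_RGBA from the surface masks
SDL_FreeSurface(surf);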

OpenGL Texture messed up

Here is my code to load a texture. I have tried to load a file using this example; it is a GIF file. Can I ask whether GIF files can be loaded, or is it only raw files that can be loaded?
void setUpTextures()
{
    printf("Set up Textures\n");

    // This is the array that will contain the image color information.
    // 3 represents red, green and blue color info.
    // 512 is the height and width of the texture.
    unsigned char earth[512 * 512 * 3];

    // This opens your image file.
    FILE* f = fopen("/Users/Raaj/Desktop/earth.gif", "r");
    if (f){
        printf("file loaded\n");
    }else{
        printf("no load\n");
        fclose(f);
        return;
    }
    fread(earth, 512 * 512 * 3, 1, f);
    fclose(f);

    glEnable(GL_TEXTURE_2D);
    // Here 1 is the texture id
    // The texture id is different for each texture (duh?)
    glBindTexture(GL_TEXTURE_2D, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    // In this line you only supply the last argument which is your color info array,
    // and the dimensions of the texture (512)
    glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, earth);
    glDisable(GL_TEXTURE_2D);
}
void Draw()
{
    glEnable(GL_TEXTURE_2D);
    // Here you specify WHICH texture you will bind to your coordinates.
    glBindTexture(GL_TEXTURE_2D, 1);
    glColor3f(1, 1, 1);
    double n = 6;
    glBegin(GL_QUADS);
    glTexCoord2d(0, 50);  glVertex2f(n/2, n/2);
    glTexCoord2d(50, 0);  glVertex2f(n/2, -n/2);
    glTexCoord2d(50, 50); glVertex2f(-n/2, -n/2);
    glTexCoord2d(0, 50);  glVertex2f(-n/2, n/2);
    glEnd();
    // Do not forget this line, as then the rest of the colors in your
    // program will get messed up!!!
    glDisable(GL_TEXTURE_2D);
}
And all I get is this:
Can I know why?
Basically, no, you can't just give arbitrary texture formats to GL - it only wants pixel data, not encoded files.
Your code, as posted, clearly declares an array for 24-bit RGB data, but then you open and attempt to read that much data from a GIF file. GIF is a compressed and palettised format, complete with header information etc., so that's never going to work.
You need to use an image loader to decompress the file into raw pixels.
Also, your texture coordinates don't look right. There are four vertices, but only 3 distinct coordinates used, and 2 adjacent coordinates are diagonally opposite each other. Even if your texture was loaded correctly, that's unlikely to be what you want.
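To illustrate the loading part, here is a minimal sketch using stb_image (the library used in an earlier question above) to decode the file into raw RGB pixels before uploading; the forced 3-channel load is an assumption:
// Hypothetical replacement for the fopen/fread block inside setUpTextures();
// stb_image needs STB_IMAGE_IMPLEMENTATION defined in exactly one source file.
#include "stb_image.h"

int width, height, channels;
// Decode the GIF (or JPG/PNG) into a tightly packed 8-bit RGB buffer.
unsigned char* pixels = stbi_load("/Users/Raaj/Desktop/earth.gif", &width, &height, &channels, 3);
if (!pixels) {
    printf("failed to decode image: %s\n", stbi_failure_reason());
    return;
}

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 1);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glDisable(GL_TEXTURE_2D);

stbi_image_free(pixels);
As for the draw call, texture coordinates normally span 0..1 across the image, e.g. (0,0), (1,0), (1,1), (0,1) for the four corners of a quad.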

Apply hue/saturation filters to image with OpenGL

I have an image in OpenGL that I am attempting to apply a simple HSB filter to. The user selects a hue value, I shade the image appropriately, display it, and everyone is happy. The problem I am running into is that the code I have inherited that worked on a previous system (Solaris, presuming OpenGL 2.1) does not work on our current system (RHEL 5, OpenGL 3.0).
Right now, the image appears in grey-scale, no matter what saturation is set to. However, brightness does seem to be acting appropriately. The relevant code has been reproduced below:
// imageData - unsigned char[3*width*height]
// (red|green|blue)Channel - unsigned char[width*height]
// brightnessBias - float in range [-1/3,1/3]
// hsMatrix - float[4][4] Described by algorithm from
// http://www.graficaobscura.com/matrix/index.html
// (see Hue Rotation While Preserving Luminance)
glDrawPixels(width, height, format, GL_UNSIGNED_BYTE, imageData);
// Split into RGB channels
glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, redChannel);
glReadPixels(0, 0, width, height, GL_GREEN, GL_UNSIGNED_BYTE, greenChannel);
glReadPixels(0, 0, width, height, GL_BLUE, GL_UNSIGNED_BYTE, blueChannel);
// Redraw and blend RGB channels with scaling and bias
glPixelZoom(1.0, 1.0);
glRasterPos2i(0, height);
glPixelTransferf(GL_RED_BIAS, brightnessBias);
glPixelTransferf(GL_GREEN_BIAS, brightnessBias);
glPixelTransferf(GL_BLUE_BIAS, brightnessBias);
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][0]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][0]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][0]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, redChannel);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][1]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][1]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][1]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, greenChannel);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][2]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][2]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][2]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, blueChannel);
// Reset pixel transfer parameters
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, 1.0f);
glPixelTransferf(GL_GREEN_SCALE, 1.0f);
glPixelTransferf(GL_BLUE_SCALE, 1.0f);
glPixelTransferf(GL_RED_BIAS, 0.0f);
glPixelTransferf(GL_GREEN_BIAS, 0.0f);
glPixelTransferf(GL_BLUE_BIAS, 0.0f);
The brightness control works as intended; however, when the glPixelTransferf(GL_*_SCALE) calls are left in, the image is displayed in greyscale. Compounding all of this is the fact that I have no prior experience with OpenGL, so I find a lot of links for what I presume are more modern techniques that I simply can't make sense of.
EDIT:
I believe the theory behind what was being done was a hack at doing the matrix multiplication through the draw calls, because GL_LUMINANCE treats the one value as the value for all three components, so if you follow the components through that drawing, you expect
// After glDrawPixels(..., redChannel)
new_red = red*hsMatrix[0][0]
new_green = red*hsMatrix[1][0]
new_blue = red*hsMatrix[2][0]
// After glDrawPixels(..., greenChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1]
// After glDrawPixels(..., blueChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1] + blue*hsMatrix[0][2]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1] + blue*hsMatrix[1][2]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1] + blue*hsMatrix[2][2]
Because it was turning out greyscale anyway and from a similar-ish example, I had thought that I might have needed to do the glPixelTransfer calls before calling glDrawPixels, but that was amazingly slow.
Wow, what the hell is that?!
For your question, I'd replace GL_LUMINANCE in your 3 glDrawPixels by GL_RED, GL_GREEN and GL_BLUE respectively.
However :
glPixelTransfer is bad
glDrawPixels is bad
Is there a single reason why you're not using a super-simple fragment shader to do the conversion? It's a simple matrix multiplication, and you're on OpenGL 3.0...
Create a texture from imageData, this needs to be done only once.
Make a shader that reads the color from the texture, multiply it by the color conversion matrix, and display it
Bind the computed color matrix
Draw a fullscreen quad. Even a 5-year-old card will get 500 fps out of this.
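A minimal sketch of what that fragment shader could look like, stored here as a C++ string literal; the uniform names tex and colorMatrix are only illustrative:
// Hypothetical GLSL 1.30 fragment shader: sample the image and apply the
// hue/saturation matrix (hsMatrix uploaded as a mat4 uniform, brightness
// bias folded into its fourth column).
const char* hsFragmentShader = R"(
    #version 130
    uniform sampler2D tex;    // the image, uploaded once as a texture
    uniform mat4 colorMatrix; // hue/saturation matrix + brightness bias
    in vec2 texCoord;
    out vec4 fragColor;
    void main() {
        vec4 color = texture(tex, texCoord);
        fragColor = vec4((colorMatrix * vec4(color.rgb, 1.0)).rgb, color.a);
    }
)";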