I'm trying to show some textures in my program, and I have this code that's used to load bitmaps into OpenGL textures:
void LoadGLTextures()
{
    // Bitmap handle and structure
    HBITMAP hBMP;
    BITMAP BMP;
    // Generate list of textures from resources
    byte Texture[] = {IDB_FONT, IDB_SKIN, IDB_PIANO};
    glGenTextures(sizeof(Texture), &texture[0]);
    // Iterate through texture list and load bitmaps
    for (int loop = 0; loop < sizeof(Texture); loop++)
    {
        hBMP = (HBITMAP)LoadImage(GetModuleHandle(NULL), MAKEINTRESOURCE(Texture[loop]),
                                  IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
        if (hBMP)
        {
            GetObject(hBMP, sizeof(BMP), &BMP);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
            glBindTexture(GL_TEXTURE_2D, texture[loop]);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            // Generate mipmapped texture (3 bytes/pixel, width, height and data from the BMP)
            gluBuild2DMipmaps(GL_TEXTURE_2D, 3, BMP.bmWidth, BMP.bmHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, BMP.bmBits);
            DeleteObject(hBMP);
        }
    }
}
And while my background skin loads and gets drawn correctly, the other (piano) texture doesn't get drawn. I'm sure the drawing code is correct, because when I swap which texture is used (from the piano to the background texture, in this case), the other texture gets drawn. So I think the bitmap isn't being loaded correctly, but I'm not sure why. Is there something glaringly obvious I have overlooked?
The bitmap is 128x256 and 24-bit colour.
If you need any of the other code please let me know.
Edit: If anyone knows of any libraries that would do what I require, please let me know.
It might not be working because gluBuild2DMipmaps is deprecated.
From http://www.opengl.org/wiki/Common_Mistakes:
gluBuild2DMipmaps - Never use this. Use either GL_GENERATE_MIPMAP (requires GL 1.4) or the glGenerateMipmap function (requires GL 3.0).
Edit: Also, you probably need to call glEnable(GL_TEXTURE_2D) for EACH texture unit, i.e. inside the loop where you call glBindTexture.
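For reference, a minimal sketch of the upload without gluBuild2DMipmaps, assuming a GL 3.0+ context (on Windows glGenerateMipmap has to be loaded through an extension loader such as GLEW); on GL 1.4 you could instead set the GL_GENERATE_MIPMAP texture parameter to GL_TRUE before uploading:
// Inside the loop, in place of the gluBuild2DMipmaps call
glBindTexture(GL_TEXTURE_2D, texture[loop]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, BMP.bmWidth, BMP.bmHeight, 0,
             GL_BGR_EXT, GL_UNSIGNED_BYTE, BMP.bmBits);
glGenerateMipmap(GL_TEXTURE_2D); // builds the full mip chain from level 0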
I am attempting to use a CUDA kernel to modify an OpenGL texture, but am having a strange issue where my calls to surf2Dwrite() seem to blend with the previous contents of the texture, as you can see in the image below. The wooden texture in the back is what's in the texture before modifying it with my CUDA kernel. The expected output would include ONLY the color gradients, not the wood texture behind it. I don't understand why this blending is happening.
Possible Problems / Misunderstandings
I'm new to both CUDA and OpenGL. Here I'll try to explain the thought process that led me to this code:
I'm using a cudaArray to access the texture (rather than e.g. an array of floats) because I read that it's better for cache locality when reading/writing a texture.
I'm using surfaces because I read somewhere that it's the only way to modify a cudaArray.
I wanted to use surface objects, which I understand to be the newer way of doing things. The old way is to use surface references.
Some possible problems with my code that I don't know how to check/test:
Am I being inconsistent with image formats? Maybe I didn't specify the correct number of bits/channel somewhere? Maybe I should use floats instead of unsigned chars?
Code Summary
You can find a full minimum working example in this GitHub Gist. It's quite long because of all the moving parts, but I'll try to summarize. I welcome suggestions on how to shorten the MWE. The overall structure is as follows:
create an OpenGL texture from a file stored locally
register the texture with CUDA using cudaGraphicsGLRegisterImage()
call cudaGraphicsSubResourceGetMappedArray() to get a cudaArray that represents the texture
create a cudaSurfaceObject_t that I can use to write to the cudaArray
pass the surface object to a kernel that writes to the texture with surf2Dwrite()
use the texture to draw a rectangle on-screen
OpenGL Texture Creation
I am new to OpenGL, so I'm using the "Textures" section of the LearnOpenGL tutorials as a starting point. Here's how I set up the texture (using the image library stb_image.h):
GLuint initTexturesGL(){
// load texture from file
int numChannels;
unsigned char *data = stbi_load("img/container.jpg", &g_imageWidth, &g_imageHeight, &numChannels, 4);
if(!data){
std::cerr << "Error: Failed to load texture image!" << std::endl;
exit(1);
}
// opengl texture
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
// wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
// filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// set texture image
glTexImage2D(
GL_TEXTURE_2D, // target
0, // mipmap level
GL_RGBA8, // internal format (#channels, #bits/channel, ...)
g_imageWidth, // width
g_imageHeight, // height
0, // border (must be zero)
GL_RGBA, // format of input image
GL_UNSIGNED_BYTE, // type
data // data
);
glGenerateMipmap(GL_TEXTURE_2D);
// unbind and free image
glBindTexture(GL_TEXTURE_2D, 0);
stbi_image_free(data);
return textureId;
}
CUDA Graphics Interop
After calling the function above, I register the texture with CUDA:
void initTexturesCuda(GLuint textureId){
// register texture
HANDLE(cudaGraphicsGLRegisterImage(
&g_textureResource, // resource
textureId, // image
GL_TEXTURE_2D, // target
cudaGraphicsRegisterFlagsSurfaceLoadStore // flags
));
// resource description for surface
memset(&g_resourceDesc, 0, sizeof(g_resourceDesc));
g_resourceDesc.resType = cudaResourceTypeArray;
}
Render Loop
Every frame, I run the following to modify the texture and render the image:
while(!glfwWindowShouldClose(window)){
// -- CUDA --
// map
HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
HANDLE(cudaGraphicsSubResourceGetMappedArray(
&g_textureArray, // array through which to access subresource
g_textureResource, // mapped resource to access
0, // array index
0 // mipLevel
));
// create surface object (compute >= 3.0)
g_resourceDesc.res.array.array = g_textureArray;
HANDLE(cudaCreateSurfaceObject(&g_surfaceObj, &g_resourceDesc));
// run kernel
kernel<<<gridDim, blockDim>>>(g_surfaceObj, g_imageWidth, g_imageHeight);
// unmap
HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));
// --- OpenGL ---
// clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// use program
shader.use();
// triangle
glBindVertexArray(vao);
glBindTexture(GL_TEXTURE_2D, textureId);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
// glfw: swap buffers and poll i/o events
glfwSwapBuffers(window);
glfwPollEvents();
}
CUDA Kernel
The actual CUDA kernel is as follows:
__global__ void kernel(cudaSurfaceObject_t surface, int nx, int ny){
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
if(x < nx && y < ny){
uchar4 data = make_uchar4(x % 255,
y % 255,
0, 255);
surf2Dwrite(data, surface, x * sizeof(uchar4), y);
}
}
If I understand correctly, you initially register the texture, map it once, create a surface object for the array representing the mapped texture, and then unmap the texture. Every frame, you then map the resource again, ask for the array representing the mapped texture, and then completely ignore that one and use the surface object created for the array you got back when you first mapped the resource. From the documentation:
[…] The value set in array may change every time that resource is mapped.
You have to create a new surface object every time you map the resource, because you might get a different array each time. And, in my experience, you do actually get a different one every so often. It may be valid to create a new surface object only when the array actually changes; the documentation seems to allow for that, but I never tried it, so I can't tell whether that works for sure…
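For illustration, a sketch of the per-frame pattern this implies, reusing the HANDLE macro and globals from the question (the surface object is created from the array returned by this map and destroyed again before unmapping):
HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
HANDLE(cudaGraphicsSubResourceGetMappedArray(&g_textureArray, g_textureResource, 0, 0));
g_resourceDesc.res.array.array = g_textureArray;   // use the array from *this* map
cudaSurfaceObject_t surfaceObj;
HANDLE(cudaCreateSurfaceObject(&surfaceObj, &g_resourceDesc));
kernel<<<gridDim, blockDim>>>(surfaceObj, g_imageWidth, g_imageHeight);
HANDLE(cudaDestroySurfaceObject(surfaceObj));      // surface is only valid while mapped
HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));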
Apart from that: You generate mipmaps for your texture. You only overwrite mip level 0. You then render the texture using mipmapping with trilinear interpolation. So my guess would be that you just happen to render the texture at a resolution that does not match the resolution of mip level 0 exactly and, thus, you will end up interpolating between level 0 (in which you wrote) and level 1 (which was generated from the original texture)…
It turns out the problem is that I had mistakenly generated mipmaps for the original wood texture, and my CUDA kernel was only modifying the level-0 mipmap. The blending I noticed was the result of OpenGL interpolating between my modified level-0 mipmap and a lower-resolution version of the wood texture.
Here's the correct output, obtained by disabling mipmap interpolation. Lesson learned!
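For completeness, a minimal sketch of that change in initTexturesGL (sampling only mip level 0; alternatively the mip chain could be clamped with GL_TEXTURE_MAX_LEVEL or regenerated after each kernel launch):
// filtering: no mipmap interpolation, sample level 0 only
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// ...and the glGenerateMipmap(GL_TEXTURE_2D) call can be dropped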
I wrote a simple app that loads a model using OpenGL, Assimp and Boost.GIL.
My model contains a PNG texture. When I load it using GIL and render it through OpenGL, I get a wrong result. Thanks to the power of CodeXL, I found that the texture loaded into OpenGL is completely different from the image itself.
Here is a similar question; I followed its steps but still got the same result.
Here are my codes:
// --------- image loading
std::experimental::filesystem::path path(pathstr);
gil::rgb8_image_t img;
if (path.extension() == ".jpg" || path.extension() == ".jpeg" || path.extension() == ".png")
{
if (path.extension() == ".png")
gil::png_read_and_convert_image(path.string(), img);
else
gil::jpeg_read_and_convert_image(path.string(), img);
_width = static_cast<int>(img.width());
_height = static_cast<int>(img.height());
typedef decltype(img)::value_type pixel;
auto srcView = gil::view(img);
//auto view = gil::interleaved_view(
// img.width(), img.height(), &*gil::view(img).pixels(), img.width() * sizeof pixel);
auto pixeldata = new pixel[_width * _height];
auto dstView = gil::interleaved_view(
img.width(), img.height(), pixeldata, img.width() * sizeof pixel);
gil::copy_pixels(srcView, dstView);
}
// ---------- texture loading
{
glBindTexture(GL_TEXTURE_2D, handle());
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
image.width(), image.height(),
0, GL_RGB, GL_UNSIGNED_BYTE,
reinterpret_cast<const void*>(image.data()));
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
}
And my texture is:
When it runs, the CodeXL debugger shows me that the texture became:
All the other textures of this model went wrong too.
Technically this is a FAQ, asked already several times. Essentially you're running into an alignment issue. By default (you can change it) OpenGL expects image rows to be aligned on 4 byte boundaries. If your image data doesn't match this, you get this skewed result. Adding a call to glPixelStorei(GL_UNPACK_ALIGNMENT, 1); right before the call to glTexImage… will do the trick for you. Of course you should retrieve the actual alignment from the image metadata.
The image being "upside down" is caused by OpenGL putting the origin of textures into the lower left (if all transformation matrices are left at default or have positive determinant). That is unlike most image file formats (but not all) which have it in the upper left. Just flip the vertical texture coordinate and you're golden.
I am trying to make DirectX - OpenGL interop work, with no success so far. In my case the rendering is done in OpenGL (by the OSG library), and I would like to have the rendered image as a DirectX Texture2D. What I am trying so far:
Initialization:
ID3D11Device *dev3D;
// init dev3D with D3D11CreateDevice
ID3D11Texture2D *dxTexture2D;
// init dxTexture2D with CreateTexture2D, with D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE
HANDLE hGlDev = wglDXOpenDeviceNV(dev3D);
GLuint glTex;
glGenTextures(1, &glTex);
HANDLE hGLTx = wglDXRegisterObjectNV(hGlDev, (void*) dxTexture2D, glTex, GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);
On every frame rendered by the OSG camera I get a callback. First I start with glReadBuffer(GL_FRONT), and up to that point everything seems to be OK, as I am able to read the rendered buffer into memory with glReadPixels. The problem is that I can't copy the pixels into the previously created GL_TEXTURE_2D:
BOOL lockOK = wglDXLockObjectsNV(hGlDev, 1, &hGLTx);
glBindTexture(GL_TEXTURE_2D, glTex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
auto err = glGetError();
The last call, glCopyTexImage2D, fails with error 0x502 (GL_INVALID_OPERATION), and I can't figure out why. Up to this point everything else looks fine.
Any help is appreciated.
Found the problem. Instead of calling glCopyTexImage2D (which defines a whole new texture image), I needed to use glCopyTexSubImage2D:
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
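For context, a sketch of the whole per-frame copy using the handles from the initialization snippet. glCopyTexSubImage2D only overwrites texels of the already-allocated storage, whereas glCopyTexImage2D tries to redefine the texture, which is presumably why the original call reported GL_INVALID_OPERATION:
wglDXLockObjectsNV(hGlDev, 1, &hGLTx);
glReadBuffer(GL_FRONT);                 // source: the buffer OSG just rendered
glBindTexture(GL_TEXTURE_2D, glTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
glBindTexture(GL_TEXTURE_2D, 0);
wglDXUnlockObjectsNV(hGlDev, 1, &hGLTx);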
I'm trying to write bitmap files for every frame that I render through OpenGL.
Please note that I'm not trying to read bitmaps; I want to WRITE new bitmap files.
Here is part of my C++ code
void COpenGLWnd::ShowinWnd(int ID)
{
if(m_isitStart == 1)
{
m_hDC = ::GetDC(m_hWnd);
SetDCPixelFormat(m_hDC);
m_hRC = wglCreateContext(m_hDC);
VERIFY(wglMakeCurrent(m_hDC, m_hRC));
m_isitStart = 0;
}
GLRender();
CDC* pDC = CDC::FromHandle(m_hDC);
//pDC->FillSolidRect(0, 0, 100, 100, RGB(100, 100, 100));
CRect rcClient;
GetClientRect(&rcClient);
SaveBitmapToDirectFile(pDC, rcClient, _T("a.bmp"));
SwapBuffers(m_hDC);
}
"GLRender" is the function which can render on the MFC window.
"SaveBitmapToDirectFile" is the function that writes a new bitmap image file from the parameter pDC, and I could check that it works well if I erase that double slash on the second line, because only gray box on left top is drawn at "a.bmp"
So where has m_hDC gone? I have no idea why rendered scene wasn't written on "a.bmp".
Here is GLRender codes, but I don't think that this function was the problem, because it can render image and print it out well on window.
void COpenGLWnd::GLFadeinRender()
{
glViewport(0,0, m_WndWidth, m_WndHeight);
glOrtho(0, m_WndWidth, 0, m_WndHeight, 0, 100);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(m_BlendingSrc, m_BlendingDest);
glPixelTransferf(GL_ALPHA_SCALE,(GLfloat)(1-m_BlendingAlpha));
glPixelZoom((GLfloat)m_WndWidth/(GLfloat)m_w1, -(GLfloat)m_WndHeight/(GLfloat)m_h1);
glRasterPos2f(0, m_WndHeight);
glDrawPixels((GLsizei)m_w1, (GLsizei)m_h1, GL_BGR_EXT, GL_UNSIGNED_BYTE, m_pImageA);
glPixelTransferf(GL_ALPHA_SCALE,(GLfloat)m_BlendingAlpha);
glPixelZoom((GLfloat)m_WndWidth/(GLfloat)m_w2, -(GLfloat)m_WndHeight/(GLfloat)m_h2);
glRasterPos2f(0, m_WndHeight);
glDrawPixels((GLsizei)m_w2, (GLsizei)m_h2, GL_BGR_EXT, GL_UNSIGNED_BYTE, m_pImageB);
glFlush();
}
I'm guessing you're using MFC or Windows API functions to capture the bitmap from the window. The problem is that you need to use glReadPixels to get the image from a GL context -- winapi isn't able to do that.
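A minimal sketch of that approach (the CaptureFrame name, the GL_BACK read buffer and the WriteBmp24 helper are assumptions, not code from the question; glReadPixels returns rows bottom-up, which matches the usual bottom-up BMP layout):
#include <vector>

void CaptureFrame(int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  // tightly packed BGR rows
    glReadBuffer(GL_BACK);                // read the just-rendered frame before SwapBuffers
    glReadPixels(0, 0, width, height,
                 GL_BGR_EXT, GL_UNSIGNED_BYTE, pixels.data());
    WriteBmp24("a.bmp", pixels.data(), width, height); // hypothetical BMP writer
}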
I am currently trying to load an icon which has a transparent background.
Then I create a bitmap from it and try to display the bits via glTexImage2D().
But the background of the icon never gets transparent :(
Here is some of my code:
DWORD dwBmpSize = 32*32*4;
byte* bmBits = new byte[dwBmpSize];
for(unsigned int i = 0; i <dwBmpSize; i+=4)
{
bmBits[i] = 255; // R
bmBits[i+1] = 0; // G
bmBits[i+2] = 0; // B
bmBits[i+3] = 255;// A
// I always get a red square, no matter what value I fill into alpha
}
//create texture from bitmap
glTexImage2D(target, 0,
GL_RGBA, 32, 32,
0, GL_RGBA, GL_UNSIGNED_BYTE, bmBits);
delete[] bmBits;
Edit: I changed the code to be sure that my bits have an alpha channel.
Now I am filling a 32x32-pixel area with custom values to see what happens, instead of loading an icon. It still does not work!
What am I missing? Or is it just not possible?
You have to enable blending and set the correct blend mode.
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Also if you fill the entire alpha channel with 255 it will still be opaque. Try 128 or something instead.
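For example, filling the test square with an alpha gradient instead of a constant 255 (everything else as in the question) makes the blending visible once it is enabled:
for (unsigned int i = 0; i < dwBmpSize; i += 4)
{
    bmBits[i]     = 255;                // R
    bmBits[i + 1] = 0;                  // G
    bmBits[i + 2] = 0;                  // B
    bmBits[i + 3] = (i / 4) % 32 * 8;   // A: 0..248 across each 32-pixel row
}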