Access Violation after creating GL_TEXTURE_2D_ARRAY - c++

I'm getting an access violation on every GL call after this texture initialization (the last GLCALL(glBindTexture(m_Target, bound)); also triggers an access violation, so the code at the top is probably what's causing it):
Texture2D::Texture2D(unsigned int format, unsigned int width, unsigned int height, unsigned int unit, unsigned int mimapLevels, unsigned int layers)
    : Texture(GL_TEXTURE_2D_ARRAY, unit)
{
    unsigned int internalFormat;
    if (format == GL_DEPTH_COMPONENT)
    {
        internalFormat = GL_DEPTH_COMPONENT32;
    }
    else
    {
        internalFormat = format;
    }
    m_Format = format;
    m_Width = width;
    m_Height = height;
    unsigned int bound = 0;
    glGetIntegerv(GL_TEXTURE_BINDING_2D_ARRAY, (int*)&bound);
    GLCALL(glGenTextures(1, &m_ID));
    GLCALL(glActiveTexture(GL_TEXTURE0 + m_Unit));
    GLCALL(glBindTexture(m_Target, m_ID));
    GLCALL(glTexParameteri(m_Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
    GLCALL(glTexParameteri(m_Target, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
    GLCALL(glTexStorage3D(m_Target, mimapLevels, internalFormat, width, height, layers));
    for (size_t i = 0; i < layers; i++)
    {
        glTexSubImage3D(m_Target, 0, 0, 0, i, m_Width, m_Height, 1, m_Format, s_FormatTypeMap[internalFormat], NULL);
    }
    GLCALL(glBindTexture(m_Target, bound));
}
The OpenGL function pointers are initialized with glad at the beginning of the program:
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
{
    std::cout << "Failed to initialize GLAD" << std::endl;
    return -1;
}
And this only happens with GL_TEXTURE_2D_ARRAY, even when this is the first line of my code (after initialization, of course). Example code:
auto t = Texture2D(GL_DEPTH_COMPONENT, 1024, 1024, 10, 1, 4);
Any idea what may be causing it?
Thanks in advance!

You're passing NULL as the last argument of glTexSubImage3D, but OpenGL does not allow that:
TexSubImage*D and TextureSubImage*D arguments width, height, depth, format, type, and data match the corresponding arguments to the corresponding TexImage*D command (where those arguments exist), meaning that they accept the same values, and have the same meanings. The exception is that a NULL data pointer does not represent unspecified image contents.
...and there's no text that allows a NULL pointer, therefore you cannot pass NULL.
It's unclear what you're trying to achieve with those glTexSubImage3D calls. Since you're using an immutable texture (glTexStorage3D) you don't need to do anything extra. If instead you want to clear your texture, you can use glClearTexSubImage, which does accept NULL for data to mean 'clear with zeros'.
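For illustration, here is a minimal sketch of that clearing variant in the context of the constructor above (assuming OpenGL 4.4 or ARB_clear_texture for glClearTexSubImage):
// Replaces the glTexSubImage3D loop: clear all layers of level 0 in one call.
// Passing NULL for data means "clear with zeros".
glClearTexSubImage(m_ID,                   // texture name (not the target enum)
                   0,                      // mip level
                   0, 0, 0,                // x/y/z offset (z = first layer)
                   width, height, layers,  // full level, all layers
                   GL_DEPTH_COMPONENT,     // format of the (absent) data
                   GL_FLOAT,               // type of the (absent) data
                   NULL);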

Related

OpenGL problem using glTextureView on Framebuffer

I was trying to use a texture view as a framebuffer attachment.
It works fine with a texture view that uses the base mipmap level, but when the texture view specifies a mipmap level above the base level, glCheckFramebufferStatus() returns error code 36057, which is GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS.
I checked that there is no size mismatch between the texture view and the original texture.
Here is the code that builds the prefiltered cubemap for PBR rendering:
unsigned int maxMipLevels = 5;
for (int mip = 0; mip < maxMipLevels; ++mip)
{
    unsigned int mipWidth = 128 * std::pow(0.5, mip);
    unsigned int mipHeight = 128 * std::pow(0.5, mip);
    RenderCommand::SetViewport(0, 0, mipWidth, mipHeight);
    m_DepthTexture->SetSize(mipWidth, mipHeight);
    CubeMapPass->DetachAll();
    CubeMapPass->AttachTexture(m_DepthTexture, AttachmentType::Depth_Stencil, TextureType::Texture2D, 0);
    float roughtness = (float)mip / (float)(maxMipLevels - 1);
    roughBuffer.roughtness = roughtness;
    roughnessConstantBuffer->SetData(&roughBuffer, sizeof(roughBuffer), 0);
    for (int i = 0; i < 6; ++i)
    {
        ....
        ...
        CameraBuffer.view = captureViews[i];
        cameraConstantBuffer->SetData(&CameraBuffer, sizeof(CameraData), 0);
        CubeMapPass->AttachTexture(m_PrefilterMap, AttachmentType::Color_0, TextureType::Texture2D, mip, i, 1);
        CubeMapPass->Clear();
        ....
        ...
    }
}
The depth-stencil texture's size is changed to match each cubemap face's mip level (the cubemap texture size starts at 128x128).
In the AttachTexture function:
// m_RendererID is framebufferid
Ref<OpenGLRenderTargetView> targetview=CreateRef<OpenGLRenderTargetView>(type, texture, texture->GetDesc().Format, mipLevel, layerStart, layerLevel);
targetview->Bind(m_RendererID, attachType);
In the texture view's Bind function:
GLenum attachmentIndex = Utils::GetAttachmentType(type);
GLenum target = Utils::GetTextureTarget(viewDesc.Type, multisampled);
glNamedFramebufferTexture(framebufferid, attachmentIndex, m_renderID, m_startMip);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
As I said before, there is no problem when the texture view accesses the base mip level.
But after the first iteration, the texture view accesses the next mipmap (the texture size is then 64x64, and the depth-stencil texture size is also 64x64).
The framebuffer has two attachments: one color (the texture view) and one depth-stencil texture.
Yet glCheckFramebufferStatus keeps reporting GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS after the first iteration.
I read up on texture views at https://www.khronos.org/opengl/wiki/Texture_Storage.
According to that article,
glTextureView(..., GLuint minlevel, GLuint numlevels, GLuint minlayer, GLuint numlayers)
creates a texture view of the original texture, and minlevel becomes the base mipmap level of the view.
As you can see, my AttachTexture function takes a mip level, and the next parameters are the array start and count.
When creating the texture view I call
glTextureView(m_renderID, target, srcTexID, internalformat, startMip, 1, startLayer, numlayer);
so it only takes one mip level, not two or three.
I don't know why it doesn't work.
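For reference, a minimal sketch of how a single-level view is often attached, since the view's minlevel is remapped to level 0 of the view (names follow the code above; whether this matches the engine's intent is an assumption):
GLuint viewID;
glGenTextures(1, &viewID);  // a fresh name with no storage yet, as glTextureView requires
glTextureView(viewID, GL_TEXTURE_2D, srcTexID, internalformat,
              startMip, 1,     // one mip level; it becomes level 0 of the view
              startLayer, 1);  // one layer
// Attach level 0 of the view, not the original startMip index.
glNamedFramebufferTexture(framebufferid, attachmentIndex, viewID, 0);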
It works when I'm not using a texture view, like the code below:
if (texture->GetDesc().Type == TextureType::TextureCube && type == TextureType::Texture2D)
{
    target = GL_TEXTURE_CUBE_MAP_POSITIVE_X + layerStart;
}
if (type == TextureType::TextureCube)
    glFramebufferTexture(GL_FRAMEBUFFER, attachmentIndex, texId, mipLevel);
else
    glFramebufferTexture2D(GL_FRAMEBUFFER, attachmentIndex, target, texId, mipLevel);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

How to split a big image into multiple textures

Let's assume I have a big image whose size is 2560*800 and whose format is RGBA.
I'd like to split this big image into 2 textures whose sizes are 1280*800 each.
The simple, but stupid, way is:
#define BPP_RGBA 4
GLuint* makeNTexturesFromBigRgbImage(uint8_t *srcImg,
                                     Size srcSize,
                                     uint32_t numTextures,
                                     uint32_t texWidth,
                                     uint32_t texHeight) {
    uint32_t i, h, srcStride;
    uint8_t *pSrcPos, *pDstPos;
    GLuint *texIds = (GLuint*)malloc(numTextures * sizeof(GLuint));
    srcStride = srcSize.w * BPP_RGBA;
    glGenTextures(numTextures, texIds);
    for (i = 0; i < numTextures; i++) {
        uint8_t *subImageBuf = (uint8_t*)malloc(texWidth * texHeight * BPP_RGBA);
        pSrcPos = srcImg + (texWidth * BPP_RGBA) * i;
        pDstPos = subImageBuf;
        for (h = 0; h < texHeight; h++) {
            memcpy(pDstPos, pSrcPos, texWidth * BPP_RGBA);
            pSrcPos += srcStride;
            pDstPos += (texWidth * BPP_RGBA);
        }
        glBindTexture(GL_TEXTURE_2D, texIds[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, subImageBuf);
        free(subImageBuf);
    }
    return texIds;
}
But, as I mentioned above, this approach is very stupid.
So, I'd like to know the best way that does NOT require a copy operation on the CPU like the above.
Is it possible with only OpenGL APIs?
For example, is the following possible?
Step 1. Make a texture, 2560*800, from the big image.
Step 2. Make 2 textures, 1280*800, from the texture in step 1.
Thanks.
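For illustration, one way to do exactly those two steps without any CPU copy is glCopyImageSubData (this sketch assumes OpenGL 4.3+ and that bigTex already holds the full 2560*800 RGBA image from step 1):
GLuint halves[2];
glGenTextures(2, halves);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, halves[i]);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 1280, 800);            // allocate immutable storage
    glCopyImageSubData(bigTex,    GL_TEXTURE_2D, 0, i * 1280, 0, 0,   // source texture, region origin
                       halves[i], GL_TEXTURE_2D, 0, 0, 0, 0,          // destination texture, origin
                       1280, 800, 1);                                 // region size
}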

Returning result of stbi_load function and using it for glTexImage2D causes memory violation

I have a problem with the stbi library and I thought maybe you have an idea why this isn't working. I have declared a function like this:
bool LoadTextureFile(std::string file, unsigned char ** pixel_data, int * width, int * height, int * n);
In this function I get the result of stbi_load directly saved in the *pixel_data variable:
*pixel_data = stbi_load(file.c_str(), width, height, n, 0);
// Do some more stuff till return
return true;
So now my pixel_data pointer points to the memory returned by stbi_load. I want to use this result with glTexImage2D in my previous function, which calls LoadTextureFile before calling OpenGL's glTexImage2D like this:
bool LoadTexture(...)
{
    int tex_width, tex_height, tex_n;
    unsigned char * pixel_data = NULL;
    LoadTextureFile(filename, &pixel_data, &tex_width, &tex_height, &tex_n);
    // Do something special ...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_width, tex_height, 0, GL_RGB, GL_UNSIGNED_BYTE, &pixel_data);
    stbi_image_free(&pixel_data);
    // ...
}
But if I do it like that, then I get a memory violation message at the point of calling the glTexImage2D.
If I move this whole magic into the LoadTextureFile, after loading a new texture file with stbi_load, then it works:
bool LoadTextureFile(std::string file, unsigned char ** pixel_data, int * width, int * height, int * n)
{
    unsigned char * = = stbi_load(file.c_str(), width, height, n, 0);
    // Do some magic ...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 80, 80, 0, GL_RGB, GL_UNSIGNED_BYTE, pixel_data);
    stbi_image_free(pixel_data);
    return true;
}
Can someone tell me why I get this message and how to solve this problem?
I guess it is a matter of keeping the allocated memory valid, but I'm not really sure how to solve it. I tried this in a simple console application before, and there it worked.
Thank you for your help!
This:
unsigned char * pixel_data = NULL;
[...]
glTexImage2D(..., &pixel_data);
is certainly not what you want. You are using the address of the pointer to your pixel data, not the value of the pointer, so you are basically telling the GL to use some random segment of your stack memory as the source for the texture. It should be just
glTexImage2D(..., pixel_data);
In your second variant, what actually happens is unclear since the line
unsigned char * = = stbi_load(file.c_str(), width, height, n, 0);
just doesn't make sense and will never compile. So I assume it is a copy-and-paste error made while writing the question. But it is hard to guess what your real code actually does.
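For completeness, a minimal sketch of the corrected caller, assuming LoadTextureFile fills the pointer as declared in the question (error handling mostly omitted):
bool LoadTexture(const std::string& filename)
{
    int tex_width, tex_height, tex_n;
    unsigned char* pixel_data = NULL;
    if (!LoadTextureFile(filename, &pixel_data, &tex_width, &tex_height, &tex_n))
        return false;
    // Pass the pointer value, not its address; the same applies when freeing.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_width, tex_height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel_data);
    stbi_image_free(pixel_data);
    return true;
}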

Accessing calloc'd data through a shared_ptr

I'm trying to access data that I previously allocated with calloc through a shared_ptr. For some reason I can't access it (it keeps crashing with EXC_BAD_ACCESS) at the glTexImage2D call (the last line of my code snippets).
My util method to load the data:
shared_ptr<ImageData> IOSFileSystem::loadImageFile(string path) const
{
    // Result
    shared_ptr<ImageData> result = shared_ptr<ImageData>();
    ...
    // Check if file exists
    if([[NSFileManager defaultManager] fileExistsAtPath:fullPath isDirectory:NO])
    {
        ...
        GLubyte *spriteData = (GLubyte*) calloc(width * height * 4, sizeof(GLubyte));
        ...
        // Put result in shared ptr
        shared_ptr<GLubyte> spriteDataPtr = shared_ptr<GLubyte>(spriteData);
        result = shared_ptr<ImageData>(new ImageData(path, width, height, spriteDataPtr));
    }
    else
    {
        cout << "IOSFileSystem::loadImageFile -> File does not exist at path.\nPath: " + path;
        exit(1);
    }
    return result;
}
Header for ImageData:
class ImageData
{
public:
    ImageData(string path, int width, int height, shared_ptr<GLubyte> data);
    ~ImageData();
    string getPath() const;
    int getWidth() const;
    int getHeight() const;
    shared_ptr<GLubyte> getData() const;
private:
    string path;
    int width;
    int height;
    shared_ptr<GLubyte> data;
};
File that calls the util class:
void TextureMaterial::load()
{
    shared_ptr<IFileSystem> fileSystem = ServiceLocator::getFileSystem();
    shared_ptr<ImageData> imageData = fileSystem->loadImageFile(path);
    this->bind(imageData);
}
void TextureMaterial::bind(shared_ptr<ImageData> data)
{
    // Pointer to pixel data
    shared_ptr<GLubyte> pixelData = data->getData();
    ...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, data->getWidth(), data->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, &pixelData);
}
Just for the record: if I throw out all shared_ptr's I'm able to access the data. Signature for glTexImage2D:
void glTexImage2D(GLenum target, GLint level, GLint internalFormat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, const GLvoid *data);
Additional question: normally you have to free(spriteData), but since I handed the data to a shared_ptr, will the data be freed when the shared_ptr is destroyed?
shared_ptr cannot magically guess how to release the memory. By default it tries to delete it, and since you didn't use new, that ends up in disaster.
You need to tell it how to do it:
shared_ptr<GLubyte>(spriteData, &std::free);
I think this is your problem:
..., &pixelData);
You are taking the address of a local variable (of type shared_ptr<GLubyte>), which is silently converted to void*, instead of getting the raw pointer out of it. Replace it with:
..., pixelData.get());
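Putting both fixes together, a minimal sketch using the names from the question (the surrounding code stays as posted):
// In IOSFileSystem::loadImageFile: pair the calloc'd block with free() as its deleter.
GLubyte *spriteData = (GLubyte*) calloc(width * height * 4, sizeof(GLubyte));
shared_ptr<GLubyte> spriteDataPtr(spriteData, &std::free);
// In TextureMaterial::bind: hand the raw pointer to OpenGL, not the address of the shared_ptr.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, data->getWidth(), data->getHeight(),
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData.get());
// The buffer is then released with free() once the last shared_ptr copy is destroyed.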

C++ OpenGL glTexImage2D Access Violation

I'm writing an application using OpenGL (freeglut and glew).
I also wanted textures so I did some research on the Bitmap file format and wrote a struct for the main header and another for the DIB header (info header).
Then I started writing the loader. It automatically binds the texture to OpenGL. Here is the function:
static unsigned int ReadInteger(FILE *fp)
{
    int a, b, c, d;
    // Integer is 4 bytes long.
    a = getc(fp);
    b = getc(fp);
    c = getc(fp);
    d = getc(fp);
    // Convert the 4 bytes to an integer.
    return ((unsigned int) a) + (((unsigned int) b) << 8) +
           (((unsigned int) c) << 16) + (((unsigned int) d) << 24);
}

static unsigned int ReadShort(FILE *fp)
{
    int a, b;
    // Short is 2 bytes long.
    a = getc(fp);
    b = getc(fp);
    // Convert the 2 bytes to a short (int16).
    return ((unsigned int) a) + (((unsigned int) b) << 8);
}

GLuint LoadBMP(const char* filename)
{
    FILE* file;
    // Check if a file name was provided.
    if (!filename)
        return 0;
    // Try to open the file.
    fopen_s(&file, filename, "rb");
    // Return if the file could not be opened.
    if (!file)
    {
        cout << "Warning: Could not find texture '" << filename << "'." << endl;
        return 0;
    }
    // Read signature.
    unsigned char signature[2];
    fread(&signature, 2, 1, file);
    // Use signature to identify a valid bitmap.
    if (signature[0] != BMPSignature[0] || signature[1] != BMPSignature[1])
    {
        fclose(file);
        return 0;
    }
    // Read width and height.
    unsigned long width, height;
    fseek(file, 16, SEEK_CUR); // After the signature we have 16 bytes until the width.
    width = ReadInteger(file);
    height = ReadInteger(file);
    // Calculate data size (we'll only support 24bpp).
    unsigned long dataSize;
    dataSize = width * height * 3;
    // Make sure planes is 1.
    if (ReadShort(file) != 1)
    {
        cout << "Error: Could not load texture '" << filename << "' (planes is not 1)." << endl;
        return 0;
    }
    // Make sure bpp is 24.
    if (ReadShort(file) != 24)
    {
        cout << "Error: Could not load texture '" << filename << "' (bits per pixel is not 24)." << endl;
        return 0;
    }
    // Move pointer to beginning of data. (After the bpp we have 24 bytes until the data.)
    fseek(file, 24, SEEK_CUR);
    // Allocate memory and read the image data.
    unsigned char* data = new unsigned char[dataSize];
    if (!data)
    {
        fclose(file);
        cout << "Warning: Could not allocate memory to store data of '" << filename << "'." << endl;
        return 0;
    }
    fread(data, dataSize, 1, file);
    if (data == NULL)
    {
        fclose(file);
        cout << "Warning: Could not load data from '" << filename << "'." << endl;
        return 0;
    }
    // Close the file.
    fclose(file);
    // Create the texture.
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //NEAREST);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
    return texture;
}
I know that the bitmap's data is correctly read because I outputted its data to the console and compared it with the image opened in Paint.
The problem here is this line:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width,
dibheader.height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
Most of the times I run the application this line crashes with the error:
Unhandled exception at 0x008ffee9 in GunsGL.exe: 0xC0000005: Access violation reading location 0x00af7002.
This is the Disassembly of where the error occurs:
movzx ebx,byte ptr [esi+2]
It's not an error with my loader, because I've downloaded other loaders.
A downloaded loader that I used was this one from NeHe.
EDIT: (CODE UPDATED ABOVE)
I rewrote the loader, but I still get the crash on the same line. Instead of that crash, I sometimes get a crash in mlock.c (with the same error message, if I recall correctly):
void __cdecl _lock (
        int locknum
        )
{
        /*
         * Create/open the lock, if necessary
         */
        if ( _locktable[locknum].lock == NULL ) {
                if ( !_mtinitlocknum(locknum) )
                        _amsg_exit( _RT_LOCK );
        }
        /*
         * Enter the critical section.
         */
        EnterCriticalSection( _locktable[locknum].lock );
}
On the line:
EnterCriticalSection( _locktable[locknum].lock );
Also, here is a screenshot from one of the times the application doesn't crash (the texture is obviously not right):
http://i.stack.imgur.com/4Mtso.jpg
Edit 2:
Updated the code above with the new, working version.
(The reply marked as the answer does not contain everything that was needed to make this work, but it was vital.)
Try glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before your glTexImage2D() call.
I know, it's tempting to read binary data like this
BitmapHeader header;
BitmapInfoHeader dibheader;
/*...*/
// Read header.
fread(&header, sizeof(BitmapHeader), 1, file);
// Read info header.
fread(&dibheader, sizeof(BitmapInfoHeader), 1, file);
but you really shouldn't do it that way. Why? Because the memory layout of structures may be padded to meet alignment constraints (yes, I know about packing pragmas), the type sizes used by the compiler may not match the data sizes in the binary file, and last but not least the endianness may not match.
Always read binary data into an intermediate buffer from which you extract the fields in a well-defined way, with exactly specified offsets and types.
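A minimal sketch of that approach for the BMP headers used here (offsets follow the standard BITMAPFILEHEADER/BITMAPINFOHEADER layout, decoded little-endian like the question's ReadInteger/ReadShort):
// Read the first 54 bytes (14-byte file header + 40-byte info header) into a
// plain byte buffer, then decode the fields at fixed offsets.
unsigned char hdr[54];
if (fread(hdr, sizeof(hdr), 1, file) != 1)
    return 0;
auto u16 = [&](int o) { return (unsigned)hdr[o] | ((unsigned)hdr[o + 1] << 8); };
auto u32 = [&](int o) { return u16(o) | (u16(o + 2) << 16); };
if (hdr[0] != 'B' || hdr[1] != 'M')   // signature
    return 0;
unsigned dataOffset = u32(10);        // bfOffBits: where the pixel data starts
unsigned width      = u32(18);        // biWidth
unsigned height     = u32(22);        // biHeight
unsigned planes     = u16(26);        // biPlanes, must be 1
unsigned bpp        = u16(28);        // biBitCount, 24 for this loader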
// Allocate memory for the image data.
data = (unsigned char*)malloc(dibheader.dataSize);
If this is C++, then use the new operator. If this is C, then don't cast from void* to the l-value's type; it's bad style and may hide useful compiler warnings.
// Verify memory allocation.
if (!data)
{
free(data);
If data is NULL there is nothing to free here.
// Swap R and B because bitmaps are BGR and OpenGL uses RGB.
for (unsigned int i = 0; i < dibheader.dataSize; i += 3)
{
    B = data[i];           // Back up blue.
    data[i] = data[i + 2]; // Place red in the right place.
    data[i + 2] = B;       // Place blue in the right place.
}
OpenGL does in fact support BGR ordering directly. The format parameter for it is, surprise, GL_BGR.
// Generate texture image.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width, dibheader.height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
Well, and this misses setting the rest of the pixel store parameters. Always set every relevant pixel store parameter before doing pixel transfers; they may be left in some undesired state from a previous operation. Better safe than sorry.
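For example, a minimal sketch of pinning down the unpack state that affects this upload (all values other than the alignment are simply the GL defaults):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    // 1-byte alignment for tightly packed rows
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);   // no leftover row stride from earlier uploads
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width, dibheader.height,
             0, GL_BGR, GL_UNSIGNED_BYTE, data);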