Memory error using OpenGL "glTexImage2D" - C++

I've been following this tutorial on OpenGL and C++:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=06
...and I've found myself facing quite the error. Whenever I compile and run it, my program crashes with a System.AccessViolationException. I've isolated the problem to this function call:
glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[0]->sizeX, TextureImage[0]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[0]->data);
In case you don't want to look through that tutorial, the memory appears to be set up like so:
AUX_RGBImageRec *TextureImage[1];
memset(TextureImage,0,sizeof(void *)*1);
Any help would be awesome. Thanks.

You're crashing because TextureImage[0] is NULL. The initial memset there sets it to NULL; if you follow along in the tutorial, the next line of code is this:
if (TextureImage[0]=LoadBMP("Data/NeHe.bmp"))
Note carefully that there is a single = sign here, not the double == you'd normally see (you may even get a compiler warning here; to suppress it, add an extra pair of parentheses around the assignment). Make sure you copied this line of code correctly and that you have a single = here.
If in fact you do have a single =, then check to make sure that LoadBMP is returning a non-NULL value. If it's returning NULL, the most likely cause is that it can't find the bitmap file Data/NeHe.bmp, either because it doesn't exist or it's looking for it in the wrong directory. Make sure your current working directory is set up correctly so that it can find the image.
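Put together, the check being described looks roughly like this (a sketch using the tutorial's LoadBMP/AUX_RGBImageRec names; the MessageBox error report is illustrative and not part of the original lesson):
AUX_RGBImageRec *TextureImage[1];
memset(TextureImage, 0, sizeof(void *) * 1);

if (TextureImage[0] = LoadBMP("Data/NeHe.bmp"))       // single '=' is intentional
{
    glTexImage2D(GL_TEXTURE_2D, 0, 3,
                 TextureImage[0]->sizeX, TextureImage[0]->sizeY,
                 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[0]->data);
}
else
{
    // LoadBMP returned NULL -- most likely Data/NeHe.bmp was not found
    // relative to the current working directory.
    MessageBox(NULL, "Could not load Data/NeHe.bmp", "Texture error", MB_OK);
}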

Turns out the bitmap I was trying to load was too large. I shrunk it to 256x256px and it worked perfectly.
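For reference, two limits are worth checking when a texture only works after shrinking it: the maximum texture size the driver reports, and (on the old OpenGL versions the tutorial targets) the requirement that dimensions be powers of two. A quick sketch of the size check, using the tutorial's AUX_RGBImageRec fields:
// Sketch: query the driver's texture-size limit before uploading.
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

if (TextureImage[0]->sizeX > maxSize || TextureImage[0]->sizeY > maxSize)
{
    // The image exceeds what the hardware supports; scale it down
    // (or reject it) before calling glTexImage2D.
}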

Related

Exception thrown: read access violation. std::shared_ptr<>::operator-><,0>(...)->**** was 0xFFFFFFFFFFFFFFE7

Good afternoon to all! I am writing a game engine using OpenGL + Win32 / GLFW, so the project is fairly large, but I have run into a problem that has led me to a dead end, and I can't work out what is wrong. I have a shared_ptr<Context> in the 'windows' class (it is platform-based) that is responsible for the context (GL, D3D). Everything works fine when I launch the application and everything is drawn normally, but as soon as I move the cursor into the window, a crash occurs on any function call through context, in my case:
context->swapBuffers();
Here a crash:
std::shared_ptr<Context>::operator-><Context,0>(...)->**** was 0xFFFFFFFFFFFFFFE7.
Then I looked at the call stack and saw that context itself is non-null, so the call should have gone through.
After searching for the problem for a long time, I found that the errors stop occurring on calls through context-> when I remove the Win32 message-processing code. I stripped everything unnecessary out of the loop and left only the basic calls, because I thought some other function inside it was causing the problem, but no:
while (PeekMessageW(&msg, NULL, NULL, NULL, PM_REMOVE) > 0) {
    TranslateMessage(&msg);
    DispatchMessageW(&msg);
}
That is, when I remove TranslateMessage() and DispatchMessage(), the error goes away. At that point I was completely confused: I simply don't understand what is happening. I even wondered whether the operating system itself somehow affects that pointer, forbids reading it, or something like that.
Then I paid attention to __vtptr in the call stack and noticed that it is nullptr, and moreover it has the type void**. Strangest of all, the error shows ->**** was 0xffffffffffffffc7, i.e. four consecutive dereferences. What is that?
I realize I have shown almost none of the code, because the project is big and it makes no sense to dump all of it, so I have tried to explain the problem by roughly describing what happens in my code. I will be grateful to anyone who can help :)
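One way to narrow a crash like this down is to check whether the pointer stored in the shared_ptr is still intact after the messages have been dispatched. This is only a debugging sketch, assuming 'context' is the member described above (it requires <cassert>); if either assert fires, something inside a message handler is destroying or overwriting the object that owns the shared_ptr:
// Debugging sketch only: 'context' is the std::shared_ptr<Context> member
// described in the question; requires <cassert>.
Context *before = context.get();
long useBefore = context.use_count();

MSG msg;
while (PeekMessageW(&msg, NULL, NULL, NULL, PM_REMOVE) > 0) {
    TranslateMessage(&msg);
    DispatchMessageW(&msg);
}

assert(context.get() == before);          // stored pointer was overwritten?
assert(context.use_count() == useBefore); // ownership was released somewhere?
context->swapBuffers();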

Igraph eigenvector centrality Run-Time error - C++

I'm writing a program in C++ that needs to generate graphs and calculate some measures. I'm working with Visual Studio 2013 and the igraph C library. At this point I can create graphs from custom data and calculate some metrics like betweenness and closeness centrality, but when I try to calculate eigenvector centrality, the program crashes and shows me this message:
"Run-Time Check Failure #3 - The variable 'tgetv0' is being used without being initialized."
The tgetv0 variable is used inside dgetv.c from the igraph source.
Here is my code:
void GraphObject::calcEigen()
{
    igraph_arpack_options_t options;
    igraph_real_t value;
    igraph_vector_t weights;

    igraph_vector_init(&weights, igraph_ecount(&cGraph));  // cGraph is already created.
    igraph_vector_init(&eigenRes, igraph_vcount(&cGraph)); // All ..Res igraph_vector_t are declared in the header.
    igraph_vector_init(&betweennesRes, 0);
    igraph_vector_init(&closenessRes, 0);
    igraph_arpack_options_init(&options);

    igraph_betweenness(&cGraph, &betweennesRes, igraph_vss_all(), 0, 0, 1);
    igraph_closeness(&cGraph, &closenessRes, igraph_vss_all(), IGRAPH_ALL, 0, 1);
    igraph_eigenvector_centrality(&cGraph, &eigenRes, &value, 0, 1, &weights, &options);
}
The closeness and betweenness are calculated and "couted" correctly, but it crashes on the eigenvector function.
After a lot of research in the documentation, on the internet, and with the debugger, I can't figure out what the problem is, especially since I tried the example code from the documentation http://igraph.org/c/doc/igraph-Structural.html#igraph_eigenvector_centrality (copy/paste) and it does the same thing. Is this a library or example issue, or am I missing something?
When I init the weights vector and then call igraph_null(&weights), it works, but then all the resulting centrality values are 1, which is incorrect. What am I doing wrong?
Let us assume that Visual Studio is right and we indeed have a variable named tgetv0 that is being used uninitialized. I scanned igraph's source code and it looks like there are two places where it could indeed be the case. One of them is in src/lapack/dnaupd.c, the other one is in src/lapack/dsaupd.c. Both of these files were converted from Fortran using f2c so it is hard to tell whether the issue was present in the original Fortran code or whether this was introduced during the conversion. Either way, you can probably fix this easily by looking up the lines where tgetv0 is declared in src/lapack/dnaupd.c and src/lapack/dsaupd.c and initializing it to a value of 0. In my version, the lines to change are line 486 in src/lapack/dnaupd.c and line 482 in src/lapack/dsaupd.c.
Please add a comment to confirm whether the solution works for you or not - if it works, I'll commit a patch to the igraph source tree.
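For reference, the kind of change being described is just giving the variable a defined starting value at its declaration, roughly like this (the storage class and f2c type shown are assumptions; the exact lines depend on your igraph version):
/* src/lapack/dnaupd.c -- and similarly src/lapack/dsaupd.c (sketch only) */

/* before: declared without an initial value */
static real tgetv0;

/* after: initialized, so the run-time check no longer fires */
static real tgetv0 = 0.;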

DevIL ilLoad error 1285

I'm having an issue with loading an image with DevIL for OpenGL.
In an earlier part of my project I call:
ilInit();
In a function right after that, I call my load just like this:
//generate a texture
ilGenImages( 1, &uiTextureHandle );
//bind our image
ilBindImage( uiTextureHandle );
//load
//ilLoad( IL_PNG, (const ILstring)"fake.png" );
ilLoad( IL_PNG, "fake.png" );
For the sake of error tracking I placed ilGetError() after every call, which returned 0 for all of them except ilLoad, which returns 1285.
After some searching I figured out that this is supposedly a lack-of-memory error, so ilLoad always returns 0 and nothing gets loaded.
Does anyone know what I'm doing incorrectly in my loading, or whether I forgot to do something? I feel I might have forgotten something, and that's the reason 1285 appears.
A common reason for ilLoad() to fail with IL_OUT_OF_MEMORY is simply if the PNG file you're using is corrupt.
However, 1285 means IL_INVALID_VALUE - it means the path you're giving it is likely wrong. Try an absolute path (remembering that backslashes in C++ string literals need to be escaped as double backslashes).
I personally have used DevIL for quite some time and did like it. However, I urge you to consider FreeImage. It has a bit more development going on and is quite stable - I used it in a commercial engine for all my image needs, and it integrates decently well with DirectX/OpenGL much like DevIL.
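A quick way to test the path theory is to load from an absolute path and drain DevIL's error queue; a minimal sketch, assuming the same fake.png from the question (the absolute path is a placeholder):
#include <IL/il.h>
#include <cstdio>

int main()
{
    ilInit();

    ILuint uiTextureHandle;
    ilGenImages(1, &uiTextureHandle);
    ilBindImage(uiTextureHandle);

    // Placeholder absolute path; note the doubled backslashes.
    if (!ilLoad(IL_PNG, "C:\\full\\path\\to\\fake.png"))
    {
        // Drain the error queue: 0x505 (decimal 1285) is IL_INVALID_VALUE.
        ILenum err;
        while ((err = ilGetError()) != IL_NO_ERROR)
            printf("DevIL error: 0x%X\n", err);
    }
    return 0;
}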

DirectX Crash When Resizing Tiny

I am trying to make my program more bulletproof. My program resizes fine until I shrink the window down until it is extremely tiny.
One way to prevent that from happening is to set a minimum window size, which I already know how to do. I want to look deeper into the problem before I do that.
The following is where the functions start to crash.
hr=swapChain->ResizeBuffers(settings.bufferCount, settings.width, settings.height, DXGI_FORMAT_UNKNOWN, 0);
if(FAILED(hr)) return 0;
I figured it was because the buffer was too small, so I made a fail safe buffer size. It also failed though.
hr=swapChain->ResizeBuffers(settings.bufferCount, fallback.width, fallback.height, DXGI_FORMAT_UNKNOWN, 0);
if(FAILED(hr)) return 0;
What is the reason the program chokes when I make it tiny? I thought it was the buffers being too small, but that doesn't seem to be the case.
Edit:
It's been a while since I posted this, so my code has changed a lot. Now it crashes with an unhandled exception when calling deviceContext->ClearRenderTargetView().
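For what it's worth, a common defensive pattern here is to skip the resize entirely while the client area is collapsed to zero, and to release any views that still reference the back buffer before calling ResizeBuffers. A sketch under those assumptions ('settings', 'swapChain' and the render target view follow the question's naming; the surrounding structure is illustrative):
// Sketch only: guard against degenerate sizes and stale back-buffer views.
void OnResize(UINT width, UINT height)
{
    if (width == 0 || height == 0)
        return;                                   // minimized/collapsed window: skip

    // Views that still reference the old back buffer typically make
    // ResizeBuffers fail, so release them first.
    if (renderTargetView) { renderTargetView->Release(); renderTargetView = NULL; }

    HRESULT hr = swapChain->ResizeBuffers(settings.bufferCount, width, height,
                                          DXGI_FORMAT_UNKNOWN, 0);
    if (FAILED(hr))
        return;

    // Recreate the render target view from the new back buffer here;
    // otherwise a later ClearRenderTargetView() call uses a stale view.
}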

Passing D3DFMT_UNKNOWN into IDirect3DDevice9::CreateTexture()

I'm kind of wondering about this: if you create a texture in memory in DirectX with the CreateTexture function:
HRESULT CreateTexture(
    UINT Width,
    UINT Height,
    UINT Levels,
    DWORD Usage,
    D3DFORMAT Format,
    D3DPOOL Pool,
    IDirect3DTexture9** ppTexture,
    HANDLE* pSharedHandle
);
...and pass in the D3DFMT_UNKNOWN format, what is supposed to happen exactly? If I try to get the surface of the first or second level, will it cause an error? Can it fail? Will the graphics device just choose a format of its own choosing? Could this cause problems between different graphics card models/brands?
I just tried it out, and mostly it does not fail.
When Usage is set to D3DUSAGE_RENDERTARGET or D3DUSAGE_DYNAMIC, it consistently came out as D3DFMT_A8R8G8B8, no matter what I did to the back buffer format or other settings. I don't know if that has to do with my graphics card or not. My guess is that specifying unknown means, "pick for me", and that the 32-bit format is easiest for my card.
When the usage was D3DUSAGE_DEPTHSTENCIL, it failed consistently.
So my best conclusion is that specifying D3DFMT_UNKNOWN as the format lets DirectX choose what it should be. Or perhaps it always just defaults to D3DFMT_A8R8G8B8.
Sadly, I can't confirm any of this in any documentation anywhere. :|
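For what it's worth, you can query the texture after creation to see which format the runtime actually picked; a small sketch, assuming 'device' is a valid IDirect3DDevice9* (the size and usage values are just examples):
// Sketch: create with D3DFMT_UNKNOWN, then ask what format was chosen.
IDirect3DTexture9 *texture = NULL;
HRESULT hr = device->CreateTexture(256, 256, 1, D3DUSAGE_RENDERTARGET,
                                   D3DFMT_UNKNOWN, D3DPOOL_DEFAULT,
                                   &texture, NULL);
if (SUCCEEDED(hr))
{
    D3DSURFACE_DESC desc;
    texture->GetLevelDesc(0, &desc);
    // desc.Format now holds the concrete format the driver picked
    // (D3DFMT_A8R8G8B8 in the experiment described above).
    texture->Release();
}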
MSDN doesn't say. But I'm pretty sure you'd get "D3DERR_INVALIDCALL" as a result.
If the method succeeds, the return value is D3D_OK. If the method fails, the return value can be one of the following: D3DERR_INVALIDCALL, D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY.
I think this falls into the "undefined" category. Some drivers will fail the allocations, while others may default to something. I've never seen anything in the WDK that says that this condition needs to be handled. I'm guessing if you enable the debug DX runtime you will see an error message.