DirectX Crash When Resizing Tiny - c++

I am trying to make my program more bulletproof. It resizes fine until I make the window extremely small.
One way to prevent that is to set a minimum window size, which I already know how to do, but I want to look deeper into the problem before doing that.
The following is where the call starts to fail:
hr=swapChain->ResizeBuffers(settings.bufferCount, settings.width, settings.height, DXGI_FORMAT_UNKNOWN, 0);
if(FAILED(hr)) return 0;
I figured it was because the buffer was too small, so I added a fail-safe buffer size, but that failed as well:
hr=swapChain->ResizeBuffers(settings.bufferCount, fallback.width, fallback.height, DXGI_FORMAT_UNKNOWN, 0);
if(FAILED(hr)) return 0;
Why does the program choke when I make the window tiny? I thought it was the buffers being too small, but that doesn't seem to be the case.
Edit:
It's been a while since I posted this, so my code has changed a lot. It now crashes with an unhandled exception when calling deviceContext->ClearRenderTargetView().
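For reference, here is a rough sketch of the resize sequence I am comparing my code against; Renderer, onResize, device and renderTargetView are illustrative names (only swapChain, settings and deviceContext appear in my snippets above). The usual rule is that every view onto the old back buffer has to be released before ResizeBuffers and recreated from the new back buffer afterwards:

// Illustrative resize helper, assuming #include <d3d11.h> and members
// IDXGISwapChain* swapChain, ID3D11Device* device, ID3D11DeviceContext* deviceContext,
// ID3D11RenderTargetView* renderTargetView (the last two names are assumptions).
bool Renderer::onResize(UINT width, UINT height)
{
    // Skip degenerate sizes (window shrunk to nothing or minimized) and keep the old buffers.
    if (width == 0 || height == 0) return true;

    // Release every reference to the old back buffer before resizing.
    deviceContext->OMSetRenderTargets(0, nullptr, nullptr);
    if (renderTargetView) { renderTargetView->Release(); renderTargetView = nullptr; }

    HRESULT hr = swapChain->ResizeBuffers(settings.bufferCount, width, height, DXGI_FORMAT_UNKNOWN, 0);
    if (FAILED(hr)) return false;

    // Recreate the render target view from the new back buffer;
    // a view that still refers to the old buffer must not be used after the resize.
    ID3D11Texture2D* backBuffer = nullptr;
    hr = swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer));
    if (FAILED(hr)) return false;
    hr = device->CreateRenderTargetView(backBuffer, nullptr, &renderTargetView);
    backBuffer->Release();
    return SUCCEEDED(hr);
}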


Exception thrown: read access violation. std::shared_ptr<>::operator-><,0>(...)->**** was 0xFFFFFFFFFFFFFFE7

Good afternoon to all! I am writing a game engine using OpenGL + Win32 / GLFW, so the project is fairly large, but I have a problem that has led me to a dead end and I can't understand what is wrong. I have a shared_ptr<Context> in the 'windows' class (it is platform-specific) that is responsible for the context (GL, D3D). Everything works fine when I launch the application and everything is drawn normally, but as soon as I move the cursor into the window, a crash occurs on any call through context; in my case it is:
context->swapBuffers();
Here a crash:
std::shared_ptr<Context>::operator-><Context,0>(...)->**** was 0xFFFFFFFFFFFFFFE7.
Then I looked at the call stack and saw that context itself is non-null, so the call should have gone through.
I then spent a long time searching for the cause and found that the Win32 message-processing code is involved: when I remove it, the calls through context-> no longer crash. I stripped everything unnecessary out of the loop and left only the basic functions, because I thought some other function inside it was causing the problem, but that was not it.
while (PeekMessageW(&msg, NULL, NULL, NULL, PM_REMOVE) > 0) {
    TranslateMessage(&msg);
    DispatchMessageW(&msg);
}
That is, when I delete TranslateMessage() and DispatchMessage(), the error goes away, and at that point I was completely confused; I don't understand what is happening. I even wondered whether the operating system itself somehow affects that pointer, forbids reading it or something like that.
Then I paid attention to __vtptr in the call stack and noticed that it is nullptr, and moreover that it has type void**. And the strangest thing is that the error shows ->**** was 0xffffffffffffffc7, four dereferences in a row. What is that?
I realize I have posted almost no code, but the project is big and I don't think it makes sense to dump all of it, so I have tried to explain the problem by roughly showing what happens in my code. I will be grateful to everyone who can help :)
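To show roughly what the message handling touches, here is a simplified sketch of the usual Win32 wiring; the Window and Context types below are illustrative stand-ins, not my real classes. DispatchMessageW ends up in a WndProc, which typically recovers the owning object from GWLP_USERDATA; if that stored pointer is stale or was never set for this window, anything done through it can corrupt the object that holds the shared_ptr, which would match a crash that only appears once messages start arriving (e.g. WM_SETCURSOR / WM_MOUSEMOVE when the cursor enters the window):

// Illustrative stand-ins, not the real classes from the project.
#include <windows.h>
#include <memory>

struct Context { void swapBuffers() {} };

struct Window {
    std::shared_ptr<Context> context;

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        // The owning Window* is usually stored with SetWindowLongPtrW(hwnd, GWLP_USERDATA, ...)
        // at creation time; if it was never set, or points at a moved/destroyed object,
        // anything done through 'self' can trash the object that holds the shared_ptr.
        auto* self = reinterpret_cast<Window*>(GetWindowLongPtrW(hwnd, GWLP_USERDATA));
        if (self == nullptr)
            return DefWindowProcW(hwnd, msg, wp, lp);

        // ... handle WM_SETCURSOR, WM_MOUSEMOVE, WM_SIZE, etc., possibly touching self->context ...
        return DefWindowProcW(hwnd, msg, wp, lp);
    }
};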

Where to look for Segmentation fault?

My program only sometimes gets a Segmentation fault: 11 and I can't figure it out for the life of me. I don't know a whole lot in the realm of C++ and pointers, so what kinds of things should I be looking for?
I know it might have to do with some function pointers I'm using.
My question is what kinds of things produce Segmentation faults? I'm desperately lost on this and I have looked through all the code I thought could cause this.
The debugger I'm using is lldb and it shows the error being in this code segment:
void Player::update() {
    // if there is a smooth animation waiting, do this one
    if (queue_animation != NULL) {
        // once current animation is done,
        // switch it with the queue animation and make the queue NULL again
        if (current_animation->Finished()) {
            current_animation = queue_animation;
            queue_animation = NULL;
        }
    }
    current_animation->update(); // <-- debug says program halts on this line
    game_object::update();
}
current_animation and queue_animation are both pointers to class Animation.
Also note that Animation::update() calls a function pointer that was passed to Animation in the constructor.
If you need to see all of the code, it's over here.
EDIT:
I changed the code to use a bool:
void Player::update() {
    // if there is a smooth animation waiting, do this one
    if (is_queue_animation) {
        // once current animation is done,
        // switch it with the queue animation and make the queue NULL again
        if (current_animation->Finished()) {
            current_animation = queue_animation;
            is_queue_animation = false;
        }
    }
    current_animation->update();
    game_object::update();
}
It didn't help anything because I still sometimes get a Segmentation fault.
EDIT 2:
Modified code to this:
void Player::update() {
    // if there is a smooth animation waiting, do this one
    if (is_queue_animation) {
        std::cout << "queue" << std::endl;
        // once current animation is done,
        // switch it with the queue animation and make the queue NULL again
        if (current_animation->Finished()) {
            if (queue_animation != NULL) // make sure this is never NULL
                current_animation = queue_animation;
            is_queue_animation = false;
        }
    }
    current_animation->update();
    game_object::update();
}
I did this just to see when the function runs without any user input. Every time I got a segmentation fault, "queue" was printed twice right before the fault. This is my debug output:
* thread #1: tid = 0x1421bd4, 0x0000000000000000, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
frame #0: 0x0000000000000000
error: memory read failed for 0x0
Some causes of segmentation faults (a minimal illustration of each follows the list):
You dereference a pointer that is uninitialized or that points to NULL
You dereference a deleted pointer
You write outside the bounds of allocated memory (e.g. past the last element of an array)
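A minimal sketch of those three cases (not taken from your code); all of them are undefined behaviour, which is exactly why the crash can be intermittent:

#include <cstring>

int main() {
    int* p = nullptr;
    // *p = 1;                 // 1. dereferencing a NULL (or uninitialized) pointer

    int* q = new int(42);
    delete q;
    // *q = 2;                 // 2. using a pointer after delete (dangling pointer)

    int arr[4];
    std::memset(arr, 0, sizeof(arr));
    // arr[4] = 3;             // 3. writing past the end of the allocated array

    return 0;
}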
Run valgrind with your software (warning: it really slows things down). It's likely that memory has been overwritten in some way, and Valgrind (and other tools) can help track down some of these kinds of issues, but not everything.
If it's a large program, this can get very difficult, because anything can corrupt anything else in memory, so everything is suspect. You might try to minimize the code paths that run by limiting the program in some way and see if you can still make the problem happen; that helps narrow down the amount of suspect code.
If you have a previous version of the code that didn't have the problem, see if you can revert to it and then look at what changed. If you are using git, git bisect can search for the revision where the failure first appeared.
Warning: this kind of thing is the bane of C/C++ developers, which is one of the reasons that languages such as Java are considered "safer".
You might also just read through the code and see if you can find anything that looks suspicious, including possible race conditions. Hopefully this won't take too much time. I don't want to freak you out, but these kinds of bugs can be some of the most difficult to track down.
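In your specific case, the lldb output (frame #0 at 0x0000000000000000, EXC_BAD_ACCESS at address 0x0) typically means execution jumped to address zero, e.g. a call through a NULL function pointer, and you mention that Animation::update() calls a function pointer passed in via the constructor. A stripped-down sketch (hypothetical names, not your actual class) that reproduces exactly that signature:

// If the callback was never set, the call jumps to address 0.
struct Animation {
    void (*on_update)() = nullptr;   // stands in for the function pointer passed to the constructor

    void update() {
        on_update();                 // crashes here when on_update is NULL
    }
};

int main() {
    Animation a;                     // on_update left NULL
    a.update();                      // EXC_BAD_ACCESS (code=1, address=0x0), frame #0 = 0x0
    return 0;
}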

Checking and closing HANDLE

I am working with HANDLEs; the first one, nextColorFrameEvent, is an event handle and the second one is a stream handle. They are initialized in the following piece of code:
nextColorFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
hr = nui->NuiImageStreamOpen(
    NUI_IMAGE_TYPE_COLOR,
    NUI_IMAGE_RESOLUTION_640x480,
    0,
    2,
    nextColorFrameEvent,
    &videoStreamHandle);
I want to deal with them properly on destruction without causing errors in the process. Sometimes the initializer won't have been called, so both HANDLEs are still NULL when the software shuts down. That's why I first want to check whether the HANDLEs were properly initialized and, only if they were, close them. I got my hands on the following piece of code for this:
if (nextColorFrameEvent && nextColorFrameEvent != INVALID_HANDLE_VALUE)
    CloseHandle(nextColorFrameEvent);
#ifdef QT_DEBUG
DWORD error = GetLastError();
qDebug() << error;
#endif
if (videoStreamHandle && videoStreamHandle != INVALID_HANDLE_VALUE)
    CloseHandle(videoStreamHandle);
#ifdef QT_DEBUG
error = GetLastError();
qDebug() << error;
#endif
But this is apparently incorrect: if I do not run the initializer and then close the software, this piece of code runs and gives me a 6:
Starting C:\...\Qt\build-simpleKinectController-Desktop_Qt_5_0_2_MSVC2012_64bit-Debug\debug\simpleKinectController...
6
6
C:\...\Qt\build-simpleKinectController-Desktop_Qt_5_0_2_MSVC2012_64bit-Debug\debug\simpleKinectController exited with code 0
which means:
ERROR_INVALID_HANDLE 6 (0x6) The handle is invalid.
Which suggests that CloseHandle ran anyway, despite the tests. What checks should I do to avoid closing a handle that is not a valid HANDLE?
Bonus question: if I do run the initializer, this error no longer appears when closing colorFrameEvent, but it still appears when closing videoStreamHandle:
Starting C:\...\Qt\build-simpleKinectController-Desktop_Qt_5_0_2_MSVC2012_64bit-Debug\debug\simpleKinectController...
0
6
C:\...\Qt\build-simpleKinectController-Desktop_Qt_5_0_2_MSVC2012_64bit-Debug\debug\simpleKinectController exited with code 0
Do I need a different function to close a stream handle?
nui->NuiImageStreamOpen(...) does not create a valid Windows handle for the stream; instead it creates an internal handle on the driver side.
So you cannot use the Windows API to release/close the stream handle!
To do that, just call nui->NuiShutdown(). I have not used the callback event yet, but I think it is a valid Windows handle and should be closed normally.
If you just need to change settings, you can always call nui->NuiImageStreamOpen(...) again with the new settings; there is no need to shut down first.
I would also welcome a function like nui->NuiImageStreamClose(...), because the current state of the API complicates things for long-running apps with changing sensor configurations.
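Roughly the shutdown sequence I mean, assuming nui is the INuiSensor* from your initialization code (a sketch, not tested against your project):

// Sketch: release the Kinect stream via the SDK, close only the real Windows handle.
void shutdownKinect(INuiSensor* nui, HANDLE& nextColorFrameEvent)
{
    if (nui != nullptr)
        nui->NuiShutdown();                      // the stream "handle" lives in the driver; this releases it

    // The event came from CreateEvent, so it is a genuine Windows handle.
    if (nextColorFrameEvent != NULL && nextColorFrameEvent != INVALID_HANDLE_VALUE)
    {
        CloseHandle(nextColorFrameEvent);
        nextColorFrameEvent = NULL;              // so a second call cannot double-close it
    }
}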
CreateEvent (http://msdn.microsoft.com/en-us/library/windows/desktop/ms682396(v=vs.85).aspx) returns NULL if an event was not created.
You are checking against INVALID_HANDLE_VALUE, which is not NULL.
You are probably trying to double-close a handle, and that is likely to generate ERROR_INVALID_HANDLE (6). You can't detect this with your test, because the first CloseHandle(nextColorFrameEvent); did not change nextColorFrameEvent.
The solution is to use C++ techniques, in particular RAII. There are plenty of examples around of how to use shared_ptr with a HANDLE. shared_ptr is the standard solution for running cleanup code at most once, after everyone is done with the resource, and only if anybody actually allocated it.
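A minimal sketch of that idea (an illustrative helper, not a specific library): wrap the raw HANDLE in a shared_ptr with a custom deleter, so CloseHandle runs at most once and only if there was a valid handle to begin with.

#include <windows.h>
#include <memory>

// Returns an owning wrapper, or an empty one if there is nothing worth closing.
std::shared_ptr<void> wrapHandle(HANDLE h)
{
    if (h == NULL || h == INVALID_HANDLE_VALUE)
        return nullptr;
    return std::shared_ptr<void>(h, [](void* p) { CloseHandle(p); });
}

// Usage sketch:
//   auto colorFrameEvent = wrapHandle(CreateEvent(NULL, TRUE, FALSE, NULL));
//   nui->NuiImageStreamOpen(..., colorFrameEvent.get(), &videoStreamHandle);
//   // when the last copy of colorFrameEvent goes away, CloseHandle runs exactly once;
//   // if CreateEvent failed, the wrapper is empty and nothing is closed.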
There is a good way of debugging that I'm particularly fond of, despite it being written entirely in macros, which are nasty, but in this case they work wonders:
Zed's Awesome Debug Macros
There are a couple of things I like to change, though. They make extensive use of goto, which I tend to avoid, especially in C++ projects, because it keeps you from declaring variables mid-code (you can't jump over initializations). That is why I use exit(-1) instead, or, in some projects, I adapt the code to C++ try/throw/catch. Since you are working with HANDLEs, a good approach would be to set a flag and tell the program to close itself cleanly.
Here is what I mean. Take this piece of code from the macros (I assume you will read the exercise and familiarize yourself with them):
#define check(A, M, ...) if(!(A)) { log_err(M, ##__VA_ARGS__); errno=0; goto error; }
I'd change
goto error;
to something like
error = true;
The syntax inside the program would be something like the following; I took it from a multithreaded program I'm writing myself:
pRSemaphore = CreateSemaphore(NULL, 0, MAX_TAM_ARQ, L"p_read_semaphore");
check(pRSemaphore, "Impossible to create semaphore: %d\n", GetLastError());
As you can see, GetLastError is only called when pRSemaphore comes back NULL. There are somewhat fancy mechanisms behind the macro (at least they are fancy to me), but they are hidden behind the check wrapper, so you needn't worry about them.
The next step is to handle the error with something like:
inline void ExitWithError() { // requires <cstdlib> for exit()
    // close all handles
    // tell other related processes to do the same if necessary
    exit(-1);
}
Or you could just call it from inside the macro, like:
#define check(A, M, ...) if(!(A)) { log_err(M, ##__VA_ARGS__); errno=0; ExitWithError(); }
Hope I could be of some help.

Memory error using OpenGL "glTexImage2D"

I've been following this tutorial on OpenGL and C++:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=06
...and I've found myself facing quite the error. Whenever I compile and run it, my program crashes with an error of the type System.AccessViolationException. I've isolated the problem to this function call:
glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[0]->sizeX, TextureImage[0]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[0]->data);
In case you don't want to look through that tutorial, the memory appears to be set up like so:
AUX_RGBImageRec *TextureImage[1];
memset(TextureImage,0,sizeof(void *)*1);
Any help would be awesome. Thanks.
You're crashing because TextureImage[0] is NULL. The initial memset there sets it to NULL; if you follow along in the tutorial, the next line of code is this:
if (TextureImage[0]=LoadBMP("Data/NeHe.bmp"))
Note carefully that there is a single = sign here, not the double == you'd normally see (you may even get a compiler warning here; to suppress it, add an extra pair of parentheses around the assignment). Make sure you copied this line of code correctly and that you have a single = here.
If you do in fact have a single =, then check that LoadBMP is returning a non-NULL value. If it is returning NULL, the most likely cause is that it can't find the bitmap file Data/NeHe.bmp, either because it doesn't exist or because the program is looking for it in the wrong directory. Make sure your current working directory is set up correctly so that it can find the image.
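For reference, the guarded version of that pattern looks roughly like this (LoadBMP and AUX_RGBImageRec come from the tutorial's glaux-based loader):

AUX_RGBImageRec *TextureImage[1];
memset(TextureImage, 0, sizeof(void *) * 1);          // TextureImage[0] starts out NULL

if ((TextureImage[0] = LoadBMP("Data/NeHe.bmp")))     // single '=': assign, then test the result
{
    // Safe: TextureImage[0] is non-NULL here, so ->sizeX/->sizeY/->data are readable.
    glTexImage2D(GL_TEXTURE_2D, 0, 3,
                 TextureImage[0]->sizeX, TextureImage[0]->sizeY,
                 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[0]->data);
}
else
{
    // LoadBMP returned NULL (missing file or wrong working directory);
    // calling glTexImage2D with TextureImage[0]->data here would be the access violation.
}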
Turns out the bitmap I was trying to load was too large. I shrunk it to 256x256px and it worked perfectly.

Passing D3DFMT_UNKNOWN into IDirect3DDevice9::CreateTexture()

I'm kind of wondering about this: if you create a texture in memory in DirectX with the CreateTexture function:
HRESULT CreateTexture(
    UINT Width,
    UINT Height,
    UINT Levels,
    DWORD Usage,
    D3DFORMAT Format,
    D3DPOOL Pool,
    IDirect3DTexture9** ppTexture,
    HANDLE* pSharedHandle
);
...and pass in the D3DFMT_UNKNOWN format, what is supposed to happen exactly? If I try to get the surface of the first or second level, will it cause an error? Can it fail? Will the graphics device just pick a format of its own choosing? Could this cause problems between different graphics card models/brands?
I just tried it out, and it does not fail, mostly.
When Usage is set to D3DUSAGE_RENDERTARGET or D3DUSAGE_DYNAMIC, it consistently came out as D3DFMT_A8R8G8B8, no matter what I did to the back buffer format or other settings. I don't know whether that depends on my graphics card. My guess is that specifying unknown means "pick for me", and that the 32-bit format is the easiest for my card.
When the usage was D3DUSAGE_DEPTHSTENCIL, it failed consistently.
So my best conclusion is that specifying D3DFMT_UNKNOWN as the format leaves the choice up to DirectX. Or perhaps it always just defaults to D3DFMT_A8R8G8B8.
Sadly, I can't confirm any of this in any documentation anywhere. :|
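For what it's worth, a sketch of the kind of probe that produces these observations (device is a hypothetical, already-created IDirect3DDevice9*; this is illustrative, not my exact test code): create the texture with D3DFMT_UNKNOWN and then ask the top mip level what format it actually received.

// Sketch of the probe; 'device' is an already-created IDirect3DDevice9*.
IDirect3DTexture9* tex = NULL;
HRESULT hr = device->CreateTexture(256, 256, 1, D3DUSAGE_RENDERTARGET,
                                   D3DFMT_UNKNOWN, D3DPOOL_DEFAULT, &tex, NULL);
if (SUCCEEDED(hr))
{
    D3DSURFACE_DESC desc;
    if (SUCCEEDED(tex->GetLevelDesc(0, &desc)))
    {
        // desc.Format holds whatever the runtime/driver picked,
        // which in my tests was D3DFMT_A8R8G8B8.
    }
    tex->Release();
}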
MSDN doesn't say, but I'm pretty sure you'd get D3DERR_INVALIDCALL as a result.
If the method succeeds, the return value is D3D_OK. If the method fails, the return value can be one of the following: D3DERR_INVALIDCALL, D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY.
I think this falls into the "undefined" category. Some drivers will fail the allocation, while others may default to something. I've never seen anything in the WDK that says this condition needs to be handled. I'm guessing that if you enable the debug DX runtime you will see an error message.