Cannot SDL_GL_DeleteContext in destructor - sdl

I am creating an SDL-OpenGL application in D. I am using the Derelict SDL binding to accomplish this.
When I am finished running my application, I want to unload SDL. To do this I run the following function:
public ~this() {
    SDL_GL_DeleteContext(renderContext);
    SDL_DestroyWindow(window);
}
For some reason, however, this gives me a vague segmentation fault (no traces in GDB) and the program exits with code -11. Can I not destroy SDL in a destructor? Do I even have to destroy SDL after use?
My constructor:
window = SDL_CreateWindow("TEST", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1280, 720, SDL_WINDOW_OPENGL | SDL_WINDOW_FULLSCREEN_DESKTOP);
if(window == null) {
    string error = to!string(SDL_GetError());
    throw new Exception(error);
}
renderContext = SDL_GL_CreateContext(window);
if(renderContext == null) {
    string error = to!string(SDL_GetError());
    throw new Exception(error);
}

Class destructors may run in a different thread than the thread where the object was created. The crash likely occurs because neither OpenGL nor SDL handles cleanup from a foreign thread properly.
Destructors of heap-allocated (GC-managed) objects are not a good way to perform cleanup, because their invocation is not guaranteed. Instead, move the code to an explicit cleanup function, or finalize the object deterministically (reference counting, or manual memory management).
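For illustration, a minimal sketch of the explicit-cleanup idea, written against the plain SDL C API (which the Derelict binding mirrors); the shutdownVideo function name is purely illustrative, and the function would be called from application code rather than from a destructor:
#include <SDL.h>

// Called explicitly once the application is done rendering, instead of
// relying on a non-deterministic destructor/finalizer.
void shutdownVideo(SDL_GLContext renderContext, SDL_Window *window)
{
    SDL_GL_DeleteContext(renderContext); // destroy the GL context first
    SDL_DestroyWindow(window);           // then the window that owned it
    SDL_Quit();                          // finally shut SDL down
}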

Related

D3D11CreateDeviceAndSwapChain Fails With E_ACCESSDENIED When Using Same HWND

If I create a window and pass the HWND to D3D11CreateDeviceAndSwapChain, it works. However, after I release the device, context, swap chain, etc. and try to repeat the process using the same HWND, D3D11CreateDeviceAndSwapChain fails with E_ACCESSDENIED. This tells me something must be holding onto the HWND, but what? I release all my global variables in the destructor of the class. Does anyone have an idea what the problem is?
~decoder()
{
    m_VertexShader->Release();
    m_VertexShader = nullptr;
    m_PixelShader->Release();
    m_PixelShader = nullptr;
    m_InputLayout->Release();
    m_InputLayout = nullptr;
    device->Release();
    device = nullptr;
    context->Release();
    context = nullptr;
    swapchain->Release();
    swapchain = nullptr;
    rendertargetview->Release();
    rendertargetview = nullptr;
    m_SamplerLinear->Release();
    m_SamplerLinear = nullptr;
    HRESULT hr = S_OK;
    hr = decoder_transform->ProcessMessage(MFT_MESSAGE_NOTIFY_END_OF_STREAM, NULL);
    hr = decoder_transform->ProcessMessage(MFT_MESSAGE_NOTIFY_END_STREAMING, NULL);
    hr = decoder_transform->ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, NULL);
    decoder_transform.Release();
    color_transform.Release();
    hr = MFShutdown();
}
While the documentation for D3D11CreateDeviceAndSwapChain does not mention why this happens, the function is essentially just a wrapper around creating a D3D11 device and a swap chain. The documentation for IDXGIFactory2::CreateSwapChainForHwnd does go into detail on why this is happening.
Because you can associate only one flip presentation model swap chain at a time with an HWND, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain. For more info about this situation, see Deferred Destruction Issues with Flip Presentation Swap Chains.
The documentation regarding Deferred Destruction Issues with Flip Presentation Swap Chains advises calling ID3D11DeviceContext::ClearState followed by ID3D11DeviceContext::Flush.
However, if an application must actually destroy an old swap chain and create a new swap chain, the application must force the destruction of all objects that the application freed. To force the destruction, call ID3D11DeviceContext::ClearState (or otherwise ensure no views are bound to pipeline state), and then call Flush on the immediate context. You must force destruction before you call IDXGIFactory2::CreateSwapChainForHwnd, IDXGIFactory2::CreateSwapChainForCoreWindow, or IDXGIFactory2::CreateSwapChainForComposition again to create a new swap chain.
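A minimal sketch of that forced-destruction sequence, reusing the member names from the question's destructor (error handling omitted; this is an outline of the documented steps, not a drop-in implementation):
// Assumes context, swapchain and rendertargetview are the question's globals
// and are still valid at this point.
void ForceSwapChainDestruction()
{
    // Unbind everything so no view keeps the old swap chain's buffers alive.
    context->ClearState();

    // Release the objects that reference the swap chain, then the swap chain itself.
    rendertargetview->Release();
    rendertargetview = nullptr;
    swapchain->Release();
    swapchain = nullptr;

    // Flush the immediate context to force the deferred destruction to happen now.
    context->Flush();

    // Only after this is it safe to call D3D11CreateDeviceAndSwapChain (or
    // IDXGIFactory2::CreateSwapChainForHwnd) again with the same HWND.
}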

scaleform 4.4.30 questions about opengl

I wrote a little demo. It is not complete, but it already runs. When it reaches bSuccess = m_pRenderHAL->InitHAL(GL::HALInitParams()); a GL error comes out:
Assert: GL error before GraphicsDeviceImmediate::Initialize (0x502).
What's the reason? Is some setting not correct?
namespace SF = Scaleform;
using namespace Scaleform;
using namespace Render;
using namespace GFx;

void initHAL()
{
    SF::SysAllocMalloc a;
    SF::GFx::System gfxInit(&a);
    SingleThreadCommandQueue* queue = new SingleThreadCommandQueue;
    //m_pCommandQueue = queue;
    Ptr<GL::HAL> m_pRenderHAL = *new GL::HAL(queue);
    //assert(m_pRenderHAL != NULL);
    queue->pHAL = m_pRenderHAL;
    bool bSuccess;
    //GLenum error = glGetError();
    bSuccess = m_pRenderHAL->InitHAL(GL::HALInitParams());
    assert(bSuccess == true);
}

int main()
{
    initHAL();
}
Under normal operation, Scaleform should not generate any OpenGL errors. When you call GL::HAL::InitHAL, it checks for any existing GL error codes. This assert is warning you that an error occurred in the current context before Scaleform was used. As alluded to in your sample, you can simply call glGetError() before calling InitHAL (and subsequently before HAL::BeginScene/HAL::Display when rendering each scene) to avoid this assert.
However, Scaleform also expects that a GL context is properly initialized on the current thread - in your example, there is no code showing this. If it isn't initialized properly, it's likely the call to glGetError (internally in Scaleform) is failing. If this is the case, you will need to make a context current before calling GL::HAL::InitHAL.
I solved this problem. Some GL error was being reported in the engine before Scaleform's InitHAL function was called, and in debug mode Scaleform reports that error. My fix was simply to call glGetError() before calling InitHAL.
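For reference, a minimal sketch of that workaround (it assumes a valid GL context is already current on the calling thread):
// Drain any stale error flags left over from earlier engine GL calls so that
// Scaleform's internal glGetError() check starts from a clean slate.
while (glGetError() != GL_NO_ERROR)
{
    // keep reading until GL_NO_ERROR is returned
}

bool bSuccess = m_pRenderHAL->InitHAL(GL::HALInitParams());
assert(bSuccess);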

Application breaks on CCriticalSection::Lock

I am upgrading an application from VC6 to VS2010 (Legacy Code). The application runs as it should in VC6 but after converting the project to VS2010 I encountered some problems.
On exiting the application, the program breaks while attempting to lock on entering a critical section.
The lock count usually alternates between -1 (unlocked) and -2 (locked), but just before the program crashes, the lock count is 0.
g_RenderTargetCriticalSection.Lock(); // Breaks here
if (g_RenderTargets.Lookup(this, pRenderTarget))
{
    ASSERT_VALID(pRenderTarget);
    g_RenderTargets.RemoveKey(this);
    delete pRenderTarget;
}
g_RenderTargetCriticalSection.Unlock();
Here is the CCriticalSection::Lock() function where ::EnterCriticalSection(&m_sect); fails. I found it strange that on failing, the lock count changes from 0 to -4??
_AFXMT_INLINE BOOL (::CCriticalSection::Lock())
{
    ::EnterCriticalSection(&m_sect);
    return TRUE;
}
If anyone has encountered anything similar to this, some insight would be greatly appreciated. Thanks in advance.
The comments indicate this is a destruction-order issue between file-scope objects. There are various ways you could address this. Since I haven't seen the rest of the code it's difficult to offer specific advice, but one idea is to make the critical section live in a shared_ptr and have your CWnd hold onto a copy so it won't be destroyed prematurely, e.g.:
std::shared_ptr<CCriticalSection> g_renderTargetCriticalSection(new CCriticalSection());
Then in your window class:
class CMyWindow : public CWnd
{
private:
    std::shared_ptr<CCriticalSection> m_renderTargetCriticalSection;

public:
    CMyWindow()
        : m_renderTargetCriticalSection(g_renderTargetCriticalSection)
    {
        // ...
    }

    ~CMyWindow()
    {
        // guaranteed to still be valid since our shared_ptr is keeping it alive
        CSingleLock lock(m_renderTargetCriticalSection.get(), TRUE);
        // ...
    }

    // ...
};
The issue was that the application's main window was being destroyed after the application's global object was destroyed. This meant that g_renderTargetCriticalSection had already been destroyed by the time the main window was being torn down.
The solution was to destroy the application's main window before its global object (CDBApp theApp) calls ExitInstance() and is destroyed.
int CDBApp::ExitInstance()
{
    LOGO_RELEASE

    // Destroy the main window before the CDBApp object (theApp) is destroyed.
    if (m_Instance.m_hWnd)
        m_Instance.DestroyWindow();

    return CWinApp::ExitInstance();
}
This code doesn't make sense:
int CDBApp::ExitInstance()
{
    LOGO_RELEASE

    // Destroy the main window before the CDBApp object (theApp) is destroyed.
    if (m_Instance.m_hWnd)
        m_Instance.DestroyWindow();

    return CWinApp::ExitInstance();
}
m_Instance is a handle, not a class, so it can't be used to call functions!

use opengl in thread

I have a library that does the OpenGL rendering and receives streams from the network.
I am writing it on a Mac, but plan to use it on Linux as well, so the window is created in Objective-C.
I start drawing in a separate thread; another thread receives and decodes the data.
I get a crash (EXC_BAD_ACCESS) in OpenGL functions, even if I only use them from a single thread.
My code:
The GLUT main:
int main(int argc, const char * argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    int win = glutGetWindow();
    glutInitWindowSize(800, 600);
    glutCreateWindow("OpenGL lesson 1");
    client_init(1280, 720, win, "192.168.0.98", 8000, 2222);
    return 0;
}
or the Objective-C version:
- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format
{
    self = [super initWithFrame:frameRect];
    if (self != nil) {
        NSOpenGLPixelFormatAttribute attributes[] = {
            NSOpenGLPFANoRecovery,
            NSOpenGLPFAFullScreen,
            NSOpenGLPFAScreenMask,
            CGDisplayIDToOpenGLDisplayMask(kCGDirectMainDisplay),
            (NSOpenGLPixelFormatAttribute) 0
        };
        _pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
        if (!_pixelFormat)
        {
            return nil;
        }
        //_pixelFormat = [format retain];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(_surfaceNeedsUpdate:)
                                                     name:NSViewGlobalFrameDidChangeNotification
                                                   object:self];
        _openGLContext = [self openGLContext];
        client_init(1280, 720, win, "192.168.0.98", 8000, 2222);
    }
    return self;
}
The client_init code:
// pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, dh_tmp);
pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, NULL);

void* ShowThread(struct drawhandle * dh)
{
    //glViewport(0, 0, dh->swidth, dh->sheight); // EXC_BAD_ACCESS
    glViewport(0, 0, 1280, 720); // EXC_BAD_ACCESS
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    //gluOrtho2D(0, dh->swidth, 0, dh->sheight);
    gluOrtho2D(0, 1280, 0, 720);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    ...
    return 0;
}
I think the problem is that there is no OpenGL context created for the thread.
How do I create one on macOS / Linux?
This thread has no current OpenGL context. Even if you did create a context earlier in the program (not visible in your snippet), it will not be current in the thread you launch.
An OpenGL context is always, with no exceptions, "current" for exactly one thread at a time. By default this is the thread that created the context. Any thread calling OpenGL must be made "current" first.
You must either create the context in this thread, or call glXMakeCurrent (Unix/Linux) or aglMakeCurrent (Mac) or wglMakeCurrent (Windows) inside ShowThread (before doing anything else related to OpenGL).
(Probably not the reason for the crash, though; see datenwolf's answer for the likely cause. Nevertheless, calling OpenGL from a thread with no current context is still wrong.)
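A minimal sketch of the GLX (Linux) variant of that advice; the extra drawhandle fields are hypothetical additions for illustration, and on macOS/Windows the glXMakeCurrent call would be replaced by [NSOpenGLContext makeCurrentContext] or wglMakeCurrent respectively:
#include <GL/glx.h>

struct drawhandle {
    Display    *dpy;              // connection to the X server
    GLXDrawable win;              // window to render into
    GLXContext  ctx;              // context created on the main thread
    int         swidth, sheight;
};

void *ShowThread(void *arg)
{
    struct drawhandle *dh = (struct drawhandle *)arg;

    // Bind the context to THIS thread before any other GL call; without it,
    // every GL function operates on no context at all.
    if (!glXMakeCurrent(dh->dpy, dh->win, dh->ctx))
        return 0;

    glViewport(0, 0, dh->swidth, dh->sheight);
    /* ... rest of the rendering code ... */

    glXMakeCurrent(dh->dpy, None, NULL); // unbind when the thread is done
    return 0;
}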
OpenGL and multithreading are on difficult terms. It can be done, but it requires some care. First and foremost, an OpenGL context can be active in only one thread at a time. And on some systems, like Windows, extension function pointers are per-context, so with different contexts in different threads you may end up with different extension function pointers, which must be provisioned for.
So there's problem number one: you've probably got no OpenGL context current on this thread. That alone should not crash on a call to a non-extension function, though; it would just do nothing.
If it really crashes on the line you indicated, then the dh pointer is invalid, for sure. It's the only explanation. A pointer in C is just some number that's interpreted in a special way. If you pass pointers around - especially as the parameter to a callback or thread function - then the object the pointer points to must not become invalid until it is guaranteed that the pointer can no longer be accessed. Which means: you must not do this with objects created on the stack, i.e. with C automatic storage.
This will break:
void foo(void)
{
    struct drawhandle dh_tmp;
    pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, &dh_tmp);
}
Why? Because the moment foo returns, the object dh_tmp becomes invalid. But &dh_tmp (the pointer to it) is just a number, and that number will not "magically" turn to zero the moment dh_tmp becomes invalid.
You must allocate the object on the heap for this to work. Of course that raises the problem of when to free the memory again.
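A minimal sketch of the heap-allocated variant, with the thread taking ownership of the handle and freeing it itself (one of several possible ownership schemes; the struct is reduced to the fields used here):
#include <pthread.h>
#include <stdlib.h>

struct drawhandle { int swidth, sheight; /* plus whatever else the thread needs */ };

void *ShowThread(void *arg)
{
    struct drawhandle *dh = (struct drawhandle *)arg;
    /* ... make a GL context current, render using dh ... */
    free(dh);   // the thread releases the handle once it is done with it
    return NULL;
}

void foo(void)
{
    // Allocated on the heap, so it stays valid after foo returns.
    struct drawhandle *dh = (struct drawhandle *)malloc(sizeof *dh);
    dh->swidth  = 1280;
    dh->sheight = 720;

    pthread_t posixThreadID;
    pthread_create(&posixThreadID, NULL, ShowThread, dh); // the thread now owns dh
}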

Uninitialized read problem

The program mostly works fine (apart from random crashes), and Memory Validator reports an uninitialized read problem in pD3D = Direct3DCreate9.
What could be the problem?
init3D.h
class CD3DWindow
{
public:
    CD3DWindow();
    ~CD3DWindow();
    LPDIRECT3D9 pD3D;
    HRESULT PreInitD3D();
    HWND hWnd;
    bool killed;
    VOID KillD3DWindow();
};
init3D.cpp
CD3DWindow::CD3DWindow()
{
    pD3D = NULL;
}

CD3DWindow::~CD3DWindow()
{
    if (!killed) KillD3DWindow();
}

HRESULT CD3DWindow::PreInitD3D()
{
    pD3D = Direct3DCreate9( D3D_SDK_VERSION ); // Here it reports a problem
    if( pD3D == NULL ) return E_FAIL;
    // Other unrelated code
}

VOID CD3DWindow::KillD3DWindow()
{
    if (killed) return;
    diwrap::input.UnCreate();
    if (hWnd) DestroyWindow(hWnd);
    UnregisterClass( "D3D Window", wc.hInstance );
    killed = true;
}
Inside the main app's .h:
CD3DWindow *d3dWin;
Inside the main app's .cpp:
d3dWin = new CD3DWindow;
d3dWin->PreInitD3D();
And here is the error report:
Error: UNINITIALIZED READ: reading register ebx
#0:00:02.969 in thread 4092
0x7c912a1f <ntdll.dll+0x12a1f> ntdll.dll!RtlUnicodeToMultiByteN
0x7e42d4c4 <USER32.dll+0x1d4c4> USER32.dll!WCSToMBEx
0x7e428b79 <USER32.dll+0x18b79> USER32.dll!EnumDisplayDevicesA
0x4fdfc8c7 <d3d9.dll+0x2c8c7> d3d9.dll!DebugSetLevel
0x4fdfa701 <d3d9.dll+0x2a701> d3d9.dll!D3DPERF_GetStatus
0x4fdfafad <d3d9.dll+0x2afad> d3d9.dll!Direct3DCreate9
0x00644c59 <Temp.exe+0x244c59> Temp.exe!CD3DWindow::PreInitD3D
c:\_work\Temp\initd3d.cpp:32
Edit: Your stack trace is very, very strange - it goes through USER32.dll, which is part of Windows.
What I might suggest is that you're linking the multi-byte Direct3D against the Unicode D3D libraries, or something like that. You shouldn't be able to cause Windows functions to trigger an error.
Your Memory Validator application is reporting false positives to you. I would ignore this error and move on.
There is no copy constructor in your class CD3DWindow. This might not be the cause, but it is the very first thing that comes to mind.
If, by any chance, anywhere in your code a temporary copy is made of a CD3DWindow instance, the destructor of that copy will destroy the window handle. Afterwards, your original will try to use that same, now invalid, handle.
The same holds for the assignment operator.
This might even work, if the memory is not overwritten yet, for some time. Then suddenly, the memory is reused and your code crashes.
So start by adding this to your class:
private:
    CD3DWindow(const CD3DWindow&);            // left unimplemented intentionally
    CD3DWindow& operator=(const CD3DWindow&); // left unimplemented intentionally
If the compiler complains, check the code it refers to.
Update: Of course, this problem might apply to all your other classes. Please read up on the "Rule of Three".
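As a hedged illustration of that rule (CD3DWindowSafe is a made-up name): a class that owns a raw resource needs its destructor, copy constructor, and copy assignment operator to agree with each other, or copying has to be forbidden as shown above.
#include <windows.h>

class CD3DWindowSafe
{
public:
    CD3DWindowSafe() : hWnd(NULL) {}

    // The destructor releases the owned handle...
    ~CD3DWindowSafe() { if (hWnd) DestroyWindow(hWnd); }

private:
    // ...so copying is forbidden: otherwise two objects would believe they own
    // the same HWND, and the copy's destructor would destroy a handle the
    // original still uses (pre-C++11 idiom: declared but never defined).
    CD3DWindowSafe(const CD3DWindowSafe&);
    CD3DWindowSafe& operator=(const CD3DWindowSafe&);

    HWND hWnd;
};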