Whenever I exit my program it gives me this exception "0xC0000022: A process has requested access to an object, but has not been granted those access rights."
It breaks right at the end of a function called _lock_file in _file.c.
After trying to narrow down the cause of the problem, I found that it does not crash if I remove all fclose() calls from my program and then clean and rebuild it. Even if the function is never actually called, the program still crashes. Obviously this solution is not ideal.
When I tried to use fstream instead it produced a similar crash at the start of the program.
It's also worth mentioning that my program uses SDL.
Edit: Someone requested a minimal example and this is what I came up with.
main.cpp
#include <stdio.h>   /* for fclose() */
#include <stdlib.h>
#include <SDL.h>
/*...*/
#pragma comment(lib, "SDL.lib")
#pragma comment(lib, "SDLmain.lib")
/*...*/
int main(int argc, char **argv)
{
    if (false)
        fclose(NULL);
    return 0;
}
draw.cpp
/*...*/
If I run this it will crash on exit just as I mentioned above. And yes, draw.cpp is completely commented out, but if I remove it from the project the program runs fine. All other files were removed from the project.
Edit2: In response to karlphillip, I double-checked whether it is actually running, and it seems that with this example it is actually crashing at the start.
Also it is a Win32 project.
A crash on exit usually means that the heap was corrupted during program execution. Try a memory checker to find where, for example _CrtDumpMemoryLeaks().
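For reference, a minimal sketch (my own, not from the answer) of dumping CRT heap leaks at exit in a Windows debug build:

#define _CRTDBG_MAP_ALLOC   // make the leak report include file/line information
#include <stdlib.h>
#include <crtdbg.h>

int main()
{
    /* ... program code that allocates and frees memory ... */
    _CrtDumpMemoryLeaks();   // reports leaked blocks to the debugger output window
    return 0;
}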
Are you using the same runtime library (Debug DLL, Debug, Release DLL, Release, etc.) for your main program as was used to build the SDL library? That can often (but not always) cause odd problems, and would be my first port of call when getting this sort of odd behaviour at runtime.
(If you get an LNK4098 warning when building, this is what it is trying to tell you, and you really need to fix it properly; the "solution" the text of the warning suggests is anything but.)
Another option is memory corruption. Consider running a debug build, and calling the following on startup:
_CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG)|_CRTDBG_CHECK_ALWAYS_DF);
This activates more thorough heap checking. (You might have to go and make a cup of tea when your program runs with this switched on, if it allocates lots of stuff while it's running.) If it then "crashes" in one of the memory allocation functions -- it's actually an assert, though you can't always tell -- then at some point between that call and the previous call to a memory management function, something has overwritten memory it should not have. And you can take it from there to find out what.
Edit: "_CRTDBG_REPORT_FLAG_DF" was probably intended to be "_CRTDBG_REPORT_FLAG".
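For illustration, a minimal sketch (mine, assuming a debug build so the CRT debug heap is available) of where that call would go:

#include <crtdbg.h>

int main(int argc, char **argv)
{
#ifdef _DEBUG
    // Verify the heap on every allocation and free -- slow but thorough.
    _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_CHECK_ALWAYS_DF);
#endif
    /* ... rest of the program ... */
    return 0;
}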
Crashing on exit can also be caused by static variables destructing and accessing objects that have already been cleaned up.
Check your static objects and ensure their destructors are not causing the crash.
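A contrived sketch (my own illustration, not from the answer) of the kind of shutdown-order problem meant here:

#include <cstdio>
#include <string>

struct Registry {
    Registry() : name("registry") {}
    std::string name;
};

// Imagine this lives in another translation unit: the relative destruction
// order of statics across translation units is unspecified.
Registry g_registry;

struct Client {
    ~Client() {
        // If g_registry has already been destroyed, this reads a dead object
        // during shutdown and can crash "on exit".
        std::printf("%s\n", g_registry.name.c_str());
    }
};
Client g_client;

int main() { return 0; }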
How do you know your application is being executed in the first place? Add a debug print right after main() is called:
#include <stdio.h>   /* for printf() and fclose() */
#include <stdlib.h>
#include <SDL.h>
/*...*/
#pragma comment(lib, "SDL.lib")
#pragma comment(lib, "SDLmain.lib")
/*...*/
int main(int argc, char **argv)
{
    printf("dbg1\n");
    if (false)
        fclose(NULL);
    printf("dbg2\n");
    return 0;
}
What kind of project are you creating? Console, Win32 or something else?
I find this post very interesting.
Related
Using Visual Studio 2019 Professional on Windows 10 x64. I have several C++ DLL projects, some of which are multi-threaded. I'm using CRITICAL_SECTION objects for thread safety.
In DLL1:
CRITICAL_SECTION critDLL1;
InitializeCriticalSection(&critDLL1);
In DLL2:
CRITICAL_SECTION critDLL2;
InitializeCriticalSection(&critDLL2);
When I use critDLL1 with EnterCriticalSection or LeaveCriticalSection everything is fine in both _DEBUG or NDEBUG mode. But when I use critDLL2, I get an access violation in 'ntdll.dll' in NDEBUG (though not in _DEBUG).
After popping up message boxes in NDEBUG mode, I was eventually able to track the problem down to the first use of EnterCriticalSection.
What might be causing the CRITICAL_SECTION to fail in one project but work in others? The MSDN page was not helpful.
UPDATE 1
After comparing project settings of DLL1 (working) and DLL2 (not working), I've accidentally got DLL2 working. I've confirmed this by reverting to an earlier version (which crashes) and then making the project changes (no crash!).
This is the setting:
Project Properties > C/C++ > Optimization > Whole Program Optimization
Set this to Yes (/GL) and my program crashes. Change that to No and it works fine. What does the /GL switch do and why might it cause this crash?
UPDATE 2
The excellent answer from #Acorn and comment from #RaymondChen, provided the clues to track down and then resolve the issue. There were two problems (both programmer errors).
PROBLEM 1
The assumption of Whole Program Optimization (WPO) is that the MSVC compiler is compiling "the whole program". This is an incorrect assumption for my DLL project, which internally consumes a 3rd party library and is in turn consumed by an external application written in Delphi. This setting is Yes (/GL) by default but should be No. This feels like a bug in Visual Studio, but in any case, the programmer needs to be aware of it. I don't know all the details of what WPO is meant to do, but at least for DLLs meant to be consumed by other applications, the default should be changed.
PROBLEM 2
Serious programmer error. It was a call into a 3rd party library that wrote a 128-byte ASCII string into the buffer I provided; that was the error:
// Before
// m_config::acSerial defined as "char acSerial[21]"
(void) m_pLib->GetPara(XPARA_PRODUCT_INFO, &m_config.acSerial[0]);
EnterCriticalSection(&crit); // Crash!
// After
#define SERIAL_LEN 20
// m_config::acSerial defined as "char acSerial[SERIAL_LEN+1]"
//...
char acSerial[128];
(void) m_pLib->GetPara(XPARA_PRODUCT_INFO, &acSerial[0]);
strncpy(m_config.acSerial, acSerial, min((size_t)SERIAL_LEN, strlen(acSerial))); // min, not max, so at most SERIAL_LEN chars are copied
EnterCriticalSection(&crit); // Works!
The error, now obvious, is that the 3rd party library did not copy the serial number of the device into the char* I provided...it copied 128 bytes into my char* stomping over everything contiguous in memory after acSerial. This wasn't noticed before because m_pLib->GetPara(XPARA_PRODUCT_INFO, ...) was one of the first calls into the 3rd party library and the rest of the contiguous data was mostly NULL at that point.
The problem was never to do with the CRITICAL_SECTION. My thanks to Acorn and RaymondChen ... sanity has been restored to this corner of the universe.
If your program crashes under WPO (an optimization that assumes that whatever you are compiling is the entire program), it means that either the assumption is incorrect, or that the optimizer ends up exploiting some undefined behavior that it previously did not exploit (without the optimization applied), even if the assumption is correct.
In general, avoid enabling optimizations unless you are really sure you know you meet their requirements.
For further analysis, please provide a MRE.
In the code shown below I have used all documented ways to detect an exception and produce a diagnostic. It uses the C++ try/catch keywords, catches an SEH exception with the __try/__except extension keywords, and uses the Windows AddVectoredExceptionHandler() and SetUnhandledExceptionFilter() winapi functions to install VEH/SEH filters.
Running this with Visual C++ 2003:
/GS: outputs "hello,world!" and terminates with exit code 0.
/GS-: outputs "hello,world!" and terminates with exit code 0.
Running this with Visual C++ 2013:
/GS: no output, terminates with exit code -1073740791 (0xC0000409).
/GS-: outputs "hello,world!" and terminates with exit code 0.
How do I produce a diagnostic in a VS2013 compiled program with /GS in effect?
#include "stdafx.h"
#include <Windows.h>
#define CALL_FIRST 1
#define CALL_LAST 0
LONG WINAPI MyVectoredHandler(struct _EXCEPTION_POINTERS *ExceptionInfo)
{
UNREFERENCED_PARAMETER(ExceptionInfo);
printf("MyVectoredHandler\n");
return EXCEPTION_CONTINUE_SEARCH;
}
LONG WINAPI MyUnhandledExceptionFilter(_In_ struct _EXCEPTION_POINTERS *ExceptionInfo)
{
printf("SetUnhandledExceptionFilter\n");
return EXCEPTION_CONTINUE_SEARCH;
}
void f()
{
__try
{
char p[20] = "hello,world!";
p[24] = '!';
printf("%s\n", p);
}
__except (EXCEPTION_EXECUTE_HANDLER)
{
printf("f() exception\n");
}
}
int _tmain(int argc, _TCHAR* argv[])
{
AddVectoredExceptionHandler(CALL_FIRST, MyVectoredHandler);
SetUnhandledExceptionFilter(MyUnhandledExceptionFilter);
try{
f();
}
catch (...){
printf("catched f exception\n");
}
return 0;
}
The CRT function that handles stack buffer overruns detection, __report_gsfailure(), assumes that the stack frame corruption was induced by a malware attack. Such malware traditionally messed with the fs:[0] SEH exception filters (stored on the stack frame) to get an exception handler to trigger the malware payload. One of the ways to get data to turn into executable code.
So that CRT function cannot assume that throwing an exception is safe to do. And it no longer does in the CRT included with VS2013; this behavior goes back to roughly VS2005. It will fail-fast if the OS supports it and, if not, ensures that a registered VEH/SEH exception handler cannot see the exception either. Kaboom, crash to the desktop with no diagnostic unless you have a debugger attached.
The /SAFESEH option defeats this kind of malware attack so it isn't as serious as it once was. If you are still at the stage where your code suffers from stack corruption bugs and your app is not popular enough to become the target of malware then replacing the CRT function is something you could consider.
Do talk this over with your supervisor, you never want to be personally responsible for this given the enormous liability to your client. History rarely tells the tale of what happened to the one programmer whose code put an entire corporation out of business for a month. But it surely wasn't anything pretty.
Paste this code somewhere close to your main() function:
__declspec(noreturn) extern "C"
void __cdecl __report_gsfailure() {
    RaiseException(STATUS_STACK_BUFFER_OVERRUN, EXCEPTION_NONCONTINUABLE, 0, nullptr);
}
And plan to remove it again soon.
There is no solution for the question as asked.
Overrunning an array causes undefined behaviour in standard C++, so no particular result is guaranteed. Failure to give a reliable result is not a problem with the compiler - it is permitted behaviour.
I'm aware of no implementation that guarantees any specific behaviour in response to an overrun - VS certainly doesn't. Which is hardly surprising as compilers are not required to do that (that is, essentially, the meaning of undefined behaviour). The reason that is the case is that it is often difficult to reliably or consistently detect such occurrences.
This means the only consistent way to detect an array overrun is to check that array indices are valid BEFORE using them to access an array element and take appropriate actions (e.g. throw an exception which can be caught instead of doing the bad operation). The downside is that it does not provide a simple or reliable way to catch errors in arbitrary code - short of modifying all code to do the required checks.
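As an illustration of checking the index up front (my own sketch, not part of the original answer):

#include <stdexcept>
#include <vector>

int get_element(const std::vector<int>& v, std::size_t i)
{
    if (i >= v.size())
        throw std::out_of_range("index out of range");  // catchable, unlike the overrun
    return v[i];
    // Alternatively, v.at(i) performs the same check-and-throw.
}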
I would have liked to comment on the accepted answer, but I just joined and don't have enough reputation to do that yet.
I tried the solution with Visual Studio 2017 and had to make a couple of changes to get the solution to compile.
First I had to change the signature of __report_gsfailure to match one of Microsoft's header files to fix a compilation error.
__declspec(noreturn) extern "C" void __cdecl __report_gsfailure(_In_ uintptr_t _StackCookie)
{
    RaiseException(STATUS_STACK_BUFFER_OVERRUN, EXCEPTION_NONCONTINUABLE, 0, nullptr);
}
Next I encountered a LNK2005 error, which I was able to correct by adding /FORCE:MULTIPLE to the Linker->Command Line for my project's properties.
Been stuck on this all day. The data in statsStruct becomes corrupt the very second the program starts, at line 1 of main. I do not know why; whenever I try to declare statsStruct as global it becomes this way.
EDIT: This does compile; it's just that the values in statsStruct are all messed up: the text is corrupted and the value is 323232.
extern attributes statsStruct[];
statsStruct is declared extern in a header file used by multiple cpp files, but I've deleted all source code until just statsStruct remains and I still cannot get it to work as a global. When I declare it inside a function it works with good values, but I need it global across multiple files, sharing the same value.
// tess.cpp : main project file.
#include "stdafx.h"
#include "Form1.h"
#include "test.h"

using namespace tess;

struct attributes {
    std::string stat;
    int amount;
};

attributes statsStruct[] = {{"Acc",0},
                            {"Cri",0},
                            {"Cr",0},
                            {"Crit",0},
                            {"Cr",0},
                            {"Ev",0},
                            {"An",0}};

[STAThreadAttribute]
int main(array<System::String ^> ^args)
{
    // Enabling Windows XP visual effects before any controls are created
    Application::EnableVisualStyles();
    Application::SetCompatibleTextRenderingDefault(false);

    // Create the main window and run it
    Application::Run(gcnew Form1());
    return 0;
}
I think I got a repro for this, rather by accident. It is caused by a compiler setting in your project. To fix it, right-click your project in the Solution Explorer window, Properties, General. Change the Common Language Runtime support setting from /clr:pure to /clr
I need to wave my hands a bit at the explanation. With /clr:pure in effect, the compiler is only allowed to generate pure IL and no machine code. Problem is, the CLR does not support global variables. The compiler has to pull a few stunts to emulate your attributes[] array, an unmanaged array, and get it initialized properly. This appears to be enough to confuse the debugger: it takes you to the shim for "statsStruct" instead of the actual statsStruct data. You'll indeed see garbage in the array elements. An extra pointer dereference is required, and then some; the debugger forgets to do this.
Always compile with /clr in effect if you use unmanaged declarations.
A C++ class constructor can be inlined or not. However, I found a strange situation where only an inlined class constructor avoids a Visual Studio memory crash. The example is as follows:
dll.h
#include <memory>   // for std::auto_ptr

class _declspec(dllexport) Image
{
public:
    Image();
    virtual ~Image();
};

class _declspec(dllexport) Testimage : public Image
{
public:
    Testimage();
    virtual ~Testimage();
};

typedef std::auto_ptr<Testimage> TestimagePtr;
dll.cpp
#include "dll.h"
#include <assert.h>
Image::~Image()
{
std::cout<<"Image is being deleted."<<std::endl;
}
Image::Image()
{
}
Testimage::Testimage()
{
}
Testimage::~Testimage()
{
std::cout<<"Geoimage is being deleted."<<std::endl;
}
The dll library is compiled as a dynamic library, and it is statically linked to the C++ runtime library (Multi-threaded Debug (/MTd)). The executable program that runs the library is as follows:
int main()
{
    TestimagePtr my_img(new Testimage());
    return 0;
}
The executable program will invoke the dll library and it also statically links the runtime library. The problem I have is that when running the executable program the following error message appears:
However, when the class constructor in the dll is inlined, as the following code shows:
class _declspec(dllexport) Image
{
public:
    Image();
    virtual ~Image();
};

class _declspec(dllexport) Testimage : public Image
{
public:
    Testimage()
    {
    }
    virtual ~Testimage();
};
the crash disappears. Could someone explain the reason behind this? Thanks! By the way, I am using VC2010.
EDIT: The following situation also triggers the same crash.
Situation 1
int main()
{
    //TestimagePtr my_img(new Testimage());
    Testimage *p_img;
    p_img = new Testimage();
    delete p_img;
    return 0;
}
it is statically linked to the C++ runtime library (Multi-threaded Debug (/MTd))
This is a very problematic scenario in versions of Visual Studio prior to VS2012. The issue is that you have more than one version of the CRT loaded in your process. One used by your EXE, another used by the DLL. This can cause many subtle problems, and not so subtle problems like this crash.
The CRT has global state, stuff like errno and strtok() cannot work properly when that global state is updated by one copy of the CRT and read back by another copy. Relevant to your crash, a hidden global state variable is the heap that the CRT uses to allocate memory from. Functions like malloc() and ::operator new use that heap.
This goes wrong when objects are allocated by one copy of the CRT and released by another. The pointer that's passed to free() or ::operator delete belongs to the wrong heap. What happens next depends on your operating system. A silent memory leak in XP. In Vista and up, your program runs with the debug version of the memory manager enabled, which triggers a breakpoint when you have a debugger attached to your process to tell you that there's a problem with the pointer. The dialog in your screenshot is the result. It isn't otherwise very clear to me how inlining the constructor could make a difference; the fundamental issue, however, is that your code invokes undefined behavior. Which has a knack for producing random outcomes.
There are two approaches available to solve this problem. The first one is the simple one, just build both your EXE and your DLL project with the /MD compile option instead. This selects the DLL version of the CRT. It is now shared by both modules and you'll only have a single copy of the CRT in your process. So there is no longer a problem with having one module allocating and another module releasing memory, the same heap is used.
This will work fine to solve your problem but can still become an issue later. A DLL tends to live a life of its own and may some day be used by another EXE that was built with a different version of the CRT. The CRT will now again not be shared since they'll use different versions of the DLL, invoking the exact same failure mode you are seeing today.
The only way to guarantee that this cannot happen is to design your DLL interface carefully. And ensure that there will never be a case where the DLL allocates memory that the client code needs to release. That requires giving up on a lot of C++ goodies. You for example can never write a function that returns a C++ object, like std::string. And you can never allow an exception to cross the module boundary. You are basically down to a C-style interface. Note how COM addresses this problem by using interface-based programming techniques and a class factory plus reference counting to solve the memory management problem.
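A rough sketch of what such a C-style interface can look like (the names here are made up for illustration): the DLL both allocates and releases the object, so the client never frees memory that came from the other module's heap.

// dll_api.h -- hypothetical exported interface with C linkage
extern "C" {
    struct ImageHandle;                         // opaque to the client
    ImageHandle* Image_Create(void);            // allocated inside the DLL
    void Image_Destroy(ImageHandle* handle);    // released by the same DLL
}

// client code in the EXE:
//   ImageHandle* img = Image_Create();
//   /* ... use it ... */
//   Image_Destroy(img);   // never 'delete img' on the EXE side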
VS2012 has a counter-measure against this problem: it has a CRT version that allocates from the default process heap. That solves this particular problem, but is not otherwise a workaround for the global state issue for other runtime functions. And it adds some new problems; a DLL compiled with /MT that gets unloaded without releasing all of its allocations now causes an unpluggable leak, for example.
This is an ugly problem in C++; the language fundamentally lacks an ABI specification that addresses problems like this. The notion of modules is entirely missing from the language specification. It is being worked on today but not yet completed. Not simple to do; it is solved in other languages like Java and the .NET languages by specifying a virtual machine, providing a runtime environment where memory management is centralized. Not the kind of runtime environment that excites C++ programmers.
I tried to reproduce your problem in VC2010 and it doesn't crash. It works whether the constructor is inline or not. Your problem is probably not in what you have written here.
Your project is too hard to open, as it seems to have its file paths set as absolute, probably because it was generated with CMake. (So the files are not found by the compiler.)
The problem I see in your code is that you declare the exported classes with _declspec(dllexport) directly written.
You should have a #define to do this, and its value should be _declspec(dllimport) when the header is read while compiling the exe. Maybe the problem comes from that.
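The usual pattern looks roughly like this (a sketch; the macro names are placeholders, not from the question's code):

// dll.h
#ifdef DLL_EXPORTS                        // defined only while building the DLL
#  define DLL_API __declspec(dllexport)
#else                                     // consumers of the header import instead
#  define DLL_API __declspec(dllimport)
#endif

class DLL_API Image
{
public:
    Image();
    virtual ~Image();
};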
Is there any way a program can crash before main()?
With gcc, you can tag a function with the constructor attribute (which causes the function to be run before main). In the following program, premain will be called before main:
#include <stdio.h>

void premain() __attribute__ ((constructor));

void premain()
{
    fputs("premain\n", stdout);
}

int main()
{
    fputs("main\n", stdout);
    return 0;
}
So, if there is a crashing bug in premain you will crash before main.
Yes, at least under Windows. If the program utilizes DLLs they can be loaded before main() starts. The DllMain functions of those DLLs will be executed before main(). If they encounter an error they could cause the entire process to halt or crash.
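For illustration, a minimal sketch (mine): a DLL whose DllMain fails during process attach stops the dependent EXE before its main() is ever reached.

#include <windows.h>

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    if (fdwReason == DLL_PROCESS_ATTACH)
    {
        // A crash here, or returning FALSE, aborts loading of any EXE that
        // depends on this DLL -- before that EXE's main() runs.
    }
    return TRUE;
}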
The simple answer is: Yes.
More specifically, we can differentiate between two causes for this. I'll call them implementation-dependent and implementation-independent.
The one case that doesn't depend on your environment at all is that of static objects in C++, which was mentioned here. The following code dies before main():
#include <iostream>

class Useless {
public:
    Useless() { throw "You can't construct me!"; }
};

static Useless object;

int main() {
    std::cout << "This will never be printed" << std::endl;
    return 0;
}
More interesting are the platform-dependent causes. Some were mentioned here. One that was mentioned here a couple of times was the usage of dynamically linked libraries (DLLs on Windows, SOs on Linux, etc.) - if your OS's loader loads them before main(), they might cause your application to die before main().
A more general version of this cause covers all the things your binary's entry point does before calling main(). Usually when you build your binary there's a pretty serious block of code that's called when your operating system's loader starts to run your binary, and when it's done it calls your main(). One common thing this code does is initialize the C/C++ standard library. This code can fail for any number of reasons (shortage of some system resource it tries to allocate, for one).
One interesting way for a binary to execute code before main() on Windows is using TLS callbacks (Google will tell you more about them). This technique is usually found in malware as a basic anti-debugging trick (it used to fool OllyDbg back then; I don't know if it still does).
The point is that your question is actually equivalent to "is there a way that loading a binary would cause user code to execute before the code in main()?", and the answer is hell, yeah!
If you have a C++ program it can initialize variables and objects through functions and constructors before main is entered. A bug in any of these could cause a program to crash.
Certainly in C++: static objects with constructors will get called before main - they can die there.
Not sure about C.
Here is a sample:
class X
{
public:
    X()
    {
        char *x = 0;
        *x = 1;
    }
};

X x;

int main()
{
    return 0;
}
This will crash before main.
Any program that relies on shared objects (DLLs) being loaded before main can fail before main.
Under Linux, code in the dynamic linker library (ld-*.so) is run to resolve any library dependencies well before main. If any needed libraries cannot be located, have permissions which don't allow you to access them, aren't normal files, or don't have some symbol that the dynamic linker expected them to have when your program was linked, then this can cause failure.
In addition, each library gets to run some code when it is loaded. This is mostly because the library may need to link more libraries or may need to run some constructors (even in a C program, the libraries could have some C++ or something else that uses constructors).
In addition, standard C programs have already created the stdio FILEs stdin, stdout, and stderr. On many systems these can also be closed. This implies that they are also free()ed, which implies that they (and their buffers) were malloc()ed, which can fail. It also suggests that they may have done some other stuff to the file descriptors that those FILE structures represent, which could fail.
Other things that could possibly go wrong would be if the OS were to mess up setting up the environment variables and/or command line arguments that were passed to the program. Code before main is likely to have had to do something with this data before calling main.
Lots of things happen before main. Any of them can conceivably fail in a fatal way.
I'm not sure, but if you have a global variable like this:
static SomeClass object;

int main(){
    return 0;
}
The 'SomeClass' constructor could possibly crash the program before main is executed.
There are many possibilities.
First, we need to understand what actually goes on before main is executed:
Load of dynamic libraries
Initialization of globals
On some compilers, some functions can be explicitly made to execute at this point
Now, any of this can cause a crash in several ways:
the usual undefined behavior (dereferencing null pointer, accessing memory you should not...)
an exception thrown: since there is no catch, terminate is called and the program ends
It's really annoying of course and possibly hard to debug, and that is why you should refrain from executing code before main as much as possible, and prefer lazy initialization if you can, or explicit initialization within main.
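One common form of lazy initialization is a function-local static, which is constructed on first use rather than before main (a sketch):

#include <string>

// Instead of: static std::string g_config("constructed before main");
std::string& config()
{
    static std::string instance("loaded lazily");  // constructed on first call
    return instance;
}

int main()
{
    return config().empty() ? 1 : 0;  // first use happens here, inside main
}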
Of course, when it's a DLL failing and you can't modify it, you're in for a world of pain.
Sort of:
http://blog.ksplice.com/2010/03/libc-free-world/
If you compile without standard library, like this:
gcc -nostdlib -o hello hello.c
it won't know how to run main() and will crash.
Global and static objects in a C++ program will have their constructors called before the first statement in main() is executed, so a bug in one of the constructors can cause a crash.
This can't happen in C programs, though.
It depends what you mean by "before main", but if you mean "before any of your code in main is actually executed" then I can think of one example: if you declare a large array as a local variable in main, and the size of this array exceeds the available stack space, then you may well get a stack overflow on entry to main, before the first line of code executes.
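For example (a sketch; the exact threshold depends on the platform's default stack size, typically around 1 MB on Windows):

int main()
{
    // Roughly 8 MB of locals -- far beyond a typical 1 MB default stack, so
    // the stack can overflow on entry to main, before any statement runs.
    char buffer[8 * 1024 * 1024] = {0};
    return buffer[0];
}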
A somewhat contrived example would be:
int a = 1;
int b = 0;
int c = a / b;

int main()
{
    return 0;
}
It's unlikely that you'd ever do something like this, but if you're doing a lot of macro-magic, it is entirely possible.
class Crash
{
public:
    Crash( int* p )
    { *p = 0; }
};

static Crash static_crash( 0 );

int main()
{
}
I had faced the same issue. The root cause found was: too many local variables (huge arrays) were initialized in main, with the total size of the local variables exceeding 1.5 MB.
This results in a big jump of the stack pointer; the OS detects this jump as invalid and crashes the program, as it could be malicious.
To debug this:
1. Fire up GDB
2. Add a breakpoint at main
3. disassemble main
4. Check for sub $0xGGGGGGG,%esp
If this GGGGGG value is too high then you will see the same issue as me.
So check the total size of all the local variables in the main.
Sure, if there's a bug in the operating system or the runtime code. C++ is particularly notorious for this behaviour, but it can still happen in C.
You haven't said which platform/libc. In the embedded world there are frequently many things which run before main() - largely to do with platform setup - which can go wrong. (Or indeed if you are using a funky linker script on a regular OS, all bets are off, but I guess that's pretty rare.)
Some platform abstraction libraries override main (I personally only know of C++ libraries like Qt or ACE that do this, but maybe some C libraries do something similar as well). They provide a platform-specific entry point such as int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow ), set up some library stuff, convert the command line arguments to the normal int argc, char* argv[], and then call the normal int main(int argc, char* argv[]).
Of course, such libraries could lead to a crash when they do not implement this correctly (perhaps because of malformed command line arguments).
And for people who don't know about this, it may look like a crash before main.
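Roughly what such a wrapper looks like (a simplified sketch, not the actual SDL source; SDL's header renames the user's main to SDL_main with a macro):

#include <windows.h>
#include <stdlib.h>   // __argc, __argv (MSVC globals)

extern "C" int SDL_main(int argc, char *argv[]);   // the user's "main", renamed by the header

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    // Library and platform setup would happen here; a bug in this code looks
    // to the user like a crash "before main".
    return SDL_main(__argc, __argv);
}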
A well-suited example of crashing a program before main, fittingly for Stack Overflow, is a stack overflow:
int main() {
    char volatile stackoverflow[1000000000] = {0};
    return 0;
}
}