Solved: I upgraded from MinGW 4.6.2 to 4.7.0 and it works perfectly; I guess it was just a bug.
I started to do some research on how to terminate a multithreaded application properly and found two posts (first, second) about how to use QueueUserAPC to signal other threads to terminate.
I thought I should give it a try, but the application keeps crashing when I throw the exception from the APC proc.
Code:
#include <stdio.h>
#include <windows.h>

class ExitException
{
public:
    char *desc;
    DWORD exit_code;
    ExitException(char *desc, int exit_code): desc(desc), exit_code(exit_code)
    {}
};

// I use this class to check if objects are destructed upon termination
class Test
{
public:
    char *s;
    Test(char *s): s(s)
    {
        printf("%s ctor\n", s);
    }
    ~Test()
    {
        printf("%s dctor\n", s);
    }
};

DWORD CALLBACK ThreadProc(void *useless)
{
    try
    {
        Test t("thread_test");
        SleepEx(INFINITE, true);
        return 0;
    }
    catch (ExitException &e)
    {
        printf("Thread exits\n%s %lu", e.desc, e.exit_code);
        return e.exit_code;
    }
}

void CALLBACK exit_apc_proc(ULONG_PTR param)
{
    puts("In APCProc");
    ExitException e("Application exit signal!", 1);
    throw e;
}

int main()
{
    HANDLE thread = CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
    Sleep(1000);
    QueueUserAPC(exit_apc_proc, thread, 0);
    WaitForSingleObject(thread, INFINITE);
    puts("main: bye");
    return 0;
}
My question is: why does this happen?
I use MinGW for compilation and my OS is 64-bit.
Can this be the reason? I read that you shouldn't call QueueUserAPC from a 32-bit app for a thread which runs in a 64-bit process or vice versa, but that shouldn't be the case here.
EDIT: I compiled this with Visual Studio's C++ compiler (2010) and it worked flawlessly. Is it possible that this is a bug in gcc/MinGW?
I can reproduce the same thing with VS2005. The problem is that the compiler optimizes the catch away. Why? Because according to the C++ standard it is undefined what happens if an extern "C" function exits with an exception, so the compiler assumes that SleepEx (which is extern "C") never throws. After inlining Test::Test and Test::~Test it sees that the printf doesn't throw either, and consequently, if anything in this block exits via an exception
Test t("thread_test");
SleepEx(INFINITE,true);
return 0;
the behavior is undefined!
In MSVC the code doesn't work with the /EHsc switch in a Release build, but works with /EHa or /EHs, which tell the compiler to assume that C functions may throw. Perhaps GCC has a similar flag.
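For illustration, assuming the snippet above is saved as apc_test.cpp, the difference might look like this with MSVC (a sketch; exact behaviour depends on compiler version and optimization level):
cl /EHsc /O2 apc_test.cpp
cl /EHs /O2 apc_test.cpp
With /EHsc the compiler may assume that extern "C" functions like SleepEx never throw and optimize the catch in ThreadProc away; with /EHs the catch is kept.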
Related
Attempting to compile the following code from a command line set up with vcvarsall.bat produces a warning saying that an exception handler is used in the code but unwind semantics are not enabled unless /EHsc is specified.
Code:
#include <iostream>

int main()
{
    std::cout << "hello world" << std::endl;
    return 0;
}
The batch file:
@echo off
cl C:\Development\..\basicmath.cpp
The warning:
C:\...\ostream(746): warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
C:...\basicmath.cpp(10): note: see reference to function template instantiation 'std::basic_ostream<char,std::char_traits<char>> &std::operator <<<std::char_traits<char>>(std::basic_ostream<char,std::char_traits<char>> &,const char *)' being compiled
Lines 743-754 of <ostream> are shown below; line 746 (from the warning) is the _TRY_IO_BEGIN line:
if (!_Ok) {
    _State |= ios_base::badbit;
} else { // state okay, insert
    _TRY_IO_BEGIN
    if ((_Ostr.flags() & ios_base::adjustfield) != ios_base::left) {
        for (; 0 < _Pad; --_Pad) { // pad on left
            if (_Traits::eq_int_type(_Traits::eof(), _Ostr.rdbuf()->sputc(_Ostr.fill()))) {
                _State |= ios_base::badbit; // insertion failed, quit
                break;
            }
        }
    }
Adding /EHsc to my batch file makes the warning go away, but I would like to know why. Why does this block of code in the output stream header require /EHsc?
The Microsoft docs say /EHsc is for cleanup to prevent memory leaks; what is causing the leak, and why does it take a compiler switch to fix it instead of fixing it in the header itself (this may sound rude, but it's just ignorance)?
Edit: thank you for pointing out it's a warning and not an error.
Short answer:
Add /EHs or /EHsc to your compilation options, as the documentation suggests. It's the most portable option regarding exception handling if you ever need to run the same code on a Unix machine.
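For the batch file above, that would be, for example:
cl /EHsc basicmath.cpp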
Long answer:
There are two parts to this question. The first is why the warning occurs in iostream, and the second is what the warning means.
Why are there exceptions in iostream?
The default behaviour of streams in C++ is exceptionless - any failure is represented by setting an internal fail bit, accessible with the eof(), fail() and bad() functions. However, you can change this behaviour to throwing exceptions on failure by using the exceptions() method on a stream. You can choose which fail bits trigger exceptions, but the main point is that the throwing code must be there by the standard. The warning seems to come from analyzing exactly that - the compiler notices a possible path where a throw occurs and reports the warning.
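For example, a minimal sketch of opting in to stream exceptions (missing.txt is just a hypothetical file name):
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream file;
    // Opt in: failbit or badbit will now throw std::ios_base::failure.
    file.exceptions(std::ios_base::failbit | std::ios_base::badbit);
    try
    {
        file.open("missing.txt"); // a failed open sets failbit, which now throws
    }
    catch (const std::ios_base::failure& e)
    {
        std::cerr << "Stream error: " << e.what() << '\n';
    }
    return 0;
}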
What does the warning mean?
From the Microsoft documentation (emphasis mine):
By default (that is, if no /EHsc, /EHs, or /EHa option is specified), the compiler supports SEH handlers in the native C++ catch(...) clause. However, it also generates code that only partially supports C++ exceptions. The default exception unwinding code doesn't destroy automatic C++ objects outside of try blocks that go out of scope because of an exception.
The issue is that (for some reason) the MSVC compiler by default generates code which is wrong according to the standard: stack unwinding will not be performed when an exception is thrown, which may cause memory leaks and other unexpected behaviour.
An example of correct C++ code which has a memory leak under the default setting:
#include <iostream>
#include <stdexcept>
#include <string>

void foo()
{
    std::string str = "This is a very long string. It definitely doesn't use Small String Optimization and it must be allocated on the heap.";
    std::cout << str;
    throw std::runtime_error{"Oh no, something went wrong"};
}

int main()
{
    try
    {
        foo();
    }
    catch (std::exception&)
    {
        // str in foo() was possibly not released, because it wasn't destroyed when the exception was thrown!
    }
}
So the final answer would be:
If you plan to use structured exceptions (like divide-by-zero or invalid-memory-access errors) or use a library that uses them, use /EHa.
If you don't need to catch SEs, choose /EHs for compatibility with the C++ standard and portability.
Never leave the default; always set /EH to one alternative or the other, otherwise you will have to deal with strange behaviour when using exceptions.
That's a warning, so your current program compiles fine. But problems come up in programs such as this:
#include <exception>
#include <iostream>

struct A {
    A(int x): x(x) {
        std::cout << "Constructed A::" << x << '\n';
    }
    ~A() {
        std::cout << "Destructed A::" << x << '\n';
    }
private:
    int x;
};

void foo() {
    A a{2};
    throw std::bad_exception{};
}

int main()
{
    A a{1};
    try {
        foo();
    } catch (const std::bad_exception& ex) {
        std::cout << ex.what() << '\n';
    }
    return 0;
}
Using cl test.cpp yields the output:
Constructed A::1
Constructed A::2
bad exception
Destructed A::1
While using cl test.cpp /EHsc yields:
Constructed A::1
Constructed A::2
Destructed A::2
bad exception
Destructed A::1
This behavior is explained by the documentation for warning C4530:
When the /EHsc option isn't enabled, automatic storage objects in the
stack frames between the throwing function and the function where the
exception is caught don't get destroyed. Only the automatic storage
objects created in a try or catch block get destroyed, which can lead
to significant resource leaks and other unexpected behavior.
That explains a{2} not being destructed when the program wasn't compiled with /EHsc.
And of course,
If no exceptions can possibly be thrown in your executable, you may
safely ignore this warning.
So, for a program like
#include <cstdio>

int main()
{
    std::printf("hello world\n");
    return 0;
}
cl.exe quietly compiles.
In my application I'm using coroutine2 to generate some objects which I have to decode from a stream. These objects are generated using coroutines. My problem is that as soon as I reach the end of the stream, where std::ios_base::failure should theoretically be thrown, my application crashes under certain conditions.
The function providing this feature is implemented in C++, exported as a C function, and called from C#. This all happens in a 32-bit process on Windows 10 x64. Unfortunately it only reliably crashes when I start my test from C# in debugging mode WITHOUT the native debugger attached. As soon as I attach the native debugger, everything works as expected.
Here is a small test application to reproduce this issue:
Api.h
#pragma once
extern "C" __declspec(dllexport) int __cdecl test();
Api.cpp
#include <iostream>
#include <vector>
#include <sstream>
#include "Api.h"
#define BOOST_COROUTINES2_SOURCE
#include <boost/coroutine2/coroutine.hpp>
int test()
{
    using coro_t = boost::coroutines2::coroutine<bool>;
    coro_t::pull_type source([](coro_t::push_type& yield) {
        std::vector<char> buffer(200300, 0);
        std::stringstream stream;
        stream.write(buffer.data(), buffer.size());
        stream.exceptions(std::ios_base::eofbit | std::ios_base::badbit | std::ios_base::failbit);
        try {
            std::vector<char> dest(100100, 0);
            while (stream.good() && !stream.eof()) {
                stream.read(&dest[0], dest.size());
                std::cerr << "CORO: read: " << stream.gcount() << std::endl;
            }
        }
        catch (const std::exception& ex) {
            std::cerr << "CORO: caught ex: " << ex.what() << std::endl;
        }
        catch (...) {
            std::cerr << "CORO: caught unknown exception." << std::endl;
        }
    });
    std::cout << "SUCCESS" << std::endl;
    return 0;
}
C#:
using System;
using System.Runtime.InteropServices;

namespace CoroutinesTest
{
    class Program
    {
        [DllImport("Api.dll", EntryPoint = "test", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
        internal static extern Int32 test();

        static void Main(string[] args)
        {
            test();
            Console.WriteLine("SUCCESS");
        }
    }
}
Some details:
We are using Visual Studio 2015 (VC14) and dynamically link the C++ runtime.
The test library statically links Boost 1.63.0.
We also tried to reproduce this behaviour by calling the functionality directly from C++ and from Python. Neither test has reproduced the crash so far.
If you start the C# code with Ctrl+F5 (i.e. without the .NET debugger), everything will also be fine. Only if you start it with F5 (i.e. with the .NET debugger attached) will the Visual Studio instance crash. Also be sure not to enable the native debugger!
Note: if we don't use exceptions on the stream, everything seems to be fine as well. Unfortunately the code decoding my objects makes use of them, and therefore I cannot avoid this.
It would be amazing if you had some additional hints on what might go wrong here, or a solution. I'm not entirely sure if this is a Boost bug; it could also be the C# debugger interfering with boost-context.
Thanks in advance! Best regards, Michael
I realize this question is old but I just finished reading a line in the docs that seemed pertinent:
Windows using fcontext_t: turn off global program optimization (/GL) and change /EHsc (the compiler assumes that functions declared as extern "C" never throw a C++ exception) to /EHs (tells the compiler that functions declared as extern "C" may throw an exception).
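Applied to the Api.cpp above, that would mean building roughly like this (a sketch, using MSVC's cl; /LD produces a DLL):
cl /EHs /LD Api.cpp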
This is just a guess, but in your coroutine I think you are supposed to push a boolean to your sink (named yield in your code), and the code is not doing it.
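A minimal sketch of what I mean, based on the coroutine body above:
coro_t::pull_type source([](coro_t::push_type& yield) {
    // ... read from the stream as before ...
    yield(true); // push a value to the caller; coroutine<bool> transfers bool values
});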
While reviewing a bit of code, I came across a buggy std::terminate() handler that wasn't terminating the program, but returning. Based on the documentation for std::set_terminate(), I think this falls within the realm of undefined - or at least implementation-defined - behaviour.
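For reference, the documented requirement is that a terminate handler must end the program without returning to its caller, e.g.:
#include <cstdlib>

// A conforming handler: ends the process itself, never returns.
void goodTerminateHandler()
{
    std::abort();
}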
Under Linux, compiled using GCC, I found that cores were being dumped, which implies that some guardian angel was calling abort() or something similar on our behalf.
So I wrote the following test snippet, which confirmed my hunch. It looks like GCC or its standard library wraps std::terminate() handlers so they do terminate the program.
#include <iostream>
#include <exception>
#include <dlfcn.h>

// Compile using
//   g++ main.cpp -ldl
// Wrap abort() in my own implementation using
// dlsym(), so I can see if GCC generates code to
// call it if my std::terminate handler doesn't.
namespace std
{
    void abort()
    {
        typedef void (*aborter)();
        static aborter real_abort = 0x0;
        if (0x0 == real_abort)
        {
            void * handle = 0x0;
            handle = dlsym(RTLD_NEXT, "abort");
            if (handle)
            {
                real_abort = (aborter)(handle);
            }
        }
        std::cout << "2. Proof that GCC calls abort() if my buggy\n"
                  << "   terminate handler returns instead of terminating."
                  << std::endl;
        if (real_abort)
        {
            real_abort();
        }
    }
}

// Buggy terminate handler that returns instead of terminating
// execution via abort (or exit)
void buggyTerminateHandler()
{
    std::cout << "1. In buggyTerminateHandler." << std::endl;
}

int main(int argc, char ** argv)
{
    // Set terminate handler
    std::set_terminate(buggyTerminateHandler);
    // Raise unhandled exception
    throw 1;
}
That a compiler (or library) would wrap std::terminate() handlers seems sensible to me, so at a guess I'd assume that most compilers do something along these lines.
Can anyone advise regarding the behaviour on Windows using Visual Studio, or on OS X using GCC?
Let's consider the following three files.
tclass.h:
#include <iostream>
#include <vector>

template<typename rt>
class tclass
{
public:
    void wrapper()
    {
        // Storage is empty
        for (auto it: storage)
        {
        }
        try
        {
            thrower();
        }
        catch (...)
        {
            std::cout << "Catch in wrapper\n";
        }
    }
private:
    void thrower() {}
    std::vector<int> storage;
};
spec.cpp:
#include "tclass.h"
//The exact type does not matter here, we just need to call the specialized method.
template<>
void tclass<long double>::thrower()
{
//Again, the exception may have any type.
throw (double)2;
}
main.cpp:
#include "tclass.h"
#include <iostream>
int main()
{
tclass<long double> foo;
try
{
foo.wrapper();
}
catch(...)
{
std::cerr << "Catch in main\n";
return 4;
}
return 0;
}
I use Linux x64, gcc 4.7.2, and the files are compiled with this command:
g++ --std=c++11 *.cpp
First test: if we run the program above, it says:
terminate called after throwing an instance of 'double'
Aborted
Second test: if we comment out the for(auto it:storage) line in tclass.h, the program catches the exception in the main function. Why? Is it stack corruption caused by the attempt to iterate over the empty vector?
Third test: let's uncomment the for(auto it:storage) line again and move the method specialization from spec.cpp to main.cpp. Then the exception is caught in wrapper. How is this possible, and why does the possible memory corruption not affect this case?
I also tried to compile it with different optimization levels and with -g, but the results were the same.
Then I tried it on Windows 7 x64, VS2012 Express, compiling with the x64 version of cl.exe and no extra command-line arguments. In the first test the program produced no output, so I think it just crashed silently; that result is similar to the Linux version. In the second test it produced no output again, so that result differs from Linux. In the third test the result was similar to the Linux result.
Are there any errors in this code that could lead to such behavior? Could the results of the first test be caused by a bug in the compilers?
With your code, I get this error with gcc 4.7.1:
spec.cpp:6: multiple definition of 'tclass<long double>::thrower()'
You can correct your code by declaring the specialization in your header as:
template<> void tclass<long double>::thrower();
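In other words, tclass.h would end with something like this (a sketch):
// at the end of tclass.h, after the class definition
template<>
void tclass<long double>::thrower(); // declared here, defined in spec.cpp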
I'm working on an application written in Visual Studio 6 (I know, FML) that calls functions in a DLL using LoadLibrary and GetProcAddress. The newer code can't be compiled with VC6 and needs a newer compiler. The DLL has a few functions that construct a C++ object, and the VC6 program then uses the object through an abstract class.
This usually works just fine, but it runs into problems when the functions retrieved by GetProcAddress throw exceptions - even when the exceptions are caught within the DLL. I've noticed that this doesn't happen when the abstract class's methods throw an exception; things work normally in that case.
What am I doing wrong here? How can I make VC6 generate code to handle the exceptions properly?
Edit: Here's an example of a function that causes the program to crash:
extern "C" __declspec(dllexport) Box* getBox(const char* addr)
{
try {
return createBox(addr);
} catch (std::exception& ex) {
LOG_ERROR("Open failed: " << ex.what());
return 0;
} catch (...) {
LOG_ERROR("Error while opening.");
return 0;
}
}
You cannot do inheritance across compiler versions like that. It almost works, but exceptions and a few other things go crazy.
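A pattern that is generally safe across compiler versions is a flat C boundary: only C types, opaque handles, and error codes cross the DLL edge, and every C++ exception is caught inside the DLL. A sketch with hypothetical names:
// All C++ exceptions stay inside the DLL and are mapped to error codes.
extern "C" __declspec(dllexport) int __cdecl box_open(const char* addr, void** out_handle);
extern "C" __declspec(dllexport) int __cdecl box_do_work(void* handle);
extern "C" __declspec(dllexport) void __cdecl box_close(void* handle);
The VC6 side then checks return codes instead of relying on exception unwinding generated by a different compiler.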