Including Boost asio/write.hpp stops exceptions from being caught?

After narrowing everything down, I see that when I include boost/asio/write.hpp, the exception won't be caught and the application terminates on Windows ("This app. has requested to terminate...").
However, when I comment out just this include line, the exception is caught as expected.
#include <stdexcept>
#include <iostream>
#include <boost/asio/write.hpp>

int main() {
    try {
        throw std::logic_error("aajj");
    } catch (std::exception &e) {
        std::cout << "Caught:" << e.what() << std::endl;
    }
}
I invoked the build using these g++ commands:
g++ -D_WIN32_WINNT=0x0601 -O0 -g3 -Wall -c -fmessage-length=0 -o "tests\\so_main.o" "..\\tests\\so_main.cpp"
g++ -static-libgcc -static-libstdc++ -Xlinker --enable-auto-import -o client.exe "tests\\so_main.o" -lboost_system-mgw45-mt-1_55 -lws2_32

Related

Weird things happen when programming in C++20 with -fmodules-ts

I have two source files, main.cxx and env.cxx, and main.cxx depends on env.cxx.
If I compile them with
c++ -fmodules-ts -std=c++20 -Wall -g env.cxx main.cxx, then everything works well.
But if I compile them with
c++ -fmodules-ts -std=c++20 -Wall -g -c env.cxx
c++ -fmodules-ts -std=c++20 -Wall -g -c main.cxx
c++ -fmodules-ts -std=c++20 -Wall -g env.o main.o
I get a segmentation fault.
I do not know whether it is a GCC bug or not, because everything works well again if I rewrite the code in C++17 style, removing the module declarations.
My compiler is GCC 11.1.0 (the newest version at the time of writing).
Here is env.cxx:
module;
#include <cstdlib>
#include <string>
#include <string_view>
#include <stdexcept>

export module env;

export namespace env
{
    std::string get_env(const char* name)
    {
        char* p = ::getenv(name);
        if (p == nullptr)
            throw std::runtime_error("environment variable not found");
        return std::string(p);
    }
    // These two functions are useless this time.
    std::string get_env(const std::string& name)
    {
        return get_env(name.data());
    }
    std::string get_env(std::string_view name)
    {
        std::string s{name};
        return get_env(s.data());
    }
}
And main.cxx:
module;
#include <iostream>
#include <fstream>

import env;
export module main;

int main(int argc, char** argv)
{
    std::cout << env::get_env("HOME") << std::endl;
    return 0;
}
Can you reproduce this?
Update 2021-06-19 15:44
In fact, both compilation approaches shown above work well; the sequence that actually produces the segfault is:
c++ -fmodules-ts -std=c++20 -Wall -g -c env.cxx
c++ -fmodules-ts -std=c++20 -Wall -g -c main.cxx
c++ -fmodules-ts -std=c++20 -Wall -g main.o env.o
Note that the order of main.o and env.o has been swapped.
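For reference, here is a sketch of what the C++17-style rewrite mentioned above could look like (the header name env.hxx is an assumption for this sketch); it simply replaces the module interface with an ordinary header:
// env.hxx (hypothetical header replacing the module interface)
#pragma once
#include <string>
#include <string_view>
namespace env
{
    std::string get_env(const char* name);
    std::string get_env(const std::string& name);
    std::string get_env(std::string_view name);
}

// env.cxx (same function bodies as above, without the module declarations)
#include "env.hxx"
#include <cstdlib>
#include <stdexcept>
std::string env::get_env(const char* name)
{
    char* p = ::getenv(name);
    if (p == nullptr)
        throw std::runtime_error("environment variable not found");
    return std::string(p);
}
std::string env::get_env(const std::string& name) { return get_env(name.data()); }
std::string env::get_env(std::string_view name) { std::string s{name}; return get_env(s.data()); }

// main.cxx
#include <iostream>
#include "env.hxx"
int main()
{
    std::cout << env::get_env("HOME") << std::endl;
    return 0;
}
Compiled as separate translation units (c++ -std=c++17 -Wall -g -c ...) and then linked, this version should behave the same regardless of the order of the object files.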

Why is this thrown object not caught when using link-time optimization?

Suppose I have a function in a shared library throwing some object, and a main executable calling that function and attempting to catch that object, e.g.
decl.h
#ifndef DECL_H
#define DECL_H
void pitch();
struct MyException {
    MyException();
    MyException(const MyException&);
    ~MyException();
};
#endif
pitch.cpp
#include "decl.h"
MyException::MyException() {}
MyException::MyException(const MyException&) {}
MyException::~MyException() {}
void pitch() {
    throw MyException();
}
main.cpp
#include <stdio.h>
#include "decl.h"
int main() {
    try { pitch(); }
    catch (MyException& e) { printf("MyException()\n"); }
    catch (...) { printf("[unknown exception]\n"); }
}
Makefile
all:
	$(CXX) -shared -fPIC $(CXXFLAGS) $(LTO) pitch.cpp -o libpitch.so
	$(CXX) -g $(CXXFLAGS) $(LTO) main.cpp -L. -lpitch
	./a.out
If I compile without link-time optimization, all is well:
CXXFLAGS="-g -O2" LTO="" make
c++ -shared -fPIC -g -O2 pitch.cpp -o libpitch.so
c++ -g -g -O2 main.cpp -L. -lpitch
./a.out
MyException()
But once I turn on link-time optimization, the thrown object is no longer caught:
CXXFLAGS="-g -O2" LTO="-flto" make
c++ -shared -fPIC -g -O2 -flto pitch.cpp -o libpitch.so
c++ -g -g -O2 -flto main.cpp -L. -lpitch
./a.out
[unknown exception]
Is this expected behavior? Is this a problem in my C++ code, or something else? This is on macOS 10.14.5:
$ c++ -v
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
This is not expected behavior.
I tested it on Linux (Ubuntu) with GCC 4.x, 5.x, 6.x, 7.x and 8.x, and with Clang 6.0. All work just fine with -O2 -flto.
Unfortunately, since I cannot reproduce it (I don't have a Mac), I can't tell you what is wrong, but in my opinion it isn't your code.
EDIT:
I can't in good conscience say there is "nothing wrong" with your code without also showing you how I'd write it. (I did test it exactly as you posted it; however, I needed to set LD_LIBRARY_PATH="." for ./a.out to work, so perhaps you are loading some other library?) Here is how I'd write the above code:
//decl.h:
#pragma once
#include <exception>

void pitch();

struct MyException : public std::exception
{
};

//pitch.cpp:
#include "decl.h"

void pitch()
{
    throw MyException();
}

// main.cpp:
#include "decl.h"
#include <iostream>

int main()
{
    try
    {
        pitch();
    }
    catch (MyException const& e)
    {
        std::cout << "MyException(): " << e.what() << std::endl;
    }
    catch (...)
    {
        std::cerr << "[unknown exception]" << std::endl;
    }
}
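A sketch of how this rewritten example can be built and run on Linux, mirroring the Makefile from the question (the LD_LIBRARY_PATH=. setting is the detail mentioned above):
c++ -shared -fPIC -g -O2 -flto pitch.cpp -o libpitch.so
c++ -g -O2 -flto main.cpp -L. -lpitch
LD_LIBRARY_PATH=. ./a.out
# should print "MyException(): " followed by the implementation-defined what() text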

Linking unwind library statically results in core dump

This code compiles with g++ using -std=c++11 and is linked with g++. When libunwind is linked in statically, the program crashes after catching in exerciseBug (after the rethrow).
LDFLAGS that result in a crash:
-Wl,-Bstatic -lunwind -Wl,-Bdynamic -llzma
When libunwind is linked in dynamically, the code works normally. LDFLAGS that result in normal operation:
-Wl,-Bstatic -Wl,-Bdynamic -lunwind -llzma
#include <cstdlib>   // for ::exit
#include <iostream>
using namespace std;

void exerciseBug()
{
    try {
        throw exception();
    } catch (exception &err) {
        cerr << "caught in exerciseBug\n";
        throw;
    }
}

int main()
{
    try {
        exerciseBug();
    } catch (exception &err) {
        cerr << "caught in main\n";
        ::exit(0);
    }
}
The question is: why does libunwind need to be linked in dynamically?
(Running on Ubuntu 16.04; g++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609)
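For completeness, here is a sketch of the full compile and link commands implied by the LDFLAGS above (the source file name unwind_bug.cpp is an assumption, not from the question):
g++ -std=c++11 -c unwind_bug.cpp -o unwind_bug.o
# crashes after the rethrow:
g++ unwind_bug.o -Wl,-Bstatic -lunwind -Wl,-Bdynamic -llzma -o unwind_bug
# works normally:
g++ unwind_bug.o -Wl,-Bstatic -Wl,-Bdynamic -lunwind -llzma -o unwind_bug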

Segmentation fault on icpc-compiled program

I am having problems understanding the segmentation fault I receive when trying to run icpc-compiled programs.
A simple example consists of the following files:
// Filename: include/lib1.h
#include <string>

namespace Lib1 {
    // Template initialization, T: int, double
    template< typename T>
    T function1( T x, T y );

    // Give me the version
    std::string VERSION(void);
}

// Filename: include/lib2.h
#include <string>

namespace Lib2 {
    // Give me the version
    std::string VERSION(void);
}

// Filename: src/main.cpp
#include <iostream>
#include <string>
#include "lib1.h"
#include "lib2.h"

int main( int argc, char* argv[] ) {
    std::cout << "Lib1::VERSION() :" << Lib1::VERSION()
              << std::endl;
    std::cout << "Lib2::VERSION() :" << Lib2::VERSION()
              << std::endl;

    double x = 1., y = 2.;
    std::cout << "Lib1::function1(x, y): "
              << Lib1::function1(x, y)
              << std::endl;
    return 0;
}

// Filename: src/lib1/lib1.cpp
#include <string>
#include "lib1.h"

template< typename T >
T Lib1::function1( T x, T y ) {
    return x * y;
}

std::string Lib1::VERSION(void) {
    return std::string("v0.0.2");
}

// Instantiation for dynamic library
template double Lib1::function1(double, double);
template int Lib1::function1(int, int);

// Filename: src/lib2/lib2.cpp
#include <string>
#include "lib2.h"

std::string Lib2::VERSION(void) {
    return std::string("v0.0.1");
}
In this simple, stupid example, when I compile the files using
clang++ -Wall -c -fPIC -I include -o liblib1.o src/lib1/lib1.cpp
clang++ -Wall -shared -o liblib1.so liblib1.o
clang++ -Wall -c -fPIC -I include -o liblib2.o src/lib2/lib2.cpp
clang++ -Wall -shared -o liblib2.so liblib2.o
clang++ -Wall -o main.out -I include -L ./ -llib1 -llib2 src/main.cpp
the program runs fine (provided that I modify my LD_LIBRARY_PATH environment variable properly). However, when I use
icpc -Wall -c -fPIC -I include -o liblib1.o src/lib1/lib1.cpp
icpc -Wall -shared -o liblib1.so liblib1.o
icpc -Wall -c -fPIC -I include -o liblib2.o src/lib2/lib2.cpp
icpc -Wall -shared -o liblib2.so liblib2.o
icpc -Wall -o main.out -I include -L ./ -llib1 -llib2 src/main.cpp
then the program gives me:
[1] 27397 segmentation fault (core dumped) LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./main.out
I would appreciate it if you could help me understand and solve this problem. When I did some research on the web, I came across some sources talking about memory access problems and such, but I am not doing anything fancy here. Moreover, I tried using ddd (I am not fluent in gdb) and running the program there, but the program exits with the segfault immediately after start; I cannot even trace it (yes, prior to running ddd, I compiled with the -debug -g switches).
It turned out that Intel Parallel Studio v16.0.3 has known issues on Ubuntu and Arch Linux, and unfortunately these systems are not officially supported either.
One quick workaround for now seems to be downgrading to v16.0.2.

How to pass arguments to a method loaded from a static library in CPP

I'm trying to build a static library from one C++ file and use it from another C++ program. The first file is hello.cpp:
#include <iostream>
#include <string.h>
using namespace std;

extern "C" void say_hello(const char* name) {
    cout << "Hello " << name << "!\n";
}

int main(){
    return 0;
}
Then I made a static library from this code, hello.a, using this command:
g++ -o hello.a -static -fPIC hello.cpp -ldl
Here's the second C++ code to use the library, say_hello.cpp:
#include <iostream>
#include <string>
#include <dlfcn.h>
using namespace std;

int main(){
    void* handle = dlopen("./hello.a", RTLD_LAZY);
    cout << handle << "\n";
    if (!handle) {
        cerr << "Cannot open library: " << dlerror() << '\n';
        return 1;
    }

    typedef void (*hello_t)();
    dlerror(); // reset errors
    hello_t say_hello = (hello_t) dlsym(handle, "say_hello");
    const char *dlsym_error = dlerror();
    if (dlsym_error) {
        cerr << "Cannot load symbol 'say_hello': " << dlsym_error << '\n';
        dlclose(handle);
        return 1;
    }

    say_hello("World");
    dlclose(handle);
    return 0;
}
Then I compiled say_hello.cpp using:
g++ -W -ldl say_hello.cpp -o say_hello
and ran ./say_hello in the command line. I expected to get Hello World! as output, but I got this instead:
0x8ea4020
Hello ▒▒▒▒!
What is the problem? Is there some trick needed to make the function's arguments compatible, like what we do with ctypes?
If it helps, I am on Debian Lenny.
EDIT 1:
I have changed the code and used a dynamic library, 'hello.so', which I've created using this command:
g++ -o hello.so -shared -fPIC hello.cpp -ldl
The 6th line of the code changed to:
void* handle = dlopen("./hello.so", RTLD_LAZY);
When I tried to compile say_hello.cpp, I got this error:
say_hello.cpp: In function ‘int main()’:
say_hello.cpp:21: error: too many arguments to function
I also tried to compile it using this line:
g++ -Wall -rdynamic say_hello.cpp -ldl -o say_hello
But the same error was raised. So I removed the argument "World", and then it compiled without errors; but when I run the executable, I get the same garbled output as I mentioned before.
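As an aside, the "too many arguments" error comes from the function-pointer typedef in the code above: typedef void (*hello_t)(); declares a function taking no parameters, so the call with "World" cannot compile. A minimal sketch of the change that lets it compile (essentially the same idea as the hello_sig typedef used in the answer below):
typedef void (*hello_t)(const char*);              // declare the parameter
hello_t say_hello = (hello_t) dlsym(handle, "say_hello");
say_hello("World");                                // now matches the loaded function's signature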
EDIT 2:
Based on Basile Starynkevitch's suggestions, I changed my say_hello.cpp code to this:
#include <iostream>
#include <string>
#include <dlfcn.h>
using namespace std;

int main(){
    void* handle = dlopen("./hello.so", RTLD_LAZY);
    cout << handle << "\n";
    if (!handle) {
        cerr << "Cannot open library: " << dlerror() << '\n';
        return 1;
    }

    typedef void hello_sig(const char *);
    void* hello_ad = dlsym(handle, "say_hello");
    if (!hello_ad){
        cerr << "dlsym failed:" << dlerror() << endl;
        return 1;
    }

    hello_sig* fun = reinterpret_cast<hello_sig*>(hello_ad);
    fun("from main");

    fun = NULL;
    hello_ad = NULL;
    dlclose(handle);
    return 0;
}
Before that, I used the line below to make the .so file:
g++ -Wall -fPIC -g -shared hello.cpp -o hello.so
Then I compiled say_hello.cpp with this command:
g++ -Wall -rdynamic -g say_hello.cc -ldl -o say_hello
And then ran it using ./say_hello. Now everything works correctly. Thanks to Basile Starynkevitch for being patient with my problem.
Functions never have null addresses, so if dlsym returns NULL for a function name (or indeed for any name defined in C++ or C), the lookup has failed; you can therefore test the result directly:
hello_t say_hello = (hello_t) dlsym(handle, "say_hello");
if (!say_hello) {
    cerr << "Cannot load symbol 'say_hello': " << dlerror() << endl;
    exit(EXIT_FAILURE);
};
And dlopen(3) is documented to dynamically load only dynamic libraries (not static ones!). This implies shared objects (*.so) in ELF format. Read Drepper's paper How To Use Shared Libraries.
I believe you might have found a bug in dlopen (see also its POSIX specification): it should fail for a static library such as hello.a; it is meant to be used on position-independent shared libraries (like hello.so).
You should dlopen only position-independent shared objects, compiled with
g++ -Wall -O -shared -fPIC hello.cpp -o hello.so
or if you have several C++ source files:
g++ -Wall -O -fPIC src1.cc -c -o src1.pic.o
g++ -Wall -O -fPIC src2.cc -c -o src2.pic.o
g++ -shared src1.pic.o src2.pic.o -o yourdynlib.so
You could remove the -O optimization flag, add -g for debugging, or replace it with -O2 if you want.
This works extremely well: my MELT project (a domain-specific language to extend GCC) uses this a lot (generating C++ code, forking a compilation like the above on the fly, then dlopen-ing the resulting shared object). And my manydl.c example demonstrates that you can dlopen a very large number of (different) shared objects on Linux (typically millions, and hundreds of thousands at least); the real limitation is the address space.
BTW, you should not dlopen something having a main function, since main is by definition defined in the main program calling (perhaps indirectly) dlopen.
Also, the order of arguments to g++ matters a lot; you should compile the main program with
g++ -Wall -rdynamic say_hello.cpp -ldl -o say_hello
The -rdynamic flag is required to let the loaded plugin (hello.so) call functions from inside your say_hello program.
For debugging purposes always pass -Wall -g to g++ above.
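To illustrate the -rdynamic point above, here is a minimal sketch (the file names callback_plugin.cc and callback_main.cc and the function names are invented for this example): the plugin calls report(), which is defined in the executable, and the plugin can only be loaded if the executable exported its symbols with -rdynamic:
// callback_plugin.cc -- the plugin references report(), which lives in the executable
extern "C" void report(const char* msg);           // resolved when the plugin is loaded
extern "C" void plugin_entry(void) {
    report("hello from the plugin");
}

// callback_main.cc -- the executable; -rdynamic exports report() to plugins
#include <cstdio>
#include <cstdlib>
#include <dlfcn.h>
extern "C" void report(const char* msg) { std::printf("%s\n", msg); }
int main() {
    // RTLD_NOW forces every symbol the plugin needs (including report) to be resolved here
    void* h = dlopen("./callback_plugin.so", RTLD_NOW);
    if (!h) { std::fprintf(stderr, "dlopen failed: %s\n", dlerror()); return EXIT_FAILURE; }
    void* ad = dlsym(h, "plugin_entry");
    if (!ad) { std::fprintf(stderr, "dlsym failed: %s\n", dlerror()); return EXIT_FAILURE; }
    reinterpret_cast<void(*)(void)>(ad)();          // prints: hello from the plugin
    dlclose(h);
    return 0;
}

// build and run (sketch):
//   g++ -Wall -fPIC -g -shared callback_plugin.cc -o callback_plugin.so
//   g++ -Wall -rdynamic -g callback_main.cc -ldl -o callback_main
//   ./callback_main
Linking callback_main without -rdynamic should make the dlopen call fail with an undefined symbol error for report.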
BTW, you could in principle dlopen a shared object which does not contain position-independent code (i.e. was not compiled with -fPIC), but it is much better to dlopen a PIC shared object.
Read also the Program Library HowTo and the C++ dlopen mini-howto (because of name mangling).
Example
File helloshared.cc (my tiny plugin source code in C++) is:
#include <iostream>
#include <string.h>
using namespace std;
extern "C" void say_hello(const char* name) {
    cout << __FILE__ << ":" << __LINE__ << " hello "
         << name << "!" << endl;
}
and I am compiling it with:
g++ -Wall -fPIC -g -shared helloshared.cc -o hello.so
The main program is in file mainhello.cc :
#include <iostream>
#include <string>
#include <dlfcn.h>
#include <stdlib.h>
using namespace std;
int main() {
    cout << __FILE__ << ":" << __LINE__ << " starting." << endl;
    void* handle = dlopen("./hello.so", RTLD_LAZY);
    if (!handle) {
        cerr << "dlopen failed:" << dlerror() << endl;
        exit(EXIT_FAILURE);
    };
    // signature of loaded function
    typedef void hello_sig_t(const char*);
    void* hello_ad = dlsym(handle,"say_hello");
    if (!hello_ad) {
        cerr << "dlsym failed:" << dlerror() << endl;
        exit(EXIT_FAILURE);
    }
    hello_sig_t* fun = reinterpret_cast<hello_sig_t*>(hello_ad);
    fun("from main");
    fun = NULL; hello_ad = NULL;
    dlclose(handle);
    cout << __FILE__ << ":" << __LINE__ << " ended." << endl;
    return 0;
}
which I compile with
g++ -Wall -rdynamic -g mainhello.cc -ldl -o mainhello
Then I am running ./mainhello with the expected output:
mainhello.cc:7 starting.
helloshared.cc:5 hello from main!
mainhello.cc:24 ended.
Please note that the signature hello_sig_t in mainhello.cc should be compatible with (i.e. the same as) the signature of the say_hello function in the helloshared.cc plugin; otherwise it is undefined behavior (and you would probably get a SIGSEGV crash).