Segfault with standalone asio when classes are in separate files - C++

The code below is as minimal an example as I can get. It does need to be in separate files, as that seems to be what causes the segmentation fault.
I'm using MinGW x32 4.8.1 with standalone Asio 1.10.6. I've also tested with TDM GCC 4.7.1 and MinGW x64 4.8.1. All of these produce the same segfault under Windows. There's no such issue under Linux with the latest version of GCC.
Edit: I've just finished testing on MinGW-w64 5.2.0 (the MinGW-Builds build of it). Same problem.
I've also tested compiling this with TDM GCC 4.8.1 on a fresh virtual machine and get the same segfault. Edit: I've also now tested on a completely different machine with TDM GCC 4.8.1.
In all cases I'm using -std=c++11, -g and -Wall. I've also compiled without -g and get the same result. I need the C++11 flag because I don't want a dependency on Boost, just Asio.
With the following code in a single main.cpp file there are no problems and the program seems to run as expected. However, if I put each class into its own *.hpp and *.cpp files, I get a segfault in the Server class's constructor.
I went back, put everything into main.cpp again, and started moving each class out one by one. The segfault begins occurring once the final class, Client, is put into its own files.
In addition, as I was putting all the classes into one file and moving them around, I made sure that no unneeded object files were linked into my .exe.
This code started out a lot larger, but I've narrowed it down to this.
Server.hpp
#ifndef SERVER_HPP_INCLUDED
#define SERVER_HPP_INCLUDED
#include <string>
#include <memory>
#define ASIO_STANDALONE
#include <asio.hpp>
using namespace asio::ip;
namespace network
{
    class Server
    {
    public:
        Server(asio::io_service& ioService, uint16_t port);

    private:
        tcp::acceptor m_acceptor;
    };
}
#endif // SERVER_HPP_INCLUDED
Server.cpp
#include "Server.hpp"
using namespace network;
#include <iostream>
Server::Server(asio::io_service& ioService, uint16_t port)
: m_acceptor(ioService, tcp::endpoint(tcp::v4(),port))
{
}
Client.hpp
#ifndef CLIENT_HPP_INCLUDED
#define CLIENT_HPP_INCLUDED
#include <vector>
#define ASIO_STANDALONE
#include <asio.hpp>
using namespace asio::ip;
namespace network
{
    class Client
    {
    public:
        Client(asio::io_service& ioService);

    private:
        asio::steady_timer m_timer;
    };
}
#endif // CLIENT_HPP_INCLUDED
Client.cpp
#include "Client.hpp"
using namespace network;
#include <iostream>
Client::Client(asio::io_service& ioService)
: m_timer(ioService)
{
}
main.cpp
#include <iostream>
#define ASIO_STANDALONE
#include <asio.hpp>
using namespace asio::ip;
#include "Server.hpp"
int main()
{
    try
    {
        uint16_t peerRequestPort = 63000;
        asio::io_service io_service;
        network::Server server(io_service, peerRequestPort);
    }
    catch(std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }
    return 0;
}
Here's the callstack from GDB:
#0 00406729 asio::detail::service_registry::keys_match(key1=..., key2=...) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.ipp:89)
#1 ?? 0x0040696e in asio::detail::service_registry::do_use_service (this=0x5d2f10, key=..., factory=0x406b44 <asio::detail::service_registry::create<asio::socket_acceptor_service<asio::ip::tcp> >(asio::io_service&)>) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.ipp:113)
#2 004068B6 asio::detail::service_registry::use_service<asio::socket_acceptor_service<asio::ip::tcp> >(this=0x5d2f10) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.hpp:47)
#3 00403857 asio::use_service<asio::socket_acceptor_service<asio::ip::tcp> >(ios=...) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/impl/io_service.hpp:32)
#4 004039B3 asio::basic_io_object<asio::socket_acceptor_service<asio::ip::tcp>, true>::basic_io_object(this=0x28fe48, io_service=...) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/basic_io_object.hpp:182)
#5 00403B29 asio::basic_socket_acceptor<asio::ip::tcp, asio::socket_acceptor_service<asio::ip::tcp> >::basic_socket_acceptor(this=0x28fe48, io_service=..., endpoint=..., reuse_addr=true) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/basic_socket_acceptor.hpp:137)
#6 00401D3B network::Server::Server(this=0x28fe48, ioService=..., port=63000) (F:\GameDev\Dischan\Tests\Server.cpp:7)
#7 004018F1 main() (F:\GameDev\Dischan\Tests\main.cpp:17)
And finally here's the output from Dr Memory:
Dr. Memory version 1.8.0 build 8 built on Sep 9 2014 16:27:02
Dr. Memory results for pid 5296: "tests.exe"
Application cmdline: "tests.exe"
Recorded 108 suppression(s) from default C:\Program Files (x86)\Dr. Memory\bin\suppress-default.txt
Error #1: UNADDRESSABLE ACCESS: reading 0x00000007-0x0000000b 4 byte(s)
# 0 asio::detail::service_registry::keys_match [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.ipp:89]
# 1 asio::detail::service_registry::do_use_service [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.ipp:113]
# 2 asio::detail::service_registry::use_service<> [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.hpp:47]
# 3 asio::use_service<> [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/impl/io_service.hpp:32]
# 4 asio::basic_io_object<>::basic_io_object [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/basic_io_object.hpp:182]
# 5 asio::basic_socket_acceptor<>::basic_socket_acceptor [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/basic_socket_acceptor.hpp:137]
# 6 network::Server::Server [F:/GameDev/Dischan/Tests/Server.cpp:7]
# 7 main [F:/GameDev/Dischan/Tests/main.cpp:17]
Note: #0:00:00.780 in thread 7464
Note: instruction: mov 0x04(%eax) -> %eax
Error #2: LEAK 36 direct bytes 0x02530860-0x02530884 + 124 indirect bytes
# 0 replace_operator_new [d:\drmemory_package\common\alloc_replace.c:2609]
# 1 asio::io_service::io_service [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/impl/io_service.ipp:39]
# 2 main [F:/GameDev/Dischan/Tests/main.cpp:15]
===========================================================================
FINAL SUMMARY:
DUPLICATE ERROR COUNTS:
SUPPRESSIONS USED:
ERRORS FOUND:
1 unique, 1 total unaddressable access(es)
0 unique, 0 total uninitialized access(es)
0 unique, 0 total invalid heap argument(s)
0 unique, 0 total GDI usage error(s)
0 unique, 0 total handle leak(s)
0 unique, 0 total warning(s)
1 unique, 1 total, 160 byte(s) of leak(s)
0 unique, 0 total, 0 byte(s) of possible leak(s)
ERRORS IGNORED:
14 potential error(s) (suspected false positives)
(details: C:\Users\User\AppData\Roaming\Dr. Memory\DrMemory-tests.exe.5296.000\potential_errors.txt)
12 potential leak(s) (suspected false positives)
(details: C:\Users\User\AppData\Roaming\Dr. Memory\DrMemory-tests.exe.5296.000\potential_errors.txt)
24 unique, 24 total, 2549 byte(s) of still-reachable allocation(s)
(re-run with "-show_reachable" for details)
Details: C:\Users\User\AppData\Roaming\Dr. Memory\DrMemory-tests.exe.5296.000\results.txt
I just cannot see why I'm getting a segfault. Even after commenting out all of the meaningful code it still occurs.
EDIT
I've edited the code above to show that just the Server constructor appears to cause the issue, and to show the contents of each file. (I've again ensured that only these object files are compiled and linked.)
EDIT 2
I've tested this with the TDM GCC 4.7.1, Mingw Builds x64 4.8.1 and Mingw Builds x32 4.8.1. Same result for all of them.
EDIT 3
I've further reduced the code way down. Now, if I remove from Client any asio object that requires an asio::io_service& to be constructed, there is no segfault. But every asio type I've tried so far has produced the same segfault. This isn't a problem in the Server class, for instance, which has an asio acceptor. The strange thing is that no instance of Client is ever created, so why it affects the program and produces a segfault in Server's constructor is weird.
EDIT 4
I've now completely removed Server.hpp and Server.cpp and have updated main.cpp to this:
main.cpp
#include <iostream>
#define ASIO_STANDALONE
#include <asio.hpp>
using namespace asio::ip;
int main()
{
    try
    {
        uint16_t peerRequestPort = 63000;
        asio::io_service io_service;
        auto protocol = tcp::v4();
        tcp::endpoint endpoint(protocol, peerRequestPort);
        tcp::acceptor m_acceptor(io_service, endpoint);
    }
    catch(std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }
    return 0;
}
I still get the segfault, and the callstack reflects the absence of the Server constructor. The segfault is still in the same place. The Dr. Memory results look about the same as well.
Same as before, if I don't link the Client object file I have no issue.
EDIT 5
As requested, here is the build log from Code::Blocks
g++.exe -std=c++11 -Wall -D_WIN32_WINNT=0x0501 -g -I..\..\asio-1.10.6\asio-1.10.6\include -c F:\GameDev\Dischan\Tests\Client.cpp -o obj\Debug\Client.o
g++.exe -std=c++11 -Wall -D_WIN32_WINNT=0x0501 -g -I..\..\asio-1.10.6\asio-1.10.6\include -c F:\GameDev\Dischan\Tests\main.cpp -o obj\Debug\main.o
g++.exe -o Build\Debug\Windows\Tests.exe obj\Debug\Client.o obj\Debug\main.o -lws2_32 -lwsock32
Output file is Build\Debug\Windows\Tests.exe with size 723.02 KB
Process terminated with status 0 (0 minute(s), 3 second(s))
0 error(s), 0 warning(s) (0 minute(s), 3 second(s))
EDIT 6
I'm starting to go outside of my abilities now but I've managed to track down some of the problem (and I'm learning some new stuff which is cool).
It appears that when an object which requires an asio::io_service& is created, it adds "services" to the io_service. These are static across all io_service instances. So, when the service request is made, a loop iterates through what appears to be a linked list of already-created services. If the requested service hasn't already been created, it is then created.
This info is from the reference for io_service and from looking at service_registry.ipp (line 111).
This is done internally with a call to service_registry::do_use_service. The service_registry has a member named first_service_ of type asio::io_service::service*. This first service should have a member named next_ which is the linked list part I mentioned.
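To make this concrete, the lookup pattern looks roughly like the following simplified sketch (my own paraphrase for illustration, not Asio's actual code):
struct service { const void* key; service* next; };

struct registry
{
    service* first_service_ = nullptr;    // if this holds garbage such as 0xffffffff,
                                          // the loop below dereferences it and crashes

    service* use_service(const void* key, service* (*factory)())
    {
        for (service* s = first_service_; s != nullptr; s = s->next)   // walk the list
            if (s->key == key)
                return s;                  // service already created

        service* created = factory();      // otherwise create it now
        created->key = key;
        created->next = first_service_;    // and push it onto the front of the list
        first_service_ = created;
        return created;
    }
};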
At the time of the first call to service_registry::do_use_service (when the asio::acceptor is constructed) however, the first_service_ member has a value of 0xffffffff which is obviously not right. So I believe that's the root of the segfault.
Why this member has the value 0xffffffff is beyond me. It was my understanding that only old/quirky machines reserved this address for null pointers... but I concede I could be way off.
I did just quickly check that by doing this:
int* p = nullptr;
if(p)
std::cout << "something" << std::endl;
and set a breakpoint to read the value. The value for p is 0x0, not 0xffffffff.
So, I set a breakpoint on the constructor for service_registry (asio/detail/impl/service_registry.hpp) and on the destructor (in case it was explicitly called somewhere) and on the three methods do_has_service, do_use_service and do_add_service. My thought was to try and track at what point first_service_ gets the bad value.
I've had no luck. These are the only places which could possibly alter the value of first_service_.
I take this to mean that something has corrupted the stack and changed the address for first_service_. But, I'm only a hobbyist...
I did check that the address of the this pointer for the constructor was the same as the one used for the invocation of the do_use_service to make sure that two instances weren't created or something like that.
EDIT 7
Okay, so I've now found that if I compile with ASIO_DISABLE_THREADS defined, I no longer get a segfault!
But this results in an exception being thrown, because I'm attempting to use threads even though I've disabled them. I take that to mean I would be restricted to synchronous calls with no async calls (i.e., what's the point of using asio?).
The reference material here does say that ASIO_DISABLE_THREADS
Explicitly disables Asio's threading support, independent of whether or not Boost supports threads.
So I take it to mean that this define stops asio from using threads regardless of boost or not; which makes sense.
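For clarity, "compiling with ASIO_DISABLE_THREADS" just means defining the macro before asio.hpp is pulled in, or passing -DASIO_DISABLE_THREADS on the command line; a sketch:
#define ASIO_STANDALONE
#define ASIO_DISABLE_THREADS   // disables Asio's internal threading support
#include <asio.hpp>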
Why threading would cause a problem, I don't know. I'm not keen on delving that far.
I give up
I give up on asio. After looking through the code and documentation, it appears to have been developed with Boost in mind more so than as a standalone library, apparently to the point where you need to use Boost over C++11, which I just don't care to do.
Best C/C++ Network Library shows there are lots of alternatives.
To be honest, running my own synchronous socket calls in my own thread sounds like a better idea considering the control I'll gain, at least until asio enters the Standard Library and is implemented in MinGW-w64.
Then again, considering that asio (or at least its flavour of networking) looks like a prime candidate for the standard library, it's probably a good idea to stick with it.

I think the segmentation fault occurs because of mismatching asio type definitions between Server.hpp and Client.hpp. In your case, this could only happen if Asio changes typedefs depending on defines set by <string>, <memory> or <vector>.
What I suggest trying is either of the following:
Include the same headers in both Server.hpp/Client.hpp and main.cpp before including asio.hpp:
#include <string>
#include <memory>
#include <vector>
#define ASIO_STANDALONE
#include <asio.hpp>
Or simply include asio.hpp before any other include in your header files and remove the include from main.cpp.
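For example, with the second option Server.hpp would become something like this (a sketch of the suggested ordering only, not a guaranteed fix); main.cpp would then include just "Server.hpp" and <iostream>:
#ifndef SERVER_HPP_INCLUDED
#define SERVER_HPP_INCLUDED

#define ASIO_STANDALONE
#include <asio.hpp>   // Asio first, so every translation unit sees the same configuration

#include <string>
#include <memory>

using namespace asio::ip;

namespace network
{
    class Server
    {
    public:
        Server(asio::io_service& ioService, uint16_t port);

    private:
        tcp::acceptor m_acceptor;
    };
}

#endif // SERVER_HPP_INCLUDED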

From my POV, it is a GCC bug. If you use clang++ instead of g++.exe, the crash disappears even without fiddling with the #include ordering. Passing all source files to g++ in a single call also makes the crash disappear. This leads me to think that there is something wrong with GCC's COFF object file emission, since clang uses the linker from the same MinGW distribution.
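For reference, "a single call" means compiling and linking in one step, something like this (adapted from the build log in EDIT 5 above; untested):
g++.exe -std=c++11 -Wall -D_WIN32_WINNT=0x0501 -g -I..\..\asio-1.10.6\asio-1.10.6\include F:\GameDev\Dischan\Tests\Client.cpp F:\GameDev\Dischan\Tests\main.cpp -o Build\Debug\Windows\Tests.exe -lws2_32 -lwsock32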
So, either use clang or apply the workaround from @Wouter Huysentruit's answer.
Note that the latest clang release (3.7.0) has another bug, not related to your problem, that makes linking fail. I had to use a nightly snapshot, which is quite stable anyway.

I had a similar problem. My app crashed in win_lock::lock(), but the win_lock::win_lock() constructor was never called!
When I added -D_POSIX_THREADS to the compiler flags, it simply started to work.
I use mingw-w64-i686-7.1.0-posix-dwarf-rt_v5-rev0 on Windows 7.
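In terms of the build log from EDIT 5, that would mean adding the flag to each compile line, for example (an illustrative adaptation, untested):
g++.exe -std=c++11 -Wall -D_WIN32_WINNT=0x0501 -D_POSIX_THREADS -g -I..\..\asio-1.10.6\asio-1.10.6\include -c F:\GameDev\Dischan\Tests\Client.cpp -o obj\Debug\Client.o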

Related

How to make N executions of the same program run on different cores at 100% in Linux/macOS?

QUESTION
How can I run different instances of the same program on different cores at 100%?
CONTEXT
I am running C++11 code on an iMac Pro (2017) with macOS High Sierra 10.13.6. The corresponding executable is called 'bayesian_estimation'.
When I run one instance of this program, one of the cores is doing that task at 100%. If I run more instances, the CPU% of each of them goes down, but most of the cores remain idle! Why are they not being used? This is what happens, for example, when 3 'bayesian_estimation' processes are running, or when I execute 7.
Ideally, in that last case, I would like to have 7 cores completely busy, each of them running one 'bayesian_estimation' process.
EDIT 1
I proceed to give more information that might help to identify the problem. I compiled my code as follows:
g++ -std=c++11 -Wall -g bayesian_estimation.cpp -o bayesian_estimation -O2 -larmadillo
And all libraries and packages that I have used are the following:
#include <iostream> // Standard input and output functions.
#include <iomanip> // Manipulate stream input and output functions.
#include <armadillo> // Load Armadillo library.
#include <sys/stat.h> // To obtain information from files (e.g., S_ISDIR).
#include <dirent.h> // Format of directory entries.
#include <vector> // To deal with vectors.
I identified the origin of the bottleneck that @bolov mentions in the comments. It arises from the use of arma_rng::set_seed_random() in the code to generate random numbers with the Armadillo library. If I remove that line of code, the problem is gone.
A question going deeper into this issue, with a reproducible example, is posted here.
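A possible workaround sketch (my own illustration, assuming the contention really is in that seeding call): seed each process explicitly instead of calling arma_rng::set_seed_random(), for example from the process id and the clock:
#include <armadillo>   // Armadillo linear algebra library
#include <chrono>      // To build a time-based seed
#include <unistd.h>    // getpid()

int main()
{
    // Combine wall-clock time and the process id so every instance gets a
    // different seed without calling arma_rng::set_seed_random().
    const unsigned long long t =
        std::chrono::high_resolution_clock::now().time_since_epoch().count();
    arma::arma_rng::set_seed(t ^ static_cast<unsigned long long>(getpid()));

    arma::vec v = arma::randu<arma::vec>(5);   // random draws now differ per process
    v.print("v:");
    return 0;
}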

Can't resolve namespace member 'thread'

I wanted to practice with standard C++ threads instead of UNIX ones, but I soon encountered a problem: whenever I write std::thread, CLion underlines it in red and says Can't resolve namespace member 'thread'. I checked my CMake file; it's set for C++11. I reinstalled the latest version of MinGW (6.3.0) and ticked the box with the G++ compiler. A friend told me that he uses Cygwin and everything works. But is it still possible to make it work with MinGW?
#include <iostream>
#include <thread>
#define BUFFER_SIZE 3
#define PROD_NUM 3
#define CONS_NUM 2
void produce(){
    // production
}

void consume(){
    // consumption
}

int main() {
    std::cout << "Hello, World!" << std::endl;
    int i, j;
    std::thread producer(produce);
    std::thread consumer(consume);
    return 0;
}
The code itself does practically nothing yet.
EDIT
In the <thread> library header there is:
#pragma GCC system_header
#if __cplusplus < 201103L
# include <bits/c++0x_warning.h>
#else
#include <chrono>
#include <functional>
#include <memory>
#include <cerrno>
#include <bits/functexcept.h>
#include <bits/functional_hash.h>
#include <bits/gthr.h>
#if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
namespace std _GLIBCXX_VISIBILITY(default)
{
_GLIBCXX_BEGIN_NAMESPACE_VERSION
/**
 * @defgroup threads Threads
 * @ingroup concurrency
 *
 * Classes for thread support.
 * @{
 */
/// thread
class thread
{
public:
// Abstract base class for types that wrap arbitrary functors to be
// invoked in the new thread of execution.
struct _State
{
virtual ~_State();
virtual void _M_run() = 0;
};
Can you make sure the library is available in the CLion toolchain? Cygwin, for example, does have the include.
CLion shows things in red when it can't resolve the code against the library.
It is possibly a host environment variable error. Make sure your CMakeLists.txt is working and that your environment variables, standard library linkage and compiler setup are correct.
The compiler version and the standard libraries must be compatible (e.g. if you are using a cross-compiler (RasPi, Android) but the environment variables point to host libraries, it will fail).
Check this relevant post, it may help.
C++11 std::threads vs posix threads
Ok, so I finally solved the problem. I installed Cygwin, and in the CLion settings I manually linked the C/C++ compilers (for some reason CLion was unable to auto-detect them). I cleared everything and re-indexed the project. Now it shows no errors and the code compiles.
Regarding MinGW, I read some posts on cplusplus.com about this issue, but they were about previous versions of MinGW and it was said that they finally fixed it. However, I can tell you: no, they didn't. There is a nice repository here whose README suggests that win32 threads rely on gthreads; I did find the gthread file in my libraries and everything seemed OK, so I still need to investigate the issue. Write your ideas and experience here if you know more.
For now the solution is Cygwin; use it instead of MinGW.
P.S. Thanks @KillzoneKid for the links.
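A general note on MinGW that may help here (my own addition, not from the posts above): libstdc++ only provides std::thread when the compiler was built with the posix thread model. You can check which model a given g++ uses with:
g++ -v
Look for the "Thread model:" line in the output; a build reporting "win32" does not provide std::thread at all, while a "posix" build does.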

Why can't I use <experimental/barrier>?

I want to use std::experimental::barrier in my multi-threaded C++ code. But even if I write code like this:
#include <iostream>
#include <thread>
#include <experimental/barrier>
int main () {
return 0;
}
the compiler throws an error saying that:
experimental/barrier: No such file or directory
#include <experimental/barrier>
^
I am using g++ version 6.3.0 on my Ubuntu machine.
This is the command I am trying:
g++ -pthread -std=c++11 top.cpp -o top_new
Currently this library is not yet available.
Maybe this will be useful:
The GNU C++ Library Manual -> Part III. Extensions -> 30. Concurrency
The file <ext/concurrence.h> contains all the higher-level constructs for playing with threads. In contrast to the atomics layer, the concurrence layer consists largely of types. All types are defined within namespace __gnu_cxx.
...
In addition, there are two macros
_GLIBCXX_READ_MEM_BARRIER
_GLIBCXX_WRITE_MEM_BARRIER
Which expand to the appropriate write and read barrier required by the host hardware and operating system.
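Since <experimental/barrier> is simply not shipped with GCC 6.3, one option is to write a small barrier yourself on top of C++11 primitives. A minimal sketch (my own illustrative class, not part of any standard library):
#include <condition_variable>
#include <mutex>
#include <thread>
#include <iostream>

class barrier {
public:
    explicit barrier(std::size_t count) : threshold_(count), count_(count), generation_(0) {}

    void arrive_and_wait() {
        std::unique_lock<std::mutex> lock(mutex_);
        const std::size_t gen = generation_;
        if (--count_ == 0) {              // last thread to arrive
            ++generation_;                // open the barrier for this generation
            count_ = threshold_;          // and reset it for reuse
            cv_.notify_all();
        } else {
            cv_.wait(lock, [this, gen] { return gen != generation_; });
        }
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::size_t threshold_, count_, generation_;
};

int main() {
    barrier b(2);
    std::thread t([&] { b.arrive_and_wait(); std::cout << "worker past barrier\n"; });
    b.arrive_and_wait();
    std::cout << "main past barrier\n";
    t.join();
    return 0;
}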

C++ sleep pthread conflict

I am writing code about deadlocks and their detection. I use Eclipse Juno C/C++ on Ubuntu 12.10, 64-bit.
The problem is that when I use sleep(1), I get
sleep was not declared in this scope
when I build the project. I tried to #include <unistd.h>, but then all the pthread functions like pthread_join give me errors like
undefined reference to pthread_join
Without #include <unistd.h>, that error doesn't show up.
sample code:
#include <unistd.h>
#include <iostream>
#include <semaphore.h>
#include <queue>
#include <stdlib.h>
using namespace std;
pthread_mutex_t mutex;
sem_t sem; //used for writing in the console
.......
void cross(Batman b) {
    // code to check traffic from the right, use counters, condition
    sem_wait(&sem);
    cout << "BAT " << b.num << " from " << b.direction << " crossing" << endl;
    sem_post(&sem);
    sleep(1);
}
........
P.S. I followed these instructions to get pthreads working in another project, and I did the same for this project:
http://blog.asteriosk.gr/2009/02/15/adding-pthread-to-eclipse-for-using-posix-threads/
P.S. I am working on this project with a friend; I use the same code he uses and still get those errors, while he doesn't.
When you #include <unistd.h>, you fix the sleep function lookup issue; now you have a pthread library issue.
Next, you need to #include <pthread.h> and link your application with the pthread library.
It sounds like you have two different errors.
First, sleep() is undefined because you forgot to include unistd.h. I'd also include pthread.h, but it sounds like it might get pulled in by one of the headers you include.
Secondly, it sounds like you have a linker error: you can either compile with -pthread or add -lpthread to the linker. The reason it doesn't show up is that linking can only be done once the files have been compiled, and the first error is blocking this. I'd bet that ld doesn't like -pthread for some reason (have you installed libpthread-dev?). You can try changing -pthread to -lpthread.
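For example, a typical single-file build (the file and output names here are just placeholders) would be:
g++ -std=c++11 -pthread deadlock.cpp -o deadlock
The -pthread switch sets the necessary preprocessor definitions at compile time and links the pthread library at link time, which covers both errors above.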

Xerces-c assertion error

I have downloaded and built Xerces-C on Linux:
Linux xxxx 2.6.24.7-server-3mnb #1 SMP Wed Sep 9 16:34:18 EDT 2009 x86_64 Intel(R) Xeon(R) CPU 3065 @ 2.33GHz GNU/Linux
Created the simple program:
#include <xercesc/sax2/XMLReaderFactory.hpp>
#include <xercesc/sax2/SAX2XMLReader.hpp>
#include <xercesc/sax2/DefaultHandler.hpp>
#include <xercesc/util/XMLUni.hpp>
//#include <xercesc/validators/common/Grammar.hpp>
XERCES_CPP_NAMESPACE_USE;
int main(int argC, char *argv[])
{
    // DefaultHandler handler;
    SAX2XMLReader *parser = XMLReaderFactory::createXMLReader();
    delete parser;
    return 0;
}
compiled it:
g++ -lcurl -o xtest test.cpp /usr/local/lib/libxerces-c.a
It compiles successfully; when I run it, this is what I get:
./xtest
xtest: xercesc/util/XMemory.cpp:63: static void* xercesc_3_1::XMemory::operator new(size_t, xercesc_3_1::MemoryManager*): Assertion `manager != 0' failed.
Aborted (core dumped)
Has anyone had a similar experience / successfully built and used this library... how? It's becoming a real pain, and apparently it's the only thing on Linux that properly validates an XML document against multiple schemas with namespace support (or is it??).
It looks like you forgot to call XMLPlatformUtils::Initialize before using any xerces functionality.
Initialization must be called first in any client code.
Also, don't forget XMLPlatformUtils::Terminate() once you're done with xerces i.e. at the end of the program.
The termination call is currently optional, to aid those dynamically loading the parser to clean up before exit, or to avoid spurious reports from leak detectors.
AFAIR failing to init xerces results in the error you listed.
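Putting the suggestion together, the test program would look something like this (a sketch of the suggested fix; the try/catch around Initialize() that production code would want is omitted):
#include <xercesc/util/PlatformUtils.hpp>      // XMLPlatformUtils
#include <xercesc/sax2/XMLReaderFactory.hpp>
#include <xercesc/sax2/SAX2XMLReader.hpp>

XERCES_CPP_NAMESPACE_USE

int main(int argC, char *argv[])
{
    XMLPlatformUtils::Initialize();    // must come before any other Xerces call

    SAX2XMLReader *parser = XMLReaderFactory::createXMLReader();
    delete parser;                     // release Xerces objects before Terminate()

    XMLPlatformUtils::Terminate();     // optional, but cleans up before exit
    return 0;
}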