Why is MySQL refusing local connections? (C++)

I'm trying to use the (slightly older version of) MySQL C++ connector (see here) to open up a connection to a database and write stuff to it. My code works on my local Ubuntu 18.04 laptop. However, it doesn't work on my remote server (same OS).
I have tried connecting to tcp://localhost:3306, tcp://127.0.0.1:3306, as well as several other ports. The error I receive is
terminate called after throwing an instance of 'sql::SQLException'
what(): Unknown MySQL server host 'tcp' (2)
The server and laptop share the same OS and the same g++ compiler, almost all of the same files (except for one or two things I changed to reflect different paths), and have databases set up in the same way. I was thinking that it could've been some sort of config file issue.
Is it possible I could've botched the mysqlcppconn installation? I installed it in a pretty sketchy way--I manually moved headers into /usr/include/ and manually moved shared libraries into /usr/lib/x86_64-linux-gnu/. If you look at the files in the lib/ folder
libcrypto.so libmysqlcppconn-static.a libmysqlcppconn.so.7 libssl.so
libcrypto.so.1.0.0 libmysqlcppconn.so libmysqlcppconn.so.7.1.1.12 libssl.so.1.0.0
you'll notice that there's some libcrypto and libssl stuff in there--I didn't move those in.
Also, when I tried to change IP address strings to hardcoded string literals, I remember seeing a std::bad_alloc error, and Google showed me some threads suggesting it had something to do with varying compiler versions...
Anybody have an idea what's going on here? Here's the relevant piece of C++ code, but like I said, it works on my laptop, so I'm pretty sure this isn't the problem:
MarketHistoryWriter::MarketHistoryWriter(
    const MySqlConfig& msql_config,
    unsigned num_symbols,
    const std::string& table_name,
    bool printing)
    : m_msql_config(msql_config), m_table_name(table_name), m_num_sym(num_symbols), m_printing(printing)
{
    // configure driver and connection
    m_driver = get_driver_instance();
    std::string conn_str = "tcp://"
        + m_msql_config.host + ":"
        + std::to_string(m_msql_config.port);
    m_conn = m_driver->connect(conn_str,
                               m_msql_config.credentials.username,
                               m_msql_config.credentials.password);
    m_conn->setSchema(m_msql_config.schema);
}
Also, if it helps, here's the traceback produced by gdb:
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ffff6834801 in __GI_abort () at abort.c:79
#2 0x00007ffff70a8957 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3 0x00007ffff70aeab6 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007ffff70aeaf1 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x00007ffff70aed24 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x00007ffff7638a4a in sql::mysql::MySQL_Connection::init (this=this@entry=0x555555818a00,
properties=std::map with 3 elements = {...})
at /export/home/pb2/build/sb_0-32258110-1547655664.03/mysql-connector-c++-1.1.12/driver/mysql_connection.cpp:900
#7 0x00007ffff763b5ea in sql::mysql::MySQL_Connection::MySQL_Connection (this=0x555555818a00,
_driver=<optimized out>, _proxy=..., hostName=..., userName=..., password=...)
at /export/home/pb2/build/sb_0-32258110-1547655664.03/mysql-connector-c++-1.1.12/driver/mysql_connection.cpp:146
#8 0x00007ffff763fc4f in sql::mysql::MySQL_Driver::connect (this=0x5555557fa5c0, hostName=...,
userName=..., password=...)
at /export/home/pb2/build/sb_0-32258110-1547655664.03/mysql-connector-c++-1.1.12/driver/mysql_driver.cpp:132
#9 0x00005555555b8c6e in MarketHistoryWriter::MarketHistoryWriter(MySqlConfig const&, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool) ()
#10 0x000055555559fccd in TestCppClient::TestCppClient() ()
#11 0x000055555559c451 in main ()
Edit: a smaller, more reproducible example.
I run the short program below and get this error:
root@ubuntu-s-1vcpu-1gb-nyc1-01:~/test_mysql_conn# ./main
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
Here's the program
#include <stdlib.h>
#include <iostream>
#include "mysql_connection.h"
#include <cppconn/driver.h>
#include <cppconn/exception.h>
#include <cppconn/resultset.h>
#include <cppconn/statement.h>
using namespace std;
int main(void)
{
    try {
        sql::Driver *driver;
        sql::Connection *con;
        sql::Statement *stmt;

        driver = get_driver_instance();
        con = driver->connect("tcp://127.0.0.1:3306", "root", "secretpassword");
        //con->setSchema("ib");
        delete con;
    } catch (sql::SQLException &e) {
        cout << "# ERR: SQLException in " << __FILE__;
        cout << "(" << __FUNCTION__ << ") on line " << __LINE__ << endl;
        cout << "# ERR: " << e.what();
        cout << " (MySQL error code: " << e.getErrorCode();
        cout << ", SQLState: " << e.getSQLState() << " )" << endl;
    }
    cout << endl;
    return EXIT_SUCCESS;
}
Here's the makefile:
CXXFLAGS=-pthread -Wall -Wno-switch -std=c++11
LDFLAGS=-lmysqlcppconn
INCLUDES=-I/usr/include/cppconn
TARGET=main

$(TARGET):
	$(CXX) $(CXXFLAGS) $(INCLUDES) ./*.cpp -o $(TARGET) $(LDFLAGS)

clean:
	rm -f $(TARGET) *.o

I was correct that the code was "fine." I fixed this by reinstalling the MySQL C++ connector. Instead of using 1.1.12, I reinstalled with the most recent version. Also, I didn't install by manually copying files around; I installed with
sudo apt-get install libmysqlcppconn-dev
I don't know if this is a coincidence, but I also notice that clicking on "Looking for previous GA versions?" at the download page doesn't redirect you to version 1.1.12 anymore--it directs you to 1.1.13.
After this, the program ran. The next issue is unrelated, but it gave me the
Access Denied for User 'root'@'localhost'
error. I checked the permissions by running SELECT user,authentication_string,plugin,host FROM mysql.user; and saw that 'root'@'localhost' was using the auth_socket plugin rather than mysql_native_password, so I changed that with
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'secret_password_here';
Hope this helps someone else--this was a strange one.

Related

Segfault caused by conflict between netCDF and HDF5 libraries

UPDATE: I've found a partial workaround. See bottom of this post.
After a number of hours of debugging a program, I've found that there is some kind of conflict between the netCDF and HDF5 libraries (the program reads/writes files of both formats).
I've boiled down the code to a tiny program that shows the issue.
This program segfaults:
#include <iostream>
#include <string>
#include "H5Cpp.h"
#include <netcdf>

using namespace std;

void stupidfunction() // Note that this is never called.
{
    H5::Group grp1; // The mere potential existence of this makes netcdf segfault!
}

int main(int argn, char ** args)
{
    std::string outputFilename = "/tmp/test.nc";
    try
    {
        std::cout << "Now opening " << outputFilename << std::endl;
        netCDF::NcFile sfc;
        sfc.open(outputFilename, netCDF::NcFile::replace);
        std::cout << "closing file" << std::endl;
        sfc.close();
        return true;
    }
    catch(netCDF::exceptions::NcException& e)
    {
        std::cout << "EX: " << e.what() << std::endl;
        return false;
    }
    return 0;
}
(Compile command: h5c++ test.cpp -std=gnu++11 -O0 -g3 -lnetcdf_c++4 -lnetcdf -o test)
My installed (relevant) packages:
libnetcdf-c++4-1 4.3.1-2build1 amd64 C++ interface for scientific data access to large binary data
libnetcdf-c++4-dev 4.3.1-2build1 amd64 creation, access, and sharing of scientific data in C++
libnetcdf-dev 1:4.7.3-1 amd64 creation, access, and sharing of scientific data
libnetcdf15:amd64 1:4.7.3-1 amd64 Interface for scientific data access to large binary data
netcdf-bin 1:4.7.3-1 amd64 Programs for reading and writing NetCDF files
netcdf-doc 1:4.7.3-1 all Documentation for NetCDF
hdf5-helpers 1.10.4+repack-11ubuntu1 amd64 Hierarchical Data Format 5 (HDF5) - Helper tools
hdf5-tools 1.10.4+repack-11ubuntu1 amd64 Hierarchical Data Format 5 (HDF5) - Runtime tools
libhdf4-0 4.2.14-1ubuntu1 amd64 Hierarchical Data Format library (embedded NetCDF)
libhdf5-103:amd64 1.10.4+repack-11ubuntu1 amd64 Hierarchical Data Format 5 (HDF5) - runtime files - serial version
libhdf5-cpp-103:amd64 1.10.4+repack-11ubuntu1 amd64 Hierarchical Data Format 5 (HDF5) - C++ libraries
libhdf5-dev 1.10.4+repack-11ubuntu1 amd64 Hierarchical Data Format 5 (HDF5) - development files - serial versio
What is going on here that makes it segfault?
Is there anything I can do to avoid or fix this problem?
Any help is appreciated!
When run using gdb:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7a47a01 in __vfprintf_internal (s=s@entry=0x7fffff7ff480,
format=format@entry=0x7ffff77e13a8 "can't locate ID",
ap=ap@entry=0x7fffff7ff5e0, mode_flags=mode_flags@entry=2)
at vfprintf-internal.c:1289
1289 vfprintf-internal.c: No such file or directory.
gdb backtrace:
(gdb) bt
#0 0x00007ffff7a47a01 in __vfprintf_internal (s=s@entry=0x7fffff7ff480, format=format@entry=0x7ffff77e13a8 "can't locate ID",
ap=ap@entry=0x7fffff7ff5e0, mode_flags=mode_flags@entry=2) at vfprintf-internal.c:1289
#1 0x00007ffff7a5cd4a in __vasprintf_internal (result_ptr=0x7fffff7ff5d8, format=0x7ffff77e13a8 "can't locate ID", args=0x7fffff7ff5e0, mode_flags=2)
at vasprintf.c:57
#2 0x00007ffff75c7e56 in H5E_printf_stack () from /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.103
#3 0x00007ffff76553b9 in H5I_inc_ref () from /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.103
...
many many lines repeating H5E_printf_stack, H5E__push_stack and H5I_inc_ref
...
#56139 0x00007ffff76553b9 in H5I_inc_ref () from /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.103
#56140 0x00007ffff75c7c2f in H5E__push_stack () from /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.103
#56141 0x00007ffff75c7e7e in H5E_printf_stack () from /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.103
#56142 0x00007ffff761fc85 in H5G_loc () from /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.103
#56143 0x00007ffff7546903 in H5Acreate1 () from /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.103
#56144 0x00007ffff790b11b in NC4_write_provenance () from /usr/lib/x86_64-linux-gnu/libnetcdf.so.15
#56145 0x00007ffff790b5a8 in ?? () from /usr/lib/x86_64-linux-gnu/libnetcdf.so.15
#56146 0x00007ffff790b7b0 in nc4_close_hdf5_file () from /usr/lib/x86_64-linux-gnu/libnetcdf.so.15
#56147 0x00007ffff790b9ea in NC4_close () from /usr/lib/x86_64-linux-gnu/libnetcdf.so.15
#56148 0x00007ffff78ca579 in nc_close () from /usr/lib/x86_64-linux-gnu/libnetcdf.so.15
#56149 0x00007ffff7f82270 in netCDF::NcFile::close() () from /usr/lib/x86_64-linux-gnu/libnetcdf_c++4.so.1
#56150 0x00005555555a7959 in main (argn=1, args=0x7fffffffe5b8) at test.cpp:29
(gdb)
Partial workaround:
If I specify the netCDF file format as classic or classic64, the error does not occur, i.e.:
sfc.open(outputFilename, netCDF::NcFile::replace, netCDF::NcFile::classic);
or
sfc.open(outputFilename, netCDF::NcFile::replace, netCDF::NcFile::classic64);
I have a C program which was showing similar behaviour. I found that adding
#include <H5public.h>
and, at the start of main(),
if (H5dont_atexit() < 0)
{
    fprintf(stderr, "failed HDF5 don't-atexit\n");
    return 1;
}
fixes the issue. That does mean that files you H5Fopen() don't get automatically H5Fclose()-ed, but it is possibly a lower-impact workaround.

Segmentation fault at xml parsing only with libdb2

I have a test program which tries to parse an example XML file on SLES11, but the result is a segmentation fault.
However, if I link without libdb2, it works fine.
g++-8.3 -o testXmlParser main.cpp -m31 -lxml2
Adding -ldb2, I get the mentioned segmentation fault, preceded by "1: parser error : Document is empty":
g++-8.3 -o testXmlParser main.cpp -m31 -lxml2 -ldb2
My code:
#include <libxml/parser.h>
#include <libxml/tree.h>
#include <iostream>

int main ()
{
    xmlDoc *doc = NULL;
    xmlNode *root_element = NULL;
    std::cout << "log1" << std::endl;
    doc = xmlParseEntity("/tmp/testXML.xml");
    std::cout << "log2" << std::endl;
    root_element = xmlDocGetRootElement(doc);
    std::cout << "root element: " << root_element->name << std::endl;
    return 0;
}
And the callstack:
#0 0x7b30399e in free () from /lib/libc.so.6
#1 0x7bb3bb92 in destroy () from /data/db2inst1/sqllib/lib32/libdb2.so.1
#2 0x7bb3cdf4 in gzclose () from /data/db2inst1/sqllib/lib32/libdb2.so.1
#3 0x7d1896f0 in ?? () from /usr/lib/libxml2.so.2
#4 0x7d187e80 in xmlFreeParserInputBuffer () from /usr/lib/libxml2.so.2
#5 0x7d1602f4 in xmlFreeInputStream () from /usr/lib/libxml2.so.2
#6 0x7d160336 in xmlFreeParserCtxt () from /usr/lib/libxml2.so.2
#7 0x7d17427c in xmlSAXParseEntity () from /usr/lib/libxml2.so.2
#8 0x00400c02 in main ()
Could you help me solve this problem?
This is a test program; DB2 is not used here, but it is used in our software, where this problem comes from.
The problem is that libxml requires libz and you're not linking with it.
Since Db2 includes zlib in their libraries (see stack frames #1, #2) the symbols are getting resolved by the linker.
There must be some incompatibility between the zlib that libxml expects, and the version that is embedded into Db2.
Try adding '-lz' to your compile line, before '-ldb2', so that the linker will try to use that library first.
Db2 uses zlib internally and those symbols are (incorrectly) exported. This will be addressed via APAR
IT29520: ZLIB SYMBOLS INSIDE LIBDB2.SO ARE GLOBALLY VISIBLE WHICH MEANS THEY COLLIDE WITH ZLIB SYMBOLS INSIDE LIBZ.SO
With LD_DEBUG=all you'll see how symbols get mapped/resolved. You can try @memmertoIBM's suggestion or put libdb2 behind zlib in LD_LIBRARY_PATH.

Why is a self-deleting, global Vulkan instance causing a segfault only when a layer is added?

I am using a global std::shared_ptr to handle automatic deletion of my Vulkan VkInstance. The pointer has a custom deleter that calls vkDestroyInstance when it goes out of scope. Everything works as expected until I enable the VK_LAYER_LUNARG_standard_validation layer at which point the vkDestroyInstance function causes a segfault.
I have added a minimal example below that produces the issue.
minimal.cpp
#include <vulkan/vulkan.h>
#include <iostream>
#include <memory>
#include <vector>
#include <cstdlib>

// The global self deleting instance
std::shared_ptr<VkInstance> instance;

int main()
{
    std::vector<const char *> extensions = {VK_EXT_DEBUG_REPORT_EXTENSION_NAME};
    std::vector<const char *> layers = {};
    // Uncomment to cause segfault:
    // layers.emplace_back("VK_LAYER_LUNARG_standard_validation");

    VkApplicationInfo app_info = {};
    app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app_info.pApplicationName = "Wat";
    app_info.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
    app_info.pEngineName = "No Engine";
    app_info.engineVersion = VK_MAKE_VERSION(1, 0, 0);
    app_info.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo instance_info = {};
    instance_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    instance_info.pApplicationInfo = &app_info;
    instance_info.enabledExtensionCount =
        static_cast<uint32_t>(extensions.size());
    instance_info.ppEnabledExtensionNames = extensions.data();
    instance_info.enabledLayerCount = static_cast<uint32_t>(layers.size());
    instance_info.ppEnabledLayerNames = layers.data();

    // Handles auto deletion of the instance when it goes out of scope
    auto deleter = [](VkInstance *pInstance)
    {
        if (*pInstance)
        {
            vkDestroyInstance(*pInstance, nullptr);
            std::cout << "Deleted instance" << std::endl;
        }
        delete pInstance;
    };
    instance = std::shared_ptr<VkInstance>(new VkInstance(nullptr), deleter);

    if (vkCreateInstance(&instance_info, nullptr, instance.get()) != VK_SUCCESS)
    {
        std::cerr << "Failed to create a Vulkan instance" << std::endl;
        return EXIT_FAILURE;
    }
    std::cout << "Created instance" << std::endl;

    // When the program exits, everything should clean up nicely?
    return EXIT_SUCCESS;
}
When running the above program as is, the output is what I would expect:
$ g++-7 -std=c++14 minimal.cpp -isystem $VULKAN_SDK/include -L$VULKAN_SDK/lib -lvulkan -o minimal
$ ./minimal
Created instance
Deleted instance
$
However as soon as I add back the VK_LAYER_LUNARG_standard_validation line:
// Uncomment to cause segfault:
layers.emplace_back("VK_LAYER_LUNARG_standard_validation");
I get
$ g++-7 -std=c++14 minimal.cpp -isystem $VULKAN_SDK/include -L$VULKAN_SDK/lib -lvulkan -o minimal
$ ./minimal
Created instance
Segmentation fault (core dumped)
$
When run with gdb, the backtrace shows the segfault occurring in the vkDestroyInstance function:
$ g++-7 -std=c++14 -g minimal.cpp -isystem $VULKAN_SDK/include -L$VULKAN_SDK/lib -lvulkan -o minimal
$ gdb -ex run ./minimal
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
...
Starting program: /my/path/stackoverflow/vulkan/minimal
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Created instance
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff24c4334 in threading::DestroyInstance(VkInstance_T*, VkAllocationCallbacks const*) () from /my/path/Vulkan/1.1.77.0/x86_64/lib/libVkLayer_threading.so
(gdb) bt
#0 0x00007ffff24c4334 in threading::DestroyInstance(VkInstance_T*, VkAllocationCallbacks const*) () from /my/path/Vulkan/1.1.77.0/x86_64/lib/libVkLayer_threading.so
#1 0x00007ffff7bad243 in vkDestroyInstance () from /my/path/Vulkan/1.1.77.0/x86_64/lib/libvulkan.so.1
#2 0x000000000040105c in <lambda(VkInstance_T**)>::operator()(VkInstance *) const (__closure=0x617c90, pInstance=0x617c60) at minimal.cpp:38
#3 0x000000000040199a in std::_Sp_counted_deleter<VkInstance_T**, main()::<lambda(VkInstance_T**)>, std::allocator<void>, (__gnu_cxx::_Lock_policy)2>::_M_dispose(void) (this=0x617c80) at /usr/include/c++/7/bits/shared_ptr_base.h:470
#4 0x0000000000401ef0 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x617c80) at /usr/include/c++/7/bits/shared_ptr_base.h:154
#5 0x0000000000401bc7 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x6052d8 <instance+8>, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/shared_ptr_base.h:684
#6 0x0000000000401b6a in std::__shared_ptr<VkInstance_T*, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x6052d0 <instance>, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/shared_ptr_base.h:1123
#7 0x0000000000401b9c in std::shared_ptr<VkInstance_T*>::~shared_ptr (this=0x6052d0 <instance>, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/shared_ptr.h:93
#8 0x00007ffff724bff8 in __run_exit_handlers (status=0, listp=0x7ffff75d65f8 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true) at exit.c:82
#9 0x00007ffff724c045 in __GI_exit (status=<optimized out>) at exit.c:104
#10 0x00007ffff7232837 in __libc_start_main (main=0x40108c <main()>, argc=1, argv=0x7fffffffdcf8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffdce8) at ../csu/libc-start.c:325
#11 0x0000000000400ed9 in _start ()
(gdb)
The problem can be fixed by using a local instance (inside the main function) instead of a global one, so I'm thinking I might not fully understand some nuances of the Vulkan loader when using layers.
In my actual application I want to use a lazily instantiated static class to keep track of all my Vulkan objects and so I run into the same problem when the program exits.
Setup
g++: 7.3.0
OS: Ubuntu 16.04
Nvidia Driver: 390.67 (also tried 396)
Vulkan SDK: 1.1.77.0 (also tried 1.1.73)
GPU: GeForce GTX TITAN (Dual SLI if that matters?)
Global variables are a bad idea. Their destruction is unordered relative to each other in most cases.
Clean up your state in main, not at static destruction time. Simple objects that depend only on memory (a small step up from POD) and don't cross-depend tend not to cause problems, but go any further and you enter a hornet's nest.
Your global shared ptr is being cleared and the destruction code run after some arbitrary global state within Vulkan is being cleared. This is causing a segfault. The interesting thing here isn't "why this segfault" but rather "how can I avoid this kind of segfault". The answer to that is "stop using global state"; nothing else really works.

Segmentation fault in std::thread::id's std::operator==

I have encountered an issue which I am not sure how to resolve. I believe it's an issue in GCC and/or libstdc++.
I am running Ubuntu 14.04 LTS with GCC 4.8.2-19ubuntu1, libstdc++ 3.4.19 (I believe? How do you find which version of libstdc++ is installed on your Linux machine?), and Boost 1.55.
Here's the code:
// http://www.boost.org/doc/libs/1_54_0/libs/log/doc/html/log/tutorial.html
// with a slight modification to ensure we're testing with threads too
// g++ -g -O0 --std=c++11 staticlinktest.cpp -lboost_log_setup -lboost_log -lboost_system -lboost_filesystem -lboost_thread -lpthread
#define BOOST_ALL_DYN_LINK 1
#include <boost/log/trivial.hpp>
#include <thread>
#include <atomic>
#include <vector>
int main(int, char*[])
{
    BOOST_LOG_TRIVIAL(trace) << "A trace severity message";
    BOOST_LOG_TRIVIAL(debug) << "A debug severity message";
    BOOST_LOG_TRIVIAL(info) << "An informational severity message";
    BOOST_LOG_TRIVIAL(warning) << "A warning severity message";
    BOOST_LOG_TRIVIAL(error) << "An error severity message";
    BOOST_LOG_TRIVIAL(fatal) << "A fatal severity message";

    std::atomic<bool> exiting(false);
    std::vector<std::thread> threads;
    for ( int i = 0; i < 8; ++i ) {
        threads.push_back(std::thread([&exiting](){
            while (!exiting)
                BOOST_LOG_TRIVIAL(trace) << "thread " << std::this_thread::get_id() << " trace";
        }));
    }
    usleep(1000000);
    exiting = true;
    std::for_each(threads.begin(), threads.end(), [](std::thread& t){
        t.join();
    });
    return 0;
}
The issue:
Using the command line at the top, I would build with dynamic linking. Everything seems to work great. I see apparently-valid output complete with thread IDs and tracing information.
However, in my project I need to be able to use static linking. So I add in the "-static" switch to the g++ command, and comment out the #define for BOOST_ALL_DYN_LINK. It builds just fine. But when I execute the program, it runs up until the first thread gets created, then segfaults. The backtrace seems to always be the same:
#0 0x0000000000000000 in ?? ()
#1 0x0000000000402805 in __gthread_equal (__t1=140737354118912, __t2=0) at /usr/include/x86_64-linux-gnu/c++/4.8/bits/gthr-default.h:680
#2 0x0000000000404116 in std::operator== (__x=..., __y=...) at /usr/include/c++/4.8/thread:84
#3 0x0000000000404c03 in std::operator<< <char, std::char_traits<char> > (__out=..., __id=...) at /usr/include/c++/4.8/thread:234
#4 0x000000000040467e in boost::log::v2s_mt_posix::operator<< <char, std::char_traits<char>, std::allocator<char>, std::thread::id> (strm=...,
value=...) at /usr/include/boost/log/utility/formatting_ostream.hpp:710
#5 0x0000000000402939 in __lambda0::operator() (__closure=0x7bb5e0) at staticlinktest.cpp:27
#6 0x0000000000403ea8 in std::_Bind_simple<main(int, char**)::__lambda0()>::_M_invoke<>(std::_Index_tuple<>) (this=0x7bb5e0)
at /usr/include/c++/4.8/functional:1732
#7 0x0000000000403dff in std::_Bind_simple<main(int, char**)::__lambda0()>::operator()(void) (this=0x7bb5e0)
at /usr/include/c++/4.8/functional:1720
#8 0x0000000000403d98 in std::thread::_Impl<std::_Bind_simple<main(int, char**)::__lambda0()> >::_M_run(void) (this=0x7bb5c8)
at /usr/include/c++/4.8/thread:115
#9 0x000000000047ce60 in execute_native_thread_routine ()
#10 0x000000000042a962 in start_thread (arg=0x7ffff7ffb700) at pthread_create.c:312
#11 0x00000000004e5ba9 in clone ()
It looks to me as if it's trying to call a null function pointer and only when linked statically. Any thoughts? Am I doing something wrong?
Statically linking libpthread into your applications is a really bad idea.
Nevertheless, here is how to do it.
I first fixed the compile errors (I suspect that you aren't showing us the code that you are actually compiling, or boost pollutes the namespace that much), then removed the irrelevant boost stuff, that only adds noise to the question. Here is the code:
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>
int main(int, char*[])
{
    std::atomic<bool> exiting(false);
    std::vector<std::thread> threads;
    for ( int i = 0; i < 8; ++i ) {
        threads.push_back(std::thread([&exiting](){
            while (!exiting)
                std::cout << "thread " << std::this_thread::get_id() << " trace\n";
        }));
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    exiting = true;
    for(auto& t : threads){
        t.join();
    }
    return 0;
}
It runs fine if I dynamically link but crashes when statically linked:
terminate called after throwing an instance of 'std::system_error'
what(): Operation not permitted
Aborted (core dumped)
According to this e-mail, libstdc++ has to be configured appropriately if you use threads and link statically. Here are the magic flags to make it work with static linking:
g++ -std=c++11 -pedantic -pthread threads.cpp -static -Wl,--whole-archive -lpthread -Wl,--no-whole-archive
As I said before, statically linking libpthread into your application is asking for trouble.

How to catch an ostream exception on Linux?

My Linux C++ application crashes while writing strings into an ostream object.
My original application builds a very big string output and writes it all into a stream. While writing the string into the ostream object, the application crashed. At first, the crash happened on both Windows and Linux.
The issue is now fixed in the Windows environment (details below), but on Linux it still crashes.
Following is a sample C++ program that reproduces the scenario.
#include <iostream>
#include <strstream>
#include <memory>
using namespace std;
bool fillScreen(std::ostream&);
int main ()
{
    auto_ptr<ostrstream> screen(new ostrstream);
    bool succ = false;
    try
    {
        succ = fillScreen(*screen);
    }
    catch(std::exception &ex)
    {
        std::cerr << ex.what() << std::endl;
    }
    if(succ)
    {
        std::cout << "SCREEN Content is : " << screen->str() << std::endl;
    }
    else
    {
        std::cout << "NOTHING ON SCREEN Object " << std::endl;
    }
}
bool fillScreen(ostream &scr)
{
    unsigned long idx = 0;
    scr.exceptions(std::ios::badbit); // throws exception in Windows but not in Linux.
    while (idx++ < 999999999)
    {
        scr << "BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH_" << " : " ;
        scr << "BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_";
        scr << "BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH_"<< std::endl;
        /*if(!(idx %100000))
        {
            std::cout << "Reached iteration: " << idx << std::endl;
        }*/
    }
    return true;
}
I have added the following statement in my program:
screen.exceptions(std::ios::badbit);
With this statement, my program does not crash on Windows.
When running on Windows, the stream throws a badbit exception, and my application handles the exception and makes a clean exit.
Output as follows,
Windows output (run using Cygwin):
$ ./overflow.exe
bad allocation
NOTHING ON SCREEN Object
Clean exit.
Linux output:
[Mybuild@devlnx01 streamError]$ ./a.out
Segmentation fault (core dumped)
[Mybuild@devlnx01 streamError]$
Crashed - not a clean exit, even with exceptions set:
screen.exceptions(std::ios::badbit);
Following is the stack trace taken with gdb on Linux (using the core dump file):
Core was generated by `./a.out'.
Program terminated with signal 11, Segmentation fault.
#0 std::strstreambuf::overflow (this=0x17f8018, c=72) at ../../.././libstdc++-v3/src/strstream.cc:174
174 ../../.././libstdc++-v3/src/strstream.cc: No such file or directory.
in ../../.././libstdc++-v3/src/strstream.cc
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.107.el6.x86_64
(gdb) where
#0 std::strstreambuf::overflow (this=0x17f8018, c=72) at ../../.././libstdc++-v3/src/strstream.cc:174
#1 0x00007eff6f4e7565 in std::basic_streambuf<char, std::char_traits<char> >::xsputn (this=0x17f8018, __s=<value optimized out>, __n=72)
at /export/disk1/build/GCC4.5.3/gcc-4.5.3/x86_64-unknown-linux-gnu/libstdc++-v3/include/bits/streambuf.tcc:97
#2 0x00007eff6f4ddb85 in sputn (__out=..., __s=0x401038 "BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH_", __n=72)
at /export/disk1/build/GCC4.5.3/gcc-4.5.3/x86_64-unknown-linux-gnu/libstdc++-v3/include/streambuf:429
#3 __ostream_write<char, std::char_traits<char> > (__out=...,
__s=0x401038 "BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH_", __n=72)
at /export/disk1/build/GCC4.5.3/gcc-4.5.3/x86_64-unknown-linux-gnu/libstdc++-v3/include/bits/ostream_insert.h:48
#4 std::__ostream_insert<char, std::char_traits<char> > (__out=...,
__s=0x401038 "BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH_", __n=72)
at /export/disk1/build/GCC4.5.3/gcc-4.5.3/x86_64-unknown-linux-gnu/libstdc++-v3/include/bits/ostream_insert.h:99
#5 0x00007eff6f4dde0f in std::operator<< <std::char_traits<char> > (__out=...,
__s=0x401038 "BLAHBLAHBLAH_BLAH_BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH BLAHBLAHBLAH_BLAH_")
at /export/disk1/build/GCC4.5.3/gcc-4.5.3/x86_64-unknown-linux-gnu/libstdc++-v3/include/ostream:513
#6 0x0000000000400d82 in fillScreen (scr=...) at overflow.cxx:35
#7 0x0000000000400c31 in main () at overflow.cxx:14
Version and compiler details:
Windows 2008 (64bit) - VS2008
RHEL 6.2 (64bit) - gcc version 4.4.7
Compilation arguments:
$ g++ overflow.cxx -g3 -m64 -O0 -ggdb
On Windows it exits properly, but on Linux it crashes with a segmentation fault.
All I am looking for is for my application to make a clean exit; I do not want it to die with a segmentation fault.
I am not sure how to handle this on Linux. Can anyone guide me on this?
This seems to be an issue with the standard library shipped with your copy of GCC on Linux.
Update your compiler (4.8.1 is the current version of GCC, as of 30 September 2013) and you'll get the expected behaviour, as you can see in this demo.
Side note: auto_ptr shouldn't be used anymore. Use unique_ptr or shared_ptr. In this case, neither is necessary; drop the new.