Xerces-c assertion error - c++

I have downloaded and built Xerces-c on linux:
Linux xxxx 2.6.24.7-server-3mnb #1 SMP Wed Sep 9 16:34:18 EDT 2009 x86_64 Intel(R) Xeon(R) CPU 3065 @ 2.33GHz GNU/Linux
Created the simple program:
#include <xercesc/sax2/XMLReaderFactory.hpp>
#include <xercesc/sax2/SAX2XMLReader.hpp>
#include <xercesc/sax2/DefaultHandler.hpp>
#include <xercesc/util/XMLUni.hpp>
//#include <xercesc/validators/common/Grammar.hpp>
XERCES_CPP_NAMESPACE_USE;
int main(int argC, char *argv[])
{
    // DefaultHandler handler;
    SAX2XMLReader *parser = XMLReaderFactory::createXMLReader();
    delete parser;
    return 0;
}
compiled it:
g++ -lcurl -o xtest test.cpp /usr/local/lib/libxerces-c.a
successful compile, run it and this is what I get:
./xtest
xtest: xercesc/util/XMemory.cpp:63: static void* xercesc_3_1::XMemory::operator new(size_t, xercesc_3_1::MemoryManager*): Assertion `manager != 0' failed.
Aborted (core dumped)
Has anyone had a similar experience, or successfully built and used this library? How? It's becoming a real pain, and apparently it's the only library for Linux that properly validates an XML document against multiple schemas with namespace support (or is it?)

It looks like you forgot to call XMLPlatformUtils::Initialize() before using any Xerces functionality.
Initialization must be called first in any client code.
Also, don't forget XMLPlatformUtils::Terminate() once you're done with Xerces, i.e. at the end of the program.
The termination call is currently optional, to aid those dynamically loading the parser to clean up before exit, or to avoid spurious reports from leak detectors.
As far as I recall, failing to initialize Xerces results in exactly the error you listed.

Related

Invalid Arguments Error Using PTHREAD_MUTEX_INITIALIZER In GNU MCU Eclipse

I am using GNU MCU Eclipse 4.7.2-202001271244 with GCC Linaro 7.5.0 (arm-linux-gnueabihf). Both are the latest versions as of now.
I set up a simple project that uses pthreads and when I try to use PTHREAD_MUTEX_INITIALIZER I get an error in Eclipse even though g++ compiles it without error. The error from Eclipse is:
Invalid arguments ' Candidates are: () (const
{C:\Download\gcc-linaro-7.5.0-2019.12-i686-mingw32_arm-linux-gnueabihf\arm-linux-gnueabihf\libc\usr\include\bits\pthreadtypes.h:1809}
&) '
Looking at pthreadtypes.h, line 1809 is not even a valid line number in that file.
My test code is very simple:
#include <pthread.h>
int main(const int argc, const char* const argv[])
{
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_lock(&lock);
    return 0;
}
I am pretty confident my project is set up correctly, because I can take the g++-generated executable and run it successfully on my ARM processor.
Does anyone have any ideas or suggestions on resolving this issue? Any thoughts would be greatly appreciated. I have tried searching for information with no luck.

Why will execl(...) on the find program from find-utils not output anything?

I wrote a simple test program to try to call execl(...) with the path of the find command as a test. There is no output on stdout, no matter the parameters sent to the find program. Why is this happening? Here's the program:
#include <unistd.h>
#include <sys/types.h>
#include <cstdio>
#include <cerrno>
int main(int argc, char** argv)
{
    if(execl("/usr/bin/find", "/usr/bin/find", "/", "-maxdepth", "1", "-name", "bin", (char*)NULL) == -1)
    {
        perror("In QueryRequest::Client_Execute(): ");
        _exit(1);
    }
    return 0;
}
Here's the compilation and run test of the program above; note that there is no output from it. Executing find from the console with the above parameters yields non-empty output. What is the problem here and how can I get past it?
[main@main-pc src]$ g++ test.cpp -o test
[main@main-pc src]$ ./test
[main@main-pc src]$
The specific meta-info about the targeted system:
Linux 4.9.66-1-MANJARO #1 SMP PREEMPT Thu Nov 30 14:08:24 UTC 2017
Using the -print argument to find does not change the outcome. The behaviour is as expected on other systems, including a 4.9.66-1-MANJARO and another ARCH-based proprietary distro using a 4.11 kernel. I've compiled it with g++ 7.2 and other 4.x versions.
Read carefully the documentation of execl(3) and of the execve(2) system call (which is called by execl).
Notice that execl and execve return only on failure. When they succeed, they do not return, since the calling process entirely replaces its virtual address space to run the new executable.
To debug your issue, you could temporarily replace /usr/bin/find with /bin/echo, and/or perhaps also use strace(1), e.g. strace ./test.
BTW, using test as your program name is in poor taste, since it conflicts with the standard test(1) (e.g. the bash test builtin), so I strongly recommend using another name, e.g. mytest.
Of course, read also carefully the documentation of find(1).
BTW, on my Debian system your program (renamed as curious) works and outputs /bin
Notice that you could avoid running a find process from your C program by using nftw(3).
Also, remember that C and C++ are different languages (and your code looks like C, but then you should #include <stdio.h> and #include <errno.h>). Don't forget to compile with all warnings and debug info, so with -Wall -Wextra -g for GCC. Learn to use the debugger gdb.
The posted code does not compile with a C compiler.
Suggest using the following code:
#include <unistd.h> // execl(), _exit()
#include <stdio.h>  // perror()

int main( void )
{
    // the terminating NULL must be cast so the variadic call receives a pointer
    execl("/usr/bin/find", "/usr/bin/find", "/", "-maxdepth", "1", "-name", "bin", (char *)NULL);
    perror("In QueryRequest::Client_Execute(): ");
    _exit(1);
}

Segfault with asio standalone when classes in separate files

The below is as minimal of an example as I can get. It does need to be in separate files as that seems to be what causes the segmentation fault error.
I'm using Mingw x32 4.8.1 with Asio standalone 1.10.6 . I've also tested with TDM GCC 4.7.1 and Mingw x64 4.8.1. All of these under Windows produce the same segfault. There's no such issue under Linux with the latest version of GCC.
edit: I've just finished testing on Mingw-w64 5.2.0 (the Mingw-Builds build of it). Same problem.
I've also tested compiling this with TDM GCC 4.8.1 on a fresh virtual machine and get the same segfault. Edit: I've also now tested on a completely different machine with TDM GCC 4.8.1
In all cases I'm using -std=c++11, -g and -Wall. I've also compiled with -g and have the same result. I need the C++11 flag because I don't want a dependency on boost, just asio.
With the following code in a single main.cpp file there are no problems and the program seems to run as expected. However, if I put each class into its own *.hpp and *.cpp files, I get a segfault in the Server class's constructor.
I further went back, put everything back into main.cpp, and started moving each class out one by one. The segfault begins occurring once the final class, Client, is put in its own files.
In addition, as I was putting all the classes into one file and moving them etc., I made sure that any unneeded object files weren't linked into my .exe.
This code started a lot larger but it's narrowed down to this.
Server.hpp
#ifndef SERVER_HPP_INCLUDED
#define SERVER_HPP_INCLUDED

#include <string>
#include <memory>

#define ASIO_STANDALONE
#include <asio.hpp>

using namespace asio::ip;

namespace network
{
    class Server
    {
    public:
        Server(asio::io_service& ioService, uint16_t port);
    private:
        tcp::acceptor m_acceptor;
    };
}
#endif // SERVER_HPP_INCLUDED
Server.cpp
#include "Server.hpp"
#include <iostream>

using namespace network;

Server::Server(asio::io_service& ioService, uint16_t port)
    : m_acceptor(ioService, tcp::endpoint(tcp::v4(), port))
{
}
Client.hpp
#ifndef CLIENT_HPP_INCLUDED
#define CLIENT_HPP_INCLUDED

#include <vector>

#define ASIO_STANDALONE
#include <asio.hpp>

using namespace asio::ip;

namespace network
{
    class Client
    {
    public:
        Client(asio::io_service& ioService);
    private:
        asio::steady_timer m_timer;
    };
}
#endif // CLIENT_HPP_INCLUDED
Client.cpp
#include "Client.hpp"
#include <iostream>

using namespace network;

Client::Client(asio::io_service& ioService)
    : m_timer(ioService)
{
}
main.cpp
#include <iostream>

#define ASIO_STANDALONE
#include <asio.hpp>

using namespace asio::ip;

#include "Server.hpp"

int main()
{
    try
    {
        uint16_t peerRequestPort = 63000;
        asio::io_service io_service;
        network::Server server(io_service, peerRequestPort);
    }
    catch(std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }
    return 0;
}
Here's the callstack from GDB:
#0 00406729 asio::detail::service_registry::keys_match(key1=..., key2=...) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.ipp:89)
#1 ?? 0x0040696e in asio::detail::service_registry::do_use_service (this=0x5d2f10, key=..., factory=0x406b44 <asio::detail::service_registry::create<asio::socket_acceptor_service<asio::ip::tcp> >(asio::io_service&)>) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.ipp:113)
#2 004068B6 asio::detail::service_registry::use_service<asio::socket_acceptor_service<asio::ip::tcp> >(this=0x5d2f10) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.hpp:47)
#3 00403857 asio::use_service<asio::socket_acceptor_service<asio::ip::tcp> >(ios=...) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/impl/io_service.hpp:32)
#4 004039B3 asio::basic_io_object<asio::socket_acceptor_service<asio::ip::tcp>, true>::basic_io_object(this=0x28fe48, io_service=...) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/basic_io_object.hpp:182)
#5 00403B29 asio::basic_socket_acceptor<asio::ip::tcp, asio::socket_acceptor_service<asio::ip::tcp> >::basic_socket_acceptor(this=0x28fe48, io_service=..., endpoint=..., reuse_addr=true) (F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/basic_socket_acceptor.hpp:137)
#6 00401D3B network::Server::Server(this=0x28fe48, ioService=..., port=63000) (F:\GameDev\Dischan\Tests\Server.cpp:7)
#7 004018F1 main() (F:\GameDev\Dischan\Tests\main.cpp:17)
And finally here's the output from Dr Memory:
Dr. Memory version 1.8.0 build 8 built on Sep 9 2014 16:27:02
Dr. Memory results for pid 5296: "tests.exe"
Application cmdline: "tests.exe"
Recorded 108 suppression(s) from default C:\Program Files (x86)\Dr. Memory\bin\suppress-default.txt
Error #1: UNADDRESSABLE ACCESS: reading 0x00000007-0x0000000b 4 byte(s)
# 0 asio::detail::service_registry::keys_match [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.ipp:89]
# 1 asio::detail::service_registry::do_use_service [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.ipp:113]
# 2 asio::detail::service_registry::use_service<> [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/detail/impl/service_registry.hpp:47]
# 3 asio::use_service<> [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/impl/io_service.hpp:32]
# 4 asio::basic_io_object<>::basic_io_object [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/basic_io_object.hpp:182]
# 5 asio::basic_socket_acceptor<>::basic_socket_acceptor [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/basic_socket_acceptor.hpp:137]
# 6 network::Server::Server [F:/GameDev/Dischan/Tests/Server.cpp:7]
# 7 main [F:/GameDev/Dischan/Tests/main.cpp:17]
Note: #0:00:00.780 in thread 7464
Note: instruction: mov 0x04(%eax) -> %eax
Error #2: LEAK 36 direct bytes 0x02530860-0x02530884 + 124 indirect bytes
# 0 replace_operator_new [d:\drmemory_package\common\alloc_replace.c:2609]
# 1 asio::io_service::io_service [F:/GameDev/asio-1.10.6/asio-1.10.6/include/asio/impl/io_service.ipp:39]
# 2 main [F:/GameDev/Dischan/Tests/main.cpp:15]
===========================================================================
FINAL SUMMARY:
DUPLICATE ERROR COUNTS:
SUPPRESSIONS USED:
ERRORS FOUND:
1 unique, 1 total unaddressable access(es)
0 unique, 0 total uninitialized access(es)
0 unique, 0 total invalid heap argument(s)
0 unique, 0 total GDI usage error(s)
0 unique, 0 total handle leak(s)
0 unique, 0 total warning(s)
1 unique, 1 total, 160 byte(s) of leak(s)
0 unique, 0 total, 0 byte(s) of possible leak(s)
ERRORS IGNORED:
14 potential error(s) (suspected false positives)
(details: C:\Users\User\AppData\Roaming\Dr. Memory\DrMemory-tests.exe.5296.000\potential_errors.txt)
12 potential leak(s) (suspected false positives)
(details: C:\Users\User\AppData\Roaming\Dr. Memory\DrMemory-tests.exe.5296.000\potential_errors.txt)
24 unique, 24 total, 2549 byte(s) of still-reachable allocation(s)
(re-run with "-show_reachable" for details)
Details: C:\Users\User\AppData\Roaming\Dr. Memory\DrMemory-tests.exe.5296.000\results.txt
I just cannot see why I'm getting a segfault. Even after commenting out all of the meaningful code it still occurs.
EDIT
I've edited the code above to show that just the Server constructor appears to cause an issue and the contents of each file. (I've again ensured that only these object files are compiled and linked).
EDIT 2
I've tested this with the TDM GCC 4.7.1, Mingw Builds x64 4.8.1 and Mingw Builds x32 4.8.1. Same result for all of them.
EDIT 3
I've further reduced the code way down. Now, in Client, if I remove any asio objects that require an asio::io_service& to be constructed, there is no segfault. But every asio type I've tried so far has produced the same segfault. This isn't a problem in the Server class, for instance, which has an asio::acceptor. The far-out thing is that no instance of Client is ever created, so why it affects the program and produces a segfault in Server's constructor is weird.
EDIT 4
I've now completely removed Server.hpp and Server.cpp and have updated main.cpp to this:
main.cpp
#include <iostream>

#define ASIO_STANDALONE
#include <asio.hpp>

using namespace asio::ip;

int main()
{
    try
    {
        uint16_t peerRequestPort = 63000;
        asio::io_service io_service;
        auto protocol = tcp::v4();
        tcp::endpoint endpoint(protocol, peerRequestPort);
        tcp::acceptor m_acceptor(io_service, endpoint);
    }
    catch(std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }
    return 0;
}
I still get the segfault and the callstack reflects the lack of Server constructor. The segfault is still in the same place. DrMemory results look about the same as well.
Same as before, if I don't link the Client object file I have no issue.
EDIT 5
As requested, here is the build log from Code::Blocks
g++.exe -std=c++11 -Wall -D_WIN32_WINNT=0x0501 -g -I..\..\asio-1.10.6\asio-1.10.6\include -c F:\GameDev\Dischan\Tests\Client.cpp -o obj\Debug\Client.o
g++.exe -std=c++11 -Wall -D_WIN32_WINNT=0x0501 -g -I..\..\asio-1.10.6\asio-1.10.6\include -c F:\GameDev\Dischan\Tests\main.cpp -o obj\Debug\main.o
g++.exe -o Build\Debug\Windows\Tests.exe obj\Debug\Client.o obj\Debug\main.o -lws2_32 -lwsock32
Output file is Build\Debug\Windows\Tests.exe with size 723.02 KB
Process terminated with status 0 (0 minute(s), 3 second(s))
0 error(s), 0 warning(s) (0 minute(s), 3 second(s))
EDIT 6
I'm starting to go outside of my abilities now but I've managed to track down some of the problem (and I'm learning some new stuff which is cool).
It appears that when an object which requires an asio::io_service& is created, it adds "services" to the io_service. These are static across all io_service instances. So, when the service request is made, there's a loop that is iterated through which appears to be a linked list of already created services. If the requested service hasn't already been created; it's then created.
This info is from the reference for io_service and from looking at service_registry.ipp (line 111).
This is done internally with a call to service_registry::do_use_service. The service_registry has a member named first_service_ of type asio::io_service::service*. This first service should have a member named next_ which is the linked list part I mentioned.
At the time of the first call to service_registry::do_use_service (when the asio::acceptor is constructed) however, the first_service_ member has a value of 0xffffffff which is obviously not right. So I believe that's the root of the segfault.
Why this member has the value 0xffffffff is beyond me. It was my understanding that only old/quirky machines reserved this address for null pointers... but I concede I could be way off.
I did just quickly check that by doing this:
int* p = nullptr;
if(p)
    std::cout << "something" << std::endl;
and set a breakpoint to read the value. The value for p is 0x0, not 0xffffffff.
So, I set a breakpoint on the constructor for service_registry (asio/detail/impl/service_registry.hpp) and on the destructor (in case it was explicitly called somewhere) and on the three methods do_has_service, do_use_service and do_add_service. My thought was to try and track at what point first_service_ gets the bad value.
I've had no luck. These are the only places which could possibly alter the value of first_service_.
I take this to mean that something has corrupted the stack and changed the address for first_service_. But, I'm only a hobbyist...
I did check that the address of the this pointer for the constructor was the same as the one used for the invocation of the do_use_service to make sure that two instances weren't created or something like that.
EDIT 7
Okay, so I've now found that if I compile with ASIO_DISABLE_THREADS defined, I no longer get a segfault!
But, this results in an exception being thrown because I'm attempting to use threads even though I've disabled them. Which I take to mean that I would be restricted to synchronous calls and no async calls. (i.e., what's the point of using asio?)
The reference material here does say that ASIO_DISABLE_THREADS
Explicitly disables Asio's threading support, independent of whether or not Boost supports threads.
So I take it to mean that this define stops asio from using threads regardless of boost or not; which makes sense.
Why threading would cause a problem, I don't know. I'm not keen on delving that far.
I give up
I give up on asio. After looking through the code and documentation, it appears to have been developed with boost in mind more-so than as a standalone library. Apparently to the point where you need to use Boost over C++11, which I just don't care to do.
Best C/C++ Network Library: it looks like there are lots of alternatives.
To be honest, running my own synchronous socket calls in my own thread sounds like a better idea considering the control I'll gain. At least until asio enters the Standard Library and is implemented in Mingw-w64.
Considering that asio looks like a prime candidate to be in the standard library, or at least some flavour of it does, it's probably a good idea to stick with it.
I think the segmentation fault occurs because of mismatching asio type definitions between Server.hpp and Client.hpp. In your case, this could only happen if boost changes typedefs depending on defines set by <string>, <memory> or <vector>.
What I suggest to try is either:
Include the same headers in both Server/Client.hpp and main.cpp before including asio.hpp:
#include <string>
#include <memory>
#include <vector>
#define ASIO_STANDALONE
#include <asio.hpp>
Or simply include asio.hpp before any other include in your header files and remove the include from main.cpp.
From my POV, it is a gcc bug. If you use clang++ instead of g++.exe, the crash disappears even without fiddling with the #include ordering. Passing all source files to g++ in a single call also makes the crash disappear. This leads me to think that there is something wrong with gcc's COFF object file emission, since clang uses the linker from the same mingw distribution.
So, either use clang or apply the workaround from @Wouter Huysentruit's answer.
Note that the latest clang release (3.7.0) has another bug, not related to your problem, that makes linking fail. I had to use a nightly snapshot, which is quite stable anyway.
I had a similar problem. My app crashed in win_lock::lock(), but the win_lock::win_lock() constructor was never called!
When added -D_POSIX_THREADS to compiler flags it simply started to work.
I use mingw-w64-i686-7.1.0-posix-dwarf-rt_v5-rev0 on win7
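Applied to the Code::Blocks build log from EDIT 5, that workaround would amount to adding the define to each compile line; a sketch, reusing the paths from the log above:

```shell
g++.exe -std=c++11 -Wall -D_WIN32_WINNT=0x0501 -D_POSIX_THREADS -g -I..\..\asio-1.10.6\asio-1.10.6\include -c F:\GameDev\Dischan\Tests\Client.cpp -o obj\Debug\Client.o
```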

MongoDB initialize failed, DuplicateKey

I want to use MongoDB from my C++ application. I've downloaded the MongoDB binaries version 3.0.5 and the legacy C++ driver version 1.0.5 from Git and installed both.
I followed the instruction with this code:
#include <cstdio>
#include <mongo/bson/bson.h>
#include <mongo/client/dbclient.h>
int main(int argc, char *argv[])
{
    mongo::Status status = mongo::client::initialize();
    return 0;
}
After compiling and running I get the following message:
Attempt to add global initialiser failed, status: DuplicateKey GlobalLogManager Abort
Any ideas?
I can reproduce the same behavior when using the legacy driver compiled for C++03 (the default) and application code compiled with C++11/C++14. It always segfaults and sometimes writes the same message (depending on the optimization level). See the related bug on the mongo website.
The solution is to either:
compile your code with C++03 (-std=c++03)
recompile the driver with C++11 support.
For the C++11 support in driver, simply pass --c++11=on to scons.
scons --c++11=on install
Tested with GCC 4.9.1

C++ using getline() prints: pointer being freed was not allocated in XCode

I'm trying to use std::getline() but getting a strange runtime error:
malloc: *** error for object 0x10000a720: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
This is the code that produces this error:
//main.cpp
#include <iostream>
#include <sstream>
int main (int argc, char * const argv[])
{
    std::istringstream my_str("demo string with spaces");
    std::string word;
    while (std::getline(my_str, word, ' ')) {
        std::cout << word << std::endl;
    }
    return 0;
}
Before each word I get this error. From the comments it seems to be an OS X/Xcode-specific error. Any hints on that?
Update:
The error is only printed in Debug mode. If I build this code in Release mode everything is fine.
Update 2:
More info on that issue can be found here.
Solution:
Set
_GLIBCXX_FULLY_DYNAMIC_STRING=1
in your Preprocessor Macros in the target's info build tab.
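When building outside Xcode, the same macro can be passed on the compiler command line instead (a sketch; the file name is hypothetical, and the effect is identical to setting it in the build tab):

```shell
g++ -D_GLIBCXX_FULLY_DYNAMIC_STRING=1 -g main.cpp -o main
```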
System info:
OSX 10.6.2 | XCode 3.2 | g++ 4.2 | debug config for i386
At least one person has reported problems with g++ 4.2.1 on Apple that seem possibly related to yours, having to do with an improper configuration of the standard library involving the _GLIBCXX_FULLY_DYNAMIC_STRING definition (not that I understand any of what I'm typing here).
You might get a bit of a clue from the newsgroup thread that includes this message:
http://gcc.gnu.org/ml/gcc-bugs/2009-10/msg00807.html