I have written a simple console-based web server that I am trying to port to Qt so I can develop a GUI for it.
In the class that controls reading files from the hard disk I have been using exceptions to indicate when there has been an error reading a file, etc.
Now when I run the code compiled with Qt 5.7, my catch block suddenly no longer picks up my exception. Instead the exception propagates all the way up and crashes the application.
But when I write catch(...) to pick up all kinds of exceptions, it works without crashing.
This is the code in my file-reading function:
fstream file;
file.exceptions( ifstream::failbit | ifstream::badbit );
try {
    string content;
    //Open file and read the file to string
    file.open(this->getDirectory() + filename, ios_base::in | ios_base::binary);
    file.seekg(0, file.beg);
    char tmpChar = 0;
    while( file.peek() != EOF )
    {
        file.read(&tmpChar, sizeof(tmpChar));
        content.push_back(tmpChar);
    }
    file.close();

    unique_ptr<fileObject> tmpPtr( new fileObject(filename, content, "text/html") );
    if( this->addFileToCache(move(tmpPtr)) )
        return true;
    return false;
}
catch(const ios_base::failure& e){
    if(file.is_open()) file.close();
    return false;
}
Why isn't this working with Qt? catch(...) picks up the ios_base::failure exception, so I don't understand why my code doesn't work anymore.
UPDATE:
When I catch the exception with catch(exception& e) and print its info, .what() returns "basic_ios::clear" and typeid(e).name() returns "NSt8ios_base7failureE".
I am compiling with MinGW 5.3.0 32-bit in Qt Creator; the compiler I used when my exceptions still worked was MinGW-w64 4.7.3.
The problem appears to be a bug in the libstdc++ shipped with GCC >= 5.0: some parts of the standard library throw exceptions using the C++98 ABI while the catching code uses the C++11 ABI. The bug is still not fixed, and there seems to be no simple workaround other than downgrading the compiler and the standard library, or not using exceptions together with iostreams.
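Until that bug is fixed, one practical workaround hinted at above is to not enable stream exceptions at all and to check the stream state manually instead. Below is a minimal sketch of the reading function without stream exceptions; it reuses the names from the question (getDirectory, fileObject, addFileToCache) and needs <iterator> for istreambuf_iterator:
fstream file(this->getDirectory() + filename, ios_base::in | ios_base::binary);
if( !file )                       // open failed: report failure instead of throwing
    return false;
string content( (istreambuf_iterator<char>(file)), istreambuf_iterator<char>() );
if( file.bad() )                  // a low-level read error occurred
    return false;
unique_ptr<fileObject> tmpPtr( new fileObject(filename, content, "text/html") );
return this->addFileToCache(move(tmpPtr));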
Related
Hello, I'm trying to read metadata from an image using exiv2, but when opening the file I get the following error: Microsoft C++ exception: std::bad_alloc
I'm using the default C++ Visual Studio 2019 compiler.
#include <iostream>
#include <string>
#include <cassert>
#include <cstdlib>
#include <sys/stat.h>
#include "exiv2/exiv2.hpp"

inline bool file_exists(const std::string& name) {
    struct stat buffer;
    return (stat(name.c_str(), &buffer) == 0);
}

int main(void)
{
    try
    {
        Exiv2::XmpParser::initialize();
        ::atexit(Exiv2::XmpParser::terminate);
#ifdef EXV_ENABLE_BMFF
        Exiv2::enableBMFF();
#endif
        const char* file = "E:/img/DJI_0001.jpg";
        if (!file_exists(file)) return 0;

        Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(file);
        assert(image.get() != 0);
        image->readMetadata();
    }
    catch (Exiv2::Error& e) {
        std::cout << "Caught Exiv2 exception '" << e.what() << "'\n";
        return -1;
    }
    return 0;
}
This is probably due to an ABI incompatibility between your C++ standard library version and the one exiv2 was compiled with. I suppose you are using a pre-built exiv2 library?
You can check this by calling Exiv2::versionNumber() vs. Exiv2::versionString(). The former will work, but the latter will probably crash because of the std::string involved.
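A small sketch of that check (nothing beyond the two calls; if the ABIs are incompatible, the call returning a plain integer succeeds while the one returning a std::string typically crashes):
#include <iostream>
#include "exiv2/exiv2.hpp"

int main()
{
    std::cout << "versionNumber: " << Exiv2::versionNumber() << std::endl; // plain integer, unaffected by the std::string ABI
    std::cout << "versionString: " << Exiv2::versionString() << std::endl; // returns std::string, typically crashes on ABI mismatch
    return 0;
}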
Solution: Do not use the pre-compiled version of exiv2; compile it yourself within the same dev environment as your main project.
I had the same problem with the open method.
I followed matthias's instructions and built the lib myself.
At first the error remained. I then created specific versions for Debug and Release and used the appropriate version for my program.
This fixed the issue on my side.
Edit: I also tried the pre-compiled version with my release build. That also worked well.
I have a unit test like this:
TEST_F( SocketServerTest, ParseTest ) {
try {
// throw InvalidAddressException( "bla" );
// auto x = dv::socket::parseEndpoint( "127.0.0.1" );
EXPECT_THROW( auto x = dv::socket::parseEndpoint( "127.0.0.1" ), InvalidAddressException );
} catch ( const InvalidAddressException &e ) {
FAIL() << boost::diagnostic_information( e, true );
} catch ( const std::exception &e ) {
FAIL() << boost::diagnostic_information( e, true );
} catch ( ... ) {
FAIL() << "bla";
}
}
That works with GCC and MSVC, but with Clang the exception is somehow caught by the default Google Test handler and I get
unknown file: Failure
C++ exception with description "Invalid address 127.0.0.1 no port number" thrown in the test body.
If I throw the exception directly in the test it works; if I call the code without the EXPECT_THROW it hits the default C++ exception handler and aborts the program.
It does the same thing with Apple Clang and Clang 8 on macOS and Clang 7 on Fedora 29, but works with GCC on Fedora 29 and MSVC 2019 on Windows 10.
The exception lives in a shared library and is defined using preprocessor macros and a CMake-generated header file for the visibility attributes (sketched below).
Other exceptions that are defined in the same way work in other places, and the same exception works when thrown from different code in the same library.
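Roughly, the definition follows the usual exported-class pattern (hypothetical names here, since the real header isn't shown; MYLIB_EXPORT stands in for the CMake-generated visibility macro):
#ifndef MYLIB_EXPORT
#define MYLIB_EXPORT __attribute__((visibility("default")))   // placeholder for the generated export macro
#endif

#include <stdexcept>
#include <string>

class MYLIB_EXPORT InvalidAddressException : public std::runtime_error
{
public:
    explicit InvalidAddressException(const std::string& message)
        : std::runtime_error(message) {}
};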
I've tried pulling just this code out into a standalone CMake project but can't get it to fail in the same way.
How can I debug why this is happening? I've been at it for days with no progress.
After writing all that I realized I had __attribute__((pure)) on parseEndpoint; removing that made it all work.
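For anyone hitting the same thing, here is a minimal, hypothetical illustration of the kind of declaration that caused it (not the original code). Marking a throwing function as pure tells the optimizer it has no observable side effects, which can interact badly with surrounding exception handling:
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for parseEndpoint: a pure-attributed function that throws
// breaks the optimizer's assumption that the call has no side effects.
__attribute__((pure)) int parseEndpoint(const std::string& address)
{
    throw std::runtime_error("Invalid address " + address + " no port number");
}

int main()
{
    try {
        int port = parseEndpoint("127.0.0.1");
        std::cout << port << '\n';
    } catch (const std::exception& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}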
In my application I'm using Boost.Coroutine2 to generate some objects which I have to decode from a stream. These objects are generated using coroutines. My problem is that as soon as I reach the end of the stream and would theoretically throw std::ios_base::failure, my application crashes under certain conditions.
The function providing this feature is implemented in C++, exported as a C function and called from C#. This all happens in a 32-bit process on Windows 10 x64. Unfortunately it only reliably crashes when I start my test from C# in debugging mode WITHOUT the native debugger attached. As soon as I attach the native debugger everything works as expected.
Here is a small test application to reproduce this issue:
Api.h
#pragma once
extern "C" __declspec(dllexport) int __cdecl test();
Api.cpp
#include <iostream>
#include <vector>
#include <sstream>
#include "Api.h"
#define BOOST_COROUTINES2_SOURCE
#include <boost/coroutine2/coroutine.hpp>
int test()
{
using coro_t = boost::coroutines2::coroutine<bool>;
coro_t::pull_type source([](coro_t::push_type& yield) {
std::vector<char> buffer(200300, 0);
std::stringstream stream;
stream.write(buffer.data(), buffer.size());
stream.exceptions(std::ios_base::eofbit | std::ios_base::badbit | std::ios_base::failbit);
try {
std::vector<char> dest(100100, 0);
while (stream.good() && !stream.eof()) {
stream.read(&dest[0], dest.size());
std::cerr << "CORO: read: " << stream.gcount() << std::endl;
}
}
catch (const std::exception& ex) {
std::cerr << "CORO: caught ex: " << ex.what() << std::endl;
}
catch (...) {
std::cerr << "CORO: caught unknown exception." << std::endl;
}
});
std::cout << "SUCCESS" << std::endl;
return 0;
}
C#:
using System;
using System.Runtime.InteropServices;
namespace CoroutinesTest
{
class Program
{
[DllImport("Api.dll", EntryPoint = "test", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
internal static extern Int32 test();
static void Main(string[] args)
{
test();
Console.WriteLine("SUCCESS");
}
}
}
Some details:
We are using Visual Studio 2015 (VC14) and dynamically link the C++ runtime.
The test library statically links Boost 1.63.0.
We also tried to reproduce this behaviour by calling the functionality directly from C++ and from Python. Neither attempt has reproduced the crash so far.
If you start the C# code with Ctrl+F5 (i.e. without the .NET debugger) everything is also fine. Only if you start it with F5 (with the .NET debugger attached) does the Visual Studio instance crash. Also be sure not to enable the native debugger!
Note: If we don't use exceptions on the stream, everything seems to be fine as well. Unfortunately the code decoding my objects makes use of them, so I cannot avoid this.
It would be amazing if you had some additional hints on what might go wrong here, or a solution. I'm not entirely sure if this is a Boost bug; it could also be the C# debugger interfering with boost-context.
Thanks in advance! Best Regards, Michael
I realize this question is old, but I just finished reading a line in the docs that seemed pertinent:
Windows using fcontext_t: turn off global program optimization (/GL) and change /EHsc (the compiler assumes that functions declared as extern "C" never throw a C++ exception) to /EHs (tells the compiler that functions declared as extern "C" may throw an exception).
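For illustration, assuming the DLL were built directly with cl.exe (a sketch; adapt the idea to your actual build system), the change amounts to using /EHs instead of the default /EHsc and keeping whole-program optimization off with /GL-:
cl /EHs /GL- /LD Api.cpp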
This is just a guess, but in your coroutine I think you are supposed to push a boolean to your sink (named yield in your code), and the code is not doing it.
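If that guess is right, the read loop from the question would push a value on every iteration, roughly like this (a sketch based on the question's code, not a verified fix):
while (stream.good() && !stream.eof()) {
    stream.read(&dest[0], dest.size());
    std::cerr << "CORO: read: " << stream.gcount() << std::endl;
    yield(stream.gcount() > 0);   // push a bool to the sink on each iteration
}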
One of the ancient anti-patterns is people checking error status and then returning fairly useless messages like "operation failed" instead of "operation failed because ...". I want C++ file I/O operations to fail with an exception and to get an error message explaining why they failed. Specifically, I want an ofstream object to raise an exception when file creation fails and to get a somewhat more useful message such as "permission denied" or "no such file or directory".
This is trivial to do in languages such as C#, Java or Python, but somehow there is no well-documented way to do this in C++. By default, iostream objects just fail silently. There is some global error code that gets set, but I would rather have exceptions. After a lot of searching, I read that you can enable exceptions using the following line of code:
my_file.exceptions(my_file.exceptions() | std::ios::failbit | std::ifstream::badbit);
That works, but now the exception that gets raised is std::ios_base::failure, and ex.what() returns useless strings like "basic_ios::clear". As per the C++11 spec, std::ios_base::failure is supposed to inherit from system_error, which has .code().message() to give the actual error message. Let's keep that weirdness aside and not point fingers at whoever decided what() should not return the actual error message :). The problem is that even when compiling with C++11 and G++ 4.8.4, I find that std::ios_base::failure is not actually derived from system_error.
Questions
Why is std::ios_base::failure not inherited from system_error in the latest G++ 4.8.4, even when compiling in C++11 mode? Is GCC's implementation of C++11 incomplete in this area, or do I need to do something more?
How do I achieve my goal of raising exceptions when I/O operations fail in C++ and getting error messages? Is there no way to do this even in the latest C++11 or C++14? What are the alternatives?
Here's the sample code:
#include <iostream>
#include <fstream>
#include <system_error>
int main() {
try {
std::ofstream flog;
flog.exceptions(flog.exceptions() | std::ios::failbit | std::ifstream::badbit);
flog.open("~/watever/xyz.tsv", std::ios::trunc);
}
catch (const std::ios_base::failure &ex) {
std::cout << "ios_base::failure: " << ex.what();
}
catch(const std::system_error& ex) {
std::cout << "system_error: " << ex.code().message();
}
}
According to GCC's C++11 status documentation, "System error support" is fully supported.
And according to Bug 57953 - no C++11 compliant std::ios_base::failure found, std::ios_base::failure was changed in Revision 217559 to derive from system_error in C++11. If you look in the updated ios_base.h, std::ios_base::failure derives from system_error if _GLIBCXX_USE_CXX11_ABI is defined. That define is mentioned in GCC's Using Dual ABI documentation.
However, there is a still-open regression regarding ABI issues with std::ios_base::failure, due to the fact that some pieces of the standard library are not built with _GLIBCXX_USE_CXX11_ABI defined:
Bug 66145 - [5/6/7 Regression] std::ios_base::failure objects thrown from libstdc++.so use old ABI
So the short answer is: you probably can't, at least not with GCC's current implementation, unless you can recompile everything in the library with _GLIBCXX_USE_CXX11_ABI defined.
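For completeness, that macro is set on the compiler command line when rebuilding, e.g. (a sketch with a generic file name; whether it actually helps depends on every binary in the process being built the same way):
g++ -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=1 -c myfile.cpp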
On POSIX systems, iostream failures set errno, so you can get meaningful error messages from that. I often do this:
#include <cerrno>
#include <cstdlib>
#include <cstring>
#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

std::string getenv_as_string(std::string const& var)
{
    auto ptr = std::getenv(var.c_str());
    return ptr ? ptr : "";
}

// ~ doesn't work from C++
const std::string HOME = getenv_as_string("HOME");

int main()
{
    try
    {
        std::ofstream ifs;
        ifs.open(HOME + "/watever/xyz.tsv", std::ios::trunc);

        if(!ifs)
            throw std::runtime_error(std::strerror(errno));

        // Do stuff with ifs
    }
    catch(std::exception const& e)
    {
        std::cerr << e.what() << '\n';
    }
}
Output:
No such file or directory
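A variant of the same idea (not from the original answer, just a sketch) is to wrap errno in a std::system_error, so callers get a readable message from what() plus a queryable error code:
#include <cerrno>
#include <fstream>
#include <iostream>
#include <system_error>

int main()
{
    try
    {
        std::ofstream ofs("/watever/xyz.tsv", std::ios::trunc);
        if (!ofs)
            // errno + generic_category() yields messages like "No such file or directory"
            throw std::system_error(errno, std::generic_category(), "open failed");
        // Do stuff with ofs
    }
    catch (const std::system_error& e)
    {
        std::cerr << e.what() << " (code " << e.code().value() << ")\n";
    }
}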
I'm currently working on a game with a plugin-based architecture. The executable consists mostly of a shared-library loader and a couple of interface definitions. All the interesting stuff happens in dynamic shared libraries which are loaded at start-up.
One of the library classes throws an exception under certain circumstances. I would expect to be able to catch this exception and do useful stuff with it, but this is where it gets weird. See the following simplified example code:
main.cpp
int main()
{
try
{
Application app;
app.loadPlugin();
app.doStuffWithPlugin();
return 0;
}
catch(const std::exception& ex)
{
// Log exception
return 1;
}
}
Application.cpp
...
void doStuffWithPlugin()
{
plugin.doStuff();
}
...
Plugin.cpp
...
void doStuff()
{
throw exception_derived_from_runtime_error("Something is wrong");
}
...
Plugin.cpp lives in a dynamic shared library which is successfully loaded and which afterwards created an object of class Plugin. The exception_derived_from_runtime_error is defined in the application. There is no throw() or noexcept.
I would expect to catch the exception_derived_from_runtime_error in main, but that doesn't happen. Compiled with GCC 4.8 using C++11, the application crashes with "This application has requested the Runtime to terminate it in an unusual way.".
I replaced catch(const std::exception& ex) with catch(...) but that didn't make any difference. The weird part is that if I catch the exception in doStuffWithPlugin() it works. If I rethrow it using throw; it fails again, but it can be caught if I use throw ex;:
Application.cpp
void doStuffWithPlugin()
{
try
{
plugin.doStuff();
}
catch(const exception_derived_from_runtime_error& ex)
{
// throw; <- Not caught in main().
// throw ex; <- Caught in main().
}
}
Hopefully somebody has an idea. Thanks for any help you can give.
As mentioned in the comments, this seems to be a problem with shared libraries on Windows. The behavior occurs if the library is unloaded while an object created in that library remains in memory; the application then seems to crash immediately. The only references to this problem I found concern GCC used as a cross compiler or MinGW. See also https://www.sourceware.org/ml/crossgcc/2005-01/msg00022.html
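If that is indeed the cause, the practical rule is to destroy everything a plugin created before its library handle is released. A sketch of that ordering (hypothetical interface and export names, Windows API only for illustration):
#include <windows.h>
#include <exception>
#include <memory>

// Hypothetical plugin interface and factory signature. The point is only the
// ordering: plugin objects must be gone before FreeLibrary runs, so the module's
// code, typeinfo and exception tables are still mapped while they are alive.
struct Plugin { virtual void doStuff() = 0; virtual ~Plugin() = default; };
using CreatePluginFn = Plugin* (*)();

int main()
{
    HMODULE lib = ::LoadLibraryA("plugin.dll");
    if (!lib) return 1;

    auto create = reinterpret_cast<CreatePluginFn>(::GetProcAddress(lib, "createPlugin"));
    if (create)
    {
        std::unique_ptr<Plugin> plugin(create());
        try { plugin->doStuff(); }
        catch (const std::exception& ex) { /* log and handle */ }
    }   // plugin destroyed here, while the DLL is still loaded

    ::FreeLibrary(lib);   // unload only after all plugin objects are destroyed
    return 0;
}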