I know there are many questions about this, but none of them helps me.
In my C++ project I use std::cout to print info and std::cerr to print errors.
But when I execute it, they don't print in the right order; they seem to print in groups. Sometimes all the cerr output comes first, then all the cout output, and sometimes the other way around.
I tried calling flush() after every line; it doesn't help (luckily, since it would be awful to have to do that every time...).
I also tried setvbuf(stdout, NULL, _IONBF, 0); with the same result...
If I run the program directly in a Linux console the order is correct, but the Eclipse console is more useful because of its colors.
Here is a code sample:
#include <iostream>
int main(int argc, char** argv)
{
    std::cerr << __LINE__ << std::endl;
    std::cerr << __LINE__ << std::endl;
    std::cout << __LINE__ << std::endl;
    std::cerr << __LINE__ << std::endl;
    std::cerr << __LINE__ << std::endl;
    std::cout << __LINE__ << std::endl;
}
And console print
11
12
14
15
13
16
==> Wrong order...
In this example all the cerr output comes out before the cout output.
OK, the situation is as follows: std::cout is buffered and std::cerr is not. std::cerr is unbuffered, so its output appears immediately, while std::cout's output only shows up once its buffer is flushed. They are also two separate streams, so there is no ordering guarantee between them.
I tried to output contents to a file
std::locale::global(std::locale());
std::wofstream file(outfilename, std::wofstream::binary);
for (const auto& j : grid[0]) {
    try {
        std::wcout << L"String in WideString " << decoder->decode(j) << std::endl;
        file << decoder->decode(j) << std::endl;
    }
    catch (std::exception& e) {
        std::cout << e.what() << std::endl;
    }
}
wcout stops outputting anything (even "String in WideString" is no longer printed) after some number of calls.
(I debugged it: the wcout statement is still executed as expected even after it stops producing output.)
wofstream also stops outputting after the same number of calls.
This is the first time I have used wide strings, streams, and couts.
Thanks for looking into this.
It is the € sign that stops wcout and wofstream from working; removing it from the input file I read the data from makes everything work as expected. Very strange.
I'm attempting to decode .gif files using giflib. The following code leads to a segfault on the final line (the output width/height is correct).
GifFileType* gif = DGifOpenFileName(filename.c_str(), &errCode);
if (gif == NULL) {
    std::cout << "Failed to open .gif, return error with type " << errCode << std::endl;
    return false;
}
int slurpReturn = DGifSlurp(gif);
if (slurpReturn != GIF_OK) {
    std::cout << "Failed to read .gif file" << std::endl;
    return false;
}
std::cout << "Opened .gif with width/height = " << gif->SWidth << " " << gif->SHeight << std::endl;
std::cout << gif->SavedImages[0].RasterBits[0] << std::endl;
Output:
Opened .gif with width/height = 921 922
zsh: segmentation fault (core dumped) ./bin/testgiflib
As I understand it, giflib should populate gif->SavedImages, but it is NULL after calling DGifSlurp().
Any ideas would be appreciated.
EDIT
I've added the following lines of code following a suggestion in comments:
if (gif->SavedImages == NULL) {
    std::cout << "SavedImages is NULL" << std::endl;
}
The line is printed, indicating that SavedImages is NULL.
EDIT2
Some GIFs on which this issue occurs (note that I can't get it to work on any GIFs):
https://upload.wikimedia.org/wikipedia/en/3/39/Specialist_Science_Logo.gif
GIF image data, version 89a, 921 x 922
https://upload.wikimedia.org/wikipedia/commons/2/25/Nasa-logo.gif
GIF image data, version 87a, 1008 x 863
Preface: in my version of giflib, 4.1.6, the DGifOpenFileName() function takes only the filename parameter and does not return an error code; this is an irrelevant detail here.
After adjusting for the API change, and adding the necessary includes, I compiled and executed the following complete, standalone test program:
#include <gif_lib.h>
#include <iostream>

int main()
{
    GifFileType* gif = DGifOpenFileName("Specialist_Science_Logo.gif");
    if (gif == NULL) {
        std::cout << "Failed to open .gif, return error with type " << std::endl;
        return false;
    }
    int slurpReturn = DGifSlurp(gif);
    if (slurpReturn != GIF_OK) {
        std::cout << "Failed to read .gif file" << std::endl;
        return false;
    }
    std::cout << "Opened .gif with width/height = " << gif->SWidth << " " << gif->SHeight << std::endl;
    std::cout << (int)gif->SavedImages[0].RasterBits[0] << std::endl;
}
Except for the presence of the header files, the slightly different DGifOpenFileName() signature, and my tweak of casting the second output line's value to an explicit (int), this is identical to your code. The code was also changed to explicitly open Specialist_Science_Logo.gif, one of the GIF image files you were having an issue with.
This code executed successfully on Fedora x86-64, giflib 4.1.6, gcc 5.5.1, without any issues.
Running the sample code under valgrind did not reveal any memory access violations.
From this, I conclude that there is nothing wrong with the shown code. It is obviously an excerpt from a larger application; the bug lies elsewhere in that application, or perhaps in giflib itself, and only manifests here.
I am writing a structure into a file using the following line:
std::fstream snif::fileHandler;
fileHandler.write(reinterpret_cast<char*>(rawData), sizeof(rawDataStruct));
where rawDataStruct is:
typedef struct _rawData rawDataStruct;
Now, after writing the structures into the file, I read a structure from the beginning of the binary file using:
std::cout << "going for print data read from file\n";
snif::fileHandler.seekg(0); //, std::ios::beg);
snif::fileHandler.read(reinterpret_cast<char*>(rawData), sizeof(rawDataStruct));
if (snif::fileHandler.fail()) {
    std::cerr << "reading error\n";
    exit(0);
}
std::cout << "PSH flag = " << rawData->tcpFlag.PSH << std::endl
          << "source port " << rawData->sourcePort << std::endl
          << "destination port " << rawData->destinationPort << std::endl
          << "sequence number " << rawData->sequenceNumber << std::endl
          << "acknowledge number " << rawData->acknowledgeNumber << std::endl
          << "acknowledge flag " << rawData->tcpFlag.ACK << std::endl
          << "SYN flag " << rawData->tcpFlag.SYN << std::endl
          << "FIN flag " << rawData->tcpFlag.FIN << std::endl;
but if I check my standard output, the last line getting printed is:
"going for print data read from file"
There is no code showing it, but in what mode is the file opened? Hopefully it is configured for binary. To see the available options, review std::basic_fstream and std::ios_base::openmode. I suggest making sure that the following open modes are set:
ios::binary | ios::out | ios::in | ios::trunc
Depending on the intended use, ios::trunc (truncate) may have to be replaced by ios::app (append).
While doing some basic testing on my C++11-compliant compiler, I discovered that the
fileHandler.write(reinterpret_cast<char*>(rawData), sizeof(rawDataStruct));
has a potential problem that is easily solved by adding the & operator in front of the rawData like this:
fileHandler.write(reinterpret_cast<char*>(&rawData), sizeof(rawDataStruct));
The compiler should have given a warning, but that depends on the compiler version and whether the -Wall option or better is used. This may explain why the screen output seemingly stops at the
"going for print data read from file"
message. The read function also needs the & operator in front of rawData:
snif::fileHandler.read(reinterpret_cast<char*>(&rawData), sizeof(rawDataStruct));
Note that reinterpret_cast itself cannot throw at run time, so an uncaught exception from the cast is not the cause; it is difficult to know more until the system and compiler are documented.
Additionally, if rawData is declared as a pointer, a better variable name would be pRawData, and posting more of the code would help. For example, if pRawData never points to a valid instance of rawDataStruct, unpredictable things will occur.
Could you help me decipher an unknown exception that is thrown by boost::iostreams::mapped_file_sink?
My configuration
boost 1.51
Visual Studio 2012 on Windows 7
GCC 4.7 on Ubuntu
Here is the code I have:
try
{
    boost::iostreams::mapped_file_params params_;
    boost::iostreams::mapped_file_sink sink_;
    params_.length = 0;
    params_.new_file_size = 1024;
    params_.path = "./test.bin";
    sink_.open(params_);
    sink_.close();
}
catch (std::ios::failure& ex)
{
    std::cout << "\t" << "what: " << ex.what() << "\n";
}
catch (std::system_error& ex)
{
    std::cout << "\t" << "code: " << ex.code() << " what: " << ex.what() << "\n";
}
catch (std::runtime_error& ex)
{
    std::cout << "\t" << ex.what() << "\n";
}
catch (boost::archive::archive_exception& ex)
{
    std::cout << "\t" << ex.what() << "\n";
}
catch (boost::exception& ex)
{
    std::cout << "blah\n";
}
catch (std::exception& ex)
{
    std::cout << "\t" << ex.what() << " --- " << typeid(ex).name() << "\n";
}
It always works on Windows.
On Ubuntu it creates an empty file of the given size but throws an exception on open(). If the file already exists, subsequent executions of the code don't throw.
The worst part is that I can't see the reason for the exception. I can only catch std::exception, whose what() returns a meaningless "std::exception".
In a desperate attempt to find out what's wrong, I printed typeid(ex).name(), which shows
N5boost16exception_detail10clone_implINS0_19error_info_injectorISt9exception
which according to Google means: boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::exception> >
Any ideas what's wrong?
You could run the code in a debugger and set a breakpoint in the function which actually throws exceptions, e.g., __cxa_throw. The name of the function may differ on your system: use nm -po program | less and search for a function containing "throw". Set a breakpoint in the one(s) that look most likely to be provided by the system. If only a few exceptions are being thrown, you can also set a breakpoint in std::exception::exception().
After 50 minutes of guessing, I found out that the problem was the length field. The documentation doesn't say so, but its default value has to be -1, as stated in the source code:
BOOST_STATIC_CONSTANT(size_type, max_length = static_cast<size_type>(-1));
I had intuitively assumed that if I set new_file_size to be greater than zero, length would be ignored.
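Based on that finding, a sketch of the corrected setup (a fragment, not a runnable program; it requires Boost.Iostreams and is otherwise the same as the code above):

```cpp
#include <boost/iostreams/device/mapped_file.hpp>

// Same parameters as above, but length is left at its default
// (max_length, i.e. static_cast<size_type>(-1)) instead of being set
// to 0 -- setting it to 0 is what triggered the exception on open().
boost::iostreams::mapped_file_params params_;
params_.path = "./test.bin";
params_.new_file_size = 1024;  // size of the newly created file

boost::iostreams::mapped_file_sink sink_;
sink_.open(params_);
sink_.close();
```

In other words, new_file_size controls the file's size, while length controls how much of it is mapped; the two are independent, and length = 0 asks for a zero-length mapping, which fails.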
I'm getting a strange error:
*** glibc detected *** findbasis: free(): invalid next size (normal): 0x0000000006a32ce0 ***
When I try to close() a std::ofstream:
void writeEvectors(int l, parameters params, PetscReal* evectors, int basis_size)
{
    for (int n = 1 + l; n <= params.nmax(); n++)
    {
        std::stringstream fname(std::ios::out);
        fname << params.getBasisFunctionFolder() << "/evectors_n" << std::setw(4) << std::setfill('0') << n << "_l" << std::setw(3) << std::setfill('0') << l;
        std::ofstream out(fname.str().c_str(), std::ios::binary);
        std::cerr << "write out file:" << fname.str() << " ...";
        out.write((char*)(evectors + n * basis_size), sizeof(PetscReal) * basis_size);
        std::cerr << "done1" << std::endl;
        if (out.fail() || out.bad())
            std::cerr << "bad or fail..." << std::endl;
        out.close();
        std::cerr << "done2" << std::endl;
    }
    std::cout << "done writing out all evectors?" << std::endl;
}
When run, this program never reaches "done2" (or "bad or fail..."), but "done1" is reached. Also, the data that is written out is good (it is what I expect).
I'm honestly at a loss as to why this happens; I can't think of any reason close() would fail.
Thanks for any help.
(I'm beginning to think it is some sort of compiler bug. I'm running GCC 4.1.2 (!) (RHEL 5, I believe) through mpicxx.)
The glibc error indicates heap corruption: something has written past the end of an allocated block, and glibc detects the damage when close() triggers a free(). If you run the program under Valgrind, a memory-error detection tool, it ought to give you a more helpful explanation of the error, pointing at the code that actually corrupted the heap rather than the code that tripped over it.
Running under Valgrind is fairly painless: compile the executable with the -g option to add debugging symbols (assuming you're using the GNU compiler), then in your Linux terminal run valgrind ./your_executable and see what happens.