errno doesn't change after putting a negative value in sqrt() - c++

With errno, I am trying to check whether cmath functions produce a valid result. But even after I pass a negative value to sqrt() or log(), errno stays at 0.
Does anyone know why, and how I can make errno behave correctly?
The environment is macOS Monterey version 12.6.1; the compiler is gcc version 11.3.0 (Homebrew GCC 11.3.0_1) or Apple clang version 14.0.0 (clang-1400.0.29.202) (I tried both compilers).
The compile command is g++ test_errno.cpp -o test_errno -std=c++14.
The piece of code I tried is copied directly from this page. Here it is:
#include <iostream>
#include <cmath>
#include <cerrno>
#include <cstring>
#include <clocale>
int main()
{
    double not_a_number = std::log(-1.0);
    std::cout << not_a_number << '\n';
    if (errno == EDOM) {
        std::cout << "log(-1) failed: " << std::strerror(errno) << '\n';
        std::setlocale(LC_MESSAGES, "de_DE.utf8");
        std::cout << "Or, in German, " << std::strerror(errno) << '\n';
    }
}
Its output did not include the error messages, which should have been printed if errno were set correctly.

It seems that on macOS, errno is not used, as this bug report (from GAVD's comment) shows.
I could verify that via the value of math_errhandling, following Pete Becker's comment.
There are two ways of handling math errors in C/C++: via errno, or via floating-point exceptions, as the second link above shows.
We can check which of them (or both) the system's math library employs by testing whether the macro constant math_errhandling has the MATH_ERRNO or MATH_ERREXCEPT bit set, like the following (copied from the second link):
std::cout << "MATH_ERRNO is "
<< (math_errhandling & MATH_ERRNO ? "set" : "not set") << '\n'
<< "MATH_ERREXCEPT is "
<< (math_errhandling & MATH_ERREXCEPT ? "set" : "not set") << '\n';
And on my system, the output is
MATH_ERRNO is not set
MATH_ERREXCEPT is set
This means the system does not report math errors via errno, but via floating-point exceptions.
That's why errno stays at 0 no matter what; I should have used std::fetestexcept() to check the error conditions.
With floating-point exceptions, std::feclearexcept(FE_ALL_EXCEPT); corresponds to errno = 0;, and, for example, std::fetestexcept(FE_DIVBYZERO) corresponds to checking errno == ERANGE.
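For completeness, here is a minimal sketch of the exception-based check for the same log(-1.0) case (assuming math_errhandling includes MATH_ERREXCEPT, as it does on my system):
#include <cfenv>
#include <cmath>
#include <iostream>
// #pragma STDC FENV_ACCESS ON  // strictly required by the standard, but often unsupported
int main()
{
    std::feclearexcept(FE_ALL_EXCEPT);     // analogous to errno = 0
    double not_a_number = std::log(-1.0);  // domain error
    std::cout << not_a_number << '\n';
    if (std::fetestexcept(FE_INVALID))     // analogous to errno == EDOM
        std::cout << "log(-1) raised FE_INVALID (domain error)\n";
    if (std::fetestexcept(FE_DIVBYZERO))   // analogous to errno == ERANGE (pole error)
        std::cout << "a pole error was raised\n";
}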

I'm going to take a stab in the dark and guess you are enabling fast-math in your build?
Without fast-math:
https://godbolt.org/z/vMo1P7Mn1
With fast-math:
https://godbolt.org/z/jEsGz7n38
The error handling within cmath tends to break things like vectorisation and constexpr (setting an external global variable is a side effect that breaks both). As a result, you are usually better off checking for domain errors yourself...
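For example, a minimal sketch of such a self-checked square root (the wrapper name is mine, purely illustrative):
#include <cmath>
#include <iostream>
// Validate the argument up front instead of inspecting errno or
// floating-point exceptions after the fact.
bool checked_sqrt(double x, double& out)
{
    if (x < 0.0)  // domain error: sqrt is undefined for negative inputs
        return false;
    out = std::sqrt(x);
    return true;
}
int main()
{
    double r;
    if (!checked_sqrt(-1.0, r))
        std::cout << "sqrt(-1) rejected before the call\n";
}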

Related

Cout unsigned char

I'm using Visual Studio 2019: why does this command do nothing?
std::cout << unsigned char(133);
It literally gets skipped by my compiler (I verified it using step-by-step debugging).
I expected a print of à.
All output after it and before the next statement is ignored, but not the previous output (std::cout << "12" << unsigned char(133) << "34"; prints "12").
I've also tried to change it to these:
std::cout << unsigned char(133) << std::flush;
std::cout << (unsigned char)(133);
std::cout << char(-123);
but the result is the same.
I remember that it worked before, and some of my programs that use this command have mysteriously stopped working... In a blank new project, same result!
I thought my new custom keyboard layout could be the cause, but disabling it doesn't change anything.
On other online compilers it works properly, so could it be a bug in Visual Studio 2019?
The "sane" answer is: don't rely on extended-ASCII characters. Unicode is widespread enough to make this the preferred approach:
#include <iostream>
int main() {
    std::cout << u8"\u00e0\n";
}
This will explicitly print the character à you requested; in fact, that's also how your browser understands it, which you can easily verify by pasting it into e.g. a Unicode character search, which will report LATIN SMALL LETTER A WITH GRAVE with the code point U+00E0, the same value you can spot in the code above.
In your example, there's no difference between using a signed or unsigned char; the byte value 133 gets written to the terminal, but how it is interpreted can differ from machine to machine, depending on how the terminal is set up. In fact, in a UTF-8 console this is simply an invalid sequence (the lone byte 0x85 is not a valid UTF-8 character), so if your OS was switched to UTF-8, that might be why you're seeing no output.
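If you do want to stay at the byte level, a minimal sketch (assuming a UTF-8 console) is to emit both bytes of the UTF-8 encoding of à yourself:
#include <iostream>
int main() {
    std::cout << "\xC3\xA0\n";  // the two UTF-8 bytes of U+00E0 (à)
}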
You can try to use static_cast
std::cout << static_cast<unsigned char>(133) << std::endl;
Or
std::cout << static_cast<char>(133) << std::endl;
Since all of this works on my machine, it's hard to pinpoint the problem; common sense would point to some configuration issue.

架 (U+67B6) is not graphical with en_US.UTF-8. What's going on?

This is a follow up question to:
std::isgraph asserts, how to fix?
After setting locale to "en_US.UTF-8", std::isgraph no longer asserts.
However, the Unicode character 架 (U+67B6) is reported as false by the same function. What is going on?
It's a Unicode build on the Windows platform.
If you want to test characters that are too large to fit in an unsigned char, you can try using the wide-character versions, or a Unicode library as already suggested (which is really the better option for portable code, as it removes any system- or locale-based differences in behavior).
This program:
#include <clocale>
#include <cwctype>
#include <iostream>
int main() {
    wchar_t x = L'\u67B6';
    char *loc = std::setlocale(LC_CTYPE, "");
    std::wcout << "Using locale " << loc << ".\n";
    std::wcout << "Character " << x << " is graphical: " << std::boolalpha
               << static_cast<bool>(std::iswgraph(x)) << '\n';
    return 0;
}
when compiled and run on my Ubuntu test system, outputs
Using locale en_US.utf8.
Character 架 is graphical: true
You said you're using Windows, but I don't have a Windows computer available for testing, so I can't confirm if this'll work there or not.
std::isgraph is not a Unicode-aware function.
It's an antiquity from C.
From the documentation:
The behavior is undefined if the value of ch is not representable as unsigned char and is not equal to EOF.
It only takes int because... it's an antiquity from C. Just like std::tolower.
You should be using something like ICU instead.
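If you go the ICU route, a minimal sketch (assuming ICU is installed and you link against icuuc) might look like this:
#include <unicode/uchar.h>
#include <iostream>
int main() {
    UChar32 c = 0x67B6;  // 架
    // u_isgraph() is Unicode-aware and does not depend on the C locale
    std::cout << std::boolalpha << static_cast<bool>(u_isgraph(c)) << '\n';
}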

istream eof discrepancy between libc++ and libstdc++

The following (toy) program returns different things when linked against libstdc++ and libc++. Is this a bug in libc++ or do I not understand how istream eof() works? I have tried running it using g++ on Linux and Mac OS X and clang on Mac OS X, with and without -std=c++0x. It was my impression that eof() does not return true until an attempt to read (by get() or something else) actually fails. This is how libstdc++ behaves, but not how libc++ behaves.
#include <iostream>
#include <sstream>
int main() {
std::stringstream s;
s << "a";
std::cout << "EOF? " << (s.eof() ? "T" : "F") << std::endl;
std::cout << "get: " << s.get() << std::endl;
std::cout << "EOF? " << (s.eof() ? "T" : "F") << std::endl;
return 0;
}
Thor:~$ g++ test.cpp
Thor:~$ ./a.out
EOF? F
get: 97
EOF? F
Thor:~$ clang++ -std=c++0x -stdlib=libstdc++ test.cpp
Thor:~$ ./a.out
EOF? F
get: 97
EOF? F
Thor:~$ clang++ -std=c++0x -stdlib=libc++ test.cpp
Thor:~$ ./a.out
EOF? F
get: 97
EOF? T
Thor:~$ clang++ -stdlib=libc++ test.cpp
Thor:~$ ./a.out
EOF? F
get: 97
EOF? T
EDIT: This was due to the way older versions of libc++ interpreted the C++ standard. The interpretation was discussed in LWG issue 2036, it was ruled to be incorrect and libc++ was changed.
Current libc++ gives the same results on your test as libstdc++.
old answer:
Your understanding is correct.
istream::get() does the following:
Calls good(), and sets failbit if it returns false (this adds a failbit to a stream that had some other bit set), (§27.7.2.1.2[istream::sentry]/2)
Flushes whatever's tie()'d if necessary
If good() is false at this point, returns eof and does nothing else.
Extracts a character as if by calling rdbuf()->sbumpc() or rdbuf()->sgetc() (§27.7.2.1[istream]/2)
If sbumpc() or sgetc() returned eof, sets eofbit. (§27.7.2.1[istream]/3) and failbit (§27.7.2.2.3[istream.unformatted]/4)
If an exception was thrown, sets badbit (§27.7.2.2.3[istream.unformatted]/1) and rethrows if allowed.
Updates gcount and returns the character (or eof if it couldn't be obtained).
(chapters quoted from C++11, but C++03 has all the same rules, under §27.6.*)
Now let's take a look at the implementations:
libc++ (current svn version) defines the relevant part of get() as
sentry __s(*this, true);
if (__s)
{
__r = this->rdbuf()->sbumpc();
if (traits_type::eq_int_type(__r, traits_type::eof()))
this->setstate(ios_base::failbit | ios_base::eofbit);
else
__gc_ = 1;
}
libstdc++ (as shipped with gcc 4.6.2) defines the same part as
sentry __cerb(*this, true);
if (__cerb)
{
__try
{
__c = this->rdbuf()->sbumpc();
// 27.6.1.1 paragraph 3
if (!traits_type::eq_int_type(__c, __eof))
_M_gcount = 1;
else
__err |= ios_base::eofbit;
}
[...]
if (!_M_gcount)
__err |= ios_base::failbit;
As you can see, both libraries call sbumpc() and set eofbit if and only if sbumpc() returned eof.
Your testcase produces the same output for me using recent versions of both libraries.
This was a libc++ bug and has been fixed as Cubbi noted. My bad. Details are here:
http://lwg.github.io/issues/lwg-closed.html#2036
The value of s.eof() is unspecified in the second call—it may be
true or false, and it might not even be consistent. All you can say is
that if s.eof() returns true, all future input will fail (but if it
returns false, there's no guarantee that future input will succeed).
After failure (s.fail()), if s.eof() returns true, it's likely (but
not 100% certain) that the failure was due to end of file. It's worth
considering the following scenario, however:
double test;
std::istringstream s1("");
s1 >> test;
std::cout << (s1.fail() ? "T" : "F") << (s1.eof() ? "T" : "F") << endl;
std::istringstream s2("1.e-");
s2 >> test;
std::cout << (s2.fail() ? "T" : "F") << (s2.eof() ? "T" : "F") << endl;
On my machine, both lines are "TT", despite the fact that the first
failed because there was no data (end of file), the second because the
floating point value was incorrectly formatted.
eofbit is set when an operation tries to read past the end of file; the operation may not fail (if you are reading an integer and there is no end of line after the integer, I expect eofbit to be set but the read of the integer to succeed). I.e. I get, and expect, FT for
#include <iostream>
#include <sstream>
int main() {
std::stringstream s("12");
int i;
s >> i;
std::cout << (s.fail() ? "T" : "F") << (s.eof() ? "T" : "F") << std::endl;
return 0;
}
Here I don't expect istream::get to try to read past the returned character (i.e. I don't expect it to hang until I enter the next line if I read a \n with it), so libstdc++ does indeed seem right, at least from a QoI point of view.
The standard description for istream::get just says "extracts a character c, if one is available" without describing how and so doesn't seem to prevent libc++ behavior.
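To sum up the practical advice here, a minimal sketch of the usual pattern (my own illustration): check the result of the read itself, and consult eof() only afterwards to see why input stopped.
#include <iostream>
#include <sstream>
int main() {
    std::istringstream s("1 2 3");
    int value;
    // Test the extraction itself; eof() alone only tells you why it stopped.
    while (s >> value)
        std::cout << value << '\n';
    if (s.eof())
        std::cout << "stopped at end of input\n";
    else
        std::cout << "stopped on malformed input\n";
}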

Where are the errnos defined? Example Linux C/C++ program for I2C

When something goes wrong in classic Linux C/C++ software, we have the magic variable errno that gives us a clue about what just went wrong.
But where are those errors defined?
Let's take an example (it's actually a piece from a Qt app, hence the qDebug()).
if (ioctl(file, I2C_SLAVE, address) < 0) {
    int err = errno;
    qDebug() << __FILE__ << __FUNCTION__ << __LINE__
             << "Can't set address:" << address
             << "Errno:" << err << strerror(err);
    ....
The next step is to look at what that errno was, so we can decide whether to just quit or try to do something about the problem.
So maybe we add an if or a switch at this point.
if (err == 9)
{
    // do something...
}
else
{
    // do something else
}
And my question is: where can I find out what error "9" represents?
I don't like that kind of magic number in my code.
/Thanks
They are generally defined in /usr/include/errno.h* and are accessible by including errno.h:
#include <errno.h>
I write out both the errno value and its textual meaning when reporting an error, getting the textual meaning via strerror():
if (something_went_wrong)
{
    log("Something went wrong: %s (%d)", strerror(errno), errno);
    return false;
}
From a Linux shell, however, you can use the perror utility to find out what different errno values mean:
$ perror 9
OS error code 9: Bad file descriptor
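Within a program, std::perror from <cstdio> does something similar for the current errno value; a small sketch:
#include <cstdio>
#include <cerrno>
int main() {
    errno = EBADF;                // simulate a failed call, just for illustration
    std::perror("ioctl failed");  // prints "ioctl failed: Bad file descriptor"
}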
EDIT:
Your code should be changed to use the symbolic value:
if (err == EBADF)
{
    // do something...
}
else
{
    // do something else
}
EDIT 2: * Under Linux, at least, the actual values are defined in /usr/include/asm-generic/{errno,errno-base}.h and other places, making it a bit of a pain to find them if you want to look at them.
NAME
errno - number of last error
SYNOPSIS
#include <errno.h>
DESCRIPTION
The <errno.h> header file defines the integer variable errno, which is set by system calls and some library functions in the event of an error to indicate
what went wrong. Its value is significant only when the return value of the call indicated an error (i.e., -1 from most system calls; -1 or NULL from most
library functions); a function that succeeds is allowed to change errno.
Valid error numbers are all non-zero; errno is never set to zero by any system call or library function.
Normally to implement error handling you need to know which particular error codes can be returned by the function. This info is available in man or at http://www.kernel.org/doc/man-pages/.
For example for ioctl call you should expect the following codes:
EBADF d is not a valid descriptor.
EFAULT argp references an inaccessible memory area.
EINVAL Request or argp is not valid.
ENOTTY d is not associated with a character special device.
ENOTTY The specified request does not apply to the kind of object that the
descriptor d references.
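Building on that list, a minimal, illustrative sketch of handling the ioctl error symbolically (the helper function is mine):
#include <cerrno>
#include <cstdio>
#include <cstring>
// Dispatch on the symbolic errno values listed above instead of magic numbers.
void report_ioctl_error(int err)
{
    switch (err) {
    case EBADF:  std::printf("bad file descriptor: %s\n", std::strerror(err)); break;
    case EINVAL: std::printf("invalid request or argument: %s\n", std::strerror(err)); break;
    case ENOTTY: std::printf("not an ioctl-capable device: %s\n", std::strerror(err)); break;
    default:     std::printf("unexpected error %d: %s\n", err, std::strerror(err)); break;
    }
}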
EDIT:
If <errno.h> is included, all files defining the possible error codes are included as well, so you don't really need to know their exact location.
The file is:
/usr/include/errno.h
--p

sstream not working...(STILL)

I am trying to convert a double to a string through a stringstream, but it is not working.
std::string MatlabPlotter::getTimeVector( unsigned int xvector_size, double ts ){
    std::string tv;
    ostringstream ss;
    ss << "0:" << ts << ":" << xvector_size;
    std::cout << ss.str() << std::endl;
    return ss.str();
}
It outputs only "0:" on my console...
I'm working on two projects, both with the same problem. I'm posting a different one that runs into the same issue here:
http://pastebin.com/m2dd76a63
I have three files, PolyClass.h, PolyClass.cpp, and the main. The function with the problem is PrintPoly. Can someone help me out? Thanks a bunch!
You're printing correctly; however, your logic in the order of printing is incorrect.
I modified it to work the way I think you wanted it to; let me know if this helps.
http://pastebin.com/d3e6e8263
Old answer:
Your code works, though ostringstream is in the std namespace. The problem is in your file printing code.
Can I see your call to the function?
I made a test case:
// #include necessary headers
int main(void)
{
    std::string s;
    s = MatlabPlotter::getTimeVector(1, 1.0);
}
The output I get is 0:1:1
The following code is 100% correct:
#include <iostream>
#include <sstream>
#include <string>
// removed MatlabPlotter namespace, should have no effect
std::string getTimeVector(unsigned int xvector_size, double ts)
{
    // std::string tv; // not needed
    std::ostringstream ss;
    ss << "0:" << ts << ":" << xvector_size;
    std::cout << ss.str() << std::endl;
    return ss.str();
}
int main(void)
{
    // all work
    // 1:
    getTimeVector(0, 3.1415);
    // 2: (note, prints twice, once in the function, once outside)
    std::cout << getTimeVector(0, 3.1415) << std::endl;
    // 3: (note, prints twice, once in the function, once outside)
    std::string r = getTimeVector(0, 3.1415);
    std::cout << r << std::endl;
}
Find where we differ; that's likely your source of error. Because it stops at your double, I'm guessing the double you're trying to print is infinity, NaN (not a number), or some other error state.
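If that guess is right, a small sketch of how you might check for it (my own addition, not part of the original answer):
#include <cmath>
#include <iostream>
int main() {
    double ts = std::nan("");  // deliberately NaN for illustration
    if (!std::isfinite(ts))
        std::cout << "ts is NaN or infinite; fix it before formatting\n";
    else
        std::cout << "0:" << ts << ":" << 10 << '\n';
}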
I can't really help with the "no output" part of this, as you didn't show the code that tries to output it. As a guess, did you perhaps not put an EOL in there somewhere? Some systems won't show any text output until they hit a newline. You can do this by tacking a << std::endl onto your line, or a '\n' onto your string.
Since you didn't put down a using declaration for it, you need to use the type std::ostringstream. This is similar to how you had to use std::string instead of just string.
Also, were it me, I'd get rid of that temp variable and just return ss.str(); it is less code (to possibly get wrong), and probably less work for the program.
Well, I tried the code you linked to and it outputs
B 4
A 5
B 4
C 3
x^ + 5x^ + 3
for me before crashing, although the crash happens after PrintPoly. From looking at the code, this is what I'd expect it to print. Are you saying you get no integers appearing after the letters?
Thanks all for your input! Not sure of the exact error, but it must be some setting in Xcode that is messing it up. I made a CMakeLists.txt file and compiled it from the terminal using
cmake -G XCode ..
which produced an Xcode project. I ran it, and now it works fine... Now, would anyone happen to know what might cause Xcode to do this? I'm running version 3.2 with the following:
64-bit
Component versions
Xcode IDE: 1610.0
Xcode Core: 1608.0
ToolSupport: 1591.0