Which functions are affected by -fno-math-errno? - c++

I have been excited by this post: https://stackoverflow.com/a/57674631/2492801 and I am considering using -fno-math-errno. But I would like to be sure that I do not harm the behaviour of the software I am working on.
Therefore I have checked the (rather large) codebase to see where errno is being used and I wanted to decide whether these usages interfere with -fno-math-errno. But how to do that? The documentation says:
-fno-math-errno
Do not set errno after calling math functions that are executed with a single instruction, e.g., sqrt...
But how can I know which math functions are executed with a single instruction? Is this documented somewhere? Where?
It seems as if the codebase I use relies on errno especially when calling strtol and when working with streams. I guess that strtol is not executed with a single instruction. Is it considered to be a math function at all? How can I be sure?

You can find a list of the functions affected by -fno-math-errno in GCC's builtins.def (search for "ERRNO"). It seems that only some functions from the math.h header (cos, sin, exp, etc.) are affected. The treatment of other standard functions that use errno (strtol, etc.) will not change under this flag.
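A small sketch of what that means in practice (the exact code generation depends on your GCC version and target, so treat this as an illustration rather than a guarantee): with -fno-math-errno the compiler may skip the errno bookkeeping around sqrt, while errno-based error reporting from strtol is untouched by the flag.
#include <cerrno>
#include <cmath>
#include <cstdio>
#include <cstdlib>

int main() {
    errno = 0;
    double d = std::sqrt(-1.0);   // domain error; with -fno-math-errno, errno may stay 0 here
    std::printf("sqrt(-1) = %f, errno = %d\n", d, errno);

    errno = 0;
    long v = std::strtol("99999999999999999999", nullptr, 10);   // out of range
    // strtol is not a math builtin, so it still reports ERANGE via errno regardless of the flag
    std::printf("strtol -> %ld, errno = %d (ERANGE = %d)\n", v, errno, ERANGE);
}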

Related

Does C++ replace built-in operators with function calls?

I was recently reading about operator overloading in C++, so I was wondering whether the built-in operators are replaced by function calls behind the scenes.
For example, is a + b (where a and b are int) replaced by a.operator+(b)? Or does the compiler do something different?
There is no int::operator+. Whether the compiler chooses to compile a + b directly to assembly (likely) or replace it with some internal function like int __add_ints(int, int) (unlikely) is an implementation detail.
The internals of the compiler are complex, but on a conceptual level the answer is YES: whenever the compiler sees a + b, it has to check for known functions named operator+ and replace the expression with a call to the right one.
In practice, there are two important nuances:
The compiler knows about the fundamental types (whose operators you can't overload), so it doesn't need to insert a function call; it can immediately emit the right instructions.
Inlining is an important optimization, which removes the function call when that is worthwhile (see the sketch below).
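A minimal sketch of the distinction drawn above (the Money type is just an invented example): for a user-defined type the compiler really does resolve a + b into a call to operator+, while for int it emits the addition directly, since there is no int::operator+ to call.
struct Money {
    int cents;
    Money operator+(Money other) const { return Money{cents + other.cents}; }
};

int main() {
    Money m = Money{50} + Money{25};   // resolved to Money{50}.operator+(Money{25})
    int   i = 2 + 3;                   // compiled straight to an add instruction (usually folded)
    return m.cents + i;
}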
Maybe. Many arithmetic operations map directly onto CPU instructions, and the compiler will just generate the appropriate code in place. If that's not possible, the compiler will generate a call to an appropriate function, and the runtime library will have a definition of that function. Back in the olden days, floating-point math was usually done with function calls. These days, CPUs for desktop systems have floating-point hardware, and floating-point math is generated as direct CPU instructions. But embedded systems often don't have hardware for that, so the compiler generates function calls instead.
Back in the really early days, even integer math was sometimes done with function calls. Because of this, the IBM 1620 was sometimes referred to as the CADET: Can't Add, Doesn't Even Try.

Is it possible to see builtin function definition?

This is pure curiosity: I'm using a cmath function, say cos, whatever.
After looking at the cmath header, I see that kind of things:
inline _GLIBCXX_CONSTEXPR float
acos(float __x)
{ return __builtin_acosf(__x); }
And there is that "__builtin_" prefix, and I can't find any source for it with Google.
I'm interested in seeing the source code of how they implement a math function; I guess they use Taylor series and such, but I wanted to see how they do it. Is it hidden for proprietary reasons, or can it be found?
GCC __builtin_ functions are not actual functions; that is, they are not really declared and implemented anywhere as such. Instead they are detected by the compiler and mapped to some implementation. See builtins.h, builtins.c and builtins.def. It is rather hard to track down where the actual implementations reside, but it seems that these mathematical functions are taken from the implementation of the libc that you are using. For example, digging through the glibc source code, one can find at least a couple of implementations of __ieee754_acosf (which then seems to be aliased and wrapped by other functions): one in C, e_acosf.c, and one in x86 assembly, e_acosf.S (backup links, since these are unofficial GitHub mirrors: e_acosf.c, e_acosf.S). You could say that readability was not their first priority. You can find similar code in the musl libc source tree: acosf.c in C and acosf.s in x86 assembly.
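As a quick illustration (GCC/Clang only, and the exact behaviour is an optimisation detail, not a guarantee): you can call the builtin directly, and for constant arguments the compiler typically folds it at compile time instead of emitting a call into libm.
#include <cmath>
#include <cstdio>

int main() {
    float folded  = __builtin_acosf(1.0f);  // usually evaluated by the compiler to 0.0f
    float runtime = std::acos(0.5f);        // may end up as a call into libm/libc
    std::printf("%f %f\n", folded, runtime);
}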

Why does wxMkDir return 0 when successfully creating a directory?

So, I have to admit upfront I already know the answer. I'm asking so others who happen to run into the same problem can find a solution to the problem caused by incorrect documentation.
My environment is VS 2015 C++, wxWidgets 3.0.2, developing on Windows 7.
In some legacy code, calls to wxMkDir were not being checked for success. According to wxWidgets documentation, wxMkDir has a return type of bool, and returns true if successful. However, it returns 0 when successful.
Why?
The answer is two-fold: first, there are two functions with similar names, wxMkdir and wxMkDir, where the former is documented and the latter is not. Second, the seemingly reasonable presumption that they behave the same way is not a valid assumption.
The undocumented function wxMkDir maps to wxCRT_MkDir, which in turn maps to wxCRT_MkDirA, then to wxPOSIX_IDENT(mkdir), which creates a platform-dependent name for the mentioned POSIX function, mkdir. According to the POSIX documentation for mkdir:
Upon successful completion, mkdir() shall return 0. Otherwise, -1 shall be returned, no directory shall be created, and errno shall be set to indicate the error.
So, conditionals like:
if (!wxMkDir(newDir)) {
    // handle the error here
}
will take the error branch on success and miss real failures, but:
if (wxMkDir(newDir) != 0) {
    // handle the error here
}
will work as anticipated, based on whether the directory was created or not.
The documented function wxMkdir is implemented in wx source file filefn.cpp, and utilizes mkdir, but with conditionals like the above to map to the appropriate bool return value.
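A rough sketch (not the actual wxWidgets source, and MyMkdir is a made-up name) of what such a bool-returning wrapper over the POSIX mkdir() looks like:
#include <sys/stat.h>
#include <sys/types.h>

bool MyMkdir(const char* path, mode_t perm = 0777)
{
    // POSIX mkdir() returns 0 on success and -1 on error (setting errno),
    // so the result has to be translated into a bool.
    return mkdir(path, perm) == 0;
}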
wxMkdir() and wxMkDir() are the unfortunate and ugly exceptions to the general rule that wxWidgets provides a wxFoo() wrapper for every standard function foo() (meaning either ANSI C or POSIX as, in practice, the latter is about as standard as, and more widespread than, C99) existing in both narrow (char*) and wide (wchar_t*) versions.
So, according to this general rule, you'd expect wxMkdir() to behave like the standard mkdir(), but unfortunately wxMkdir() predated, by quite a few years, the Unicode-ification of wxWidgets, so this rule couldn't be implemented for it because of backwards compatibility, and another function had to be invented to be just a wrapper for the standard mkdir().
And by now, of course, the weight of backwards compatibility is even heavier and there really doesn't seem to be anything reasonable to do here -- other than advising people to use wxFileName::Mkdir() which is unambiguous.
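For example, a check based on wxFileName::Mkdir() reads the way you would expect (a sketch, assuming a reasonably recent wxWidgets; CreateDir is just a made-up helper name):
#include <wx/filename.h>

bool CreateDir(const wxString& newDir)
{
    if (!wxFileName::Mkdir(newDir, wxS_DIR_DEFAULT, wxPATH_MKDIR_FULL)) {
        // handle the error here - the return value really is a bool
        return false;
    }
    return true;
}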
</sad-story>

What are __builtin_ functions for in C++?

I am debugging a transactional processing system which is performance sensitive.
I found code which uses __builtin_memcpy and __builtin_memset instead of memcpy and memset.
What are __builtin_ functions for?
Is it to prevent dependency problems across architectures or compilers?
Or is there any performance reason why __builtin_ functions are preferred?
thank you :D
Traditional library functions: the standard memcpy is just a call to a function. Unfortunately, memcpy is often called for very small copies, and the overhead of calling a function, shuffling a few bytes and returning is quite significant (especially since memcpy adds extra code at the beginning of the function to deal with unaligned memory, loop unrolling, etc., to do well on LARGE copies).
So, for the compiler to optimise those, it needs to "know" how to do for example memcpy - the solution for this is to have a function "builtin" into the compiler, which then contains code such as this:
int generate_builtin_memcpy(expr arg1, expr arg2, expr size)
{
    if (is_constant(size) && eval(size) < SOME_NUMBER)
    {
        // ... do magic inline memory copy ...
    }
    else
    {
        // ... call the "real" memcpy ...
    }
}
[For retargetable compilers, there is typically one of these functions for each CPU architecture, that has different configurations as to what conditions the "real" memcpy gets called, or when an inline memcpy is used.]
The key here is that you MAY actually write your own memcpy function that ISN'T based on __builtin_memcpy(), which is ALWAYS a function and doesn't do the same thing as the normal memcpy [you'd be in a bit of trouble if you changed its behaviour a lot, since the C standard library probably calls memcpy in a few thousand places - but, for example, gathering statistics over how many times memcpy is called and what sizes are copied could be one such use-case].
Another big reason for using __builtin_* is that they provide code that would otherwise have to be written in inline assembler, or possibly not available at all to the programmer. Setting/getting special registers would be such a thing.
There are other techniques to solve this problem; for example, clang has a LibraryPass that recognizes calls to common library functions and replaces them with other alternatives: since printf is much "heavier" than puts, it replaces a suitable printf("constant string with no formatting\n") with puts("constant string with no formatting"), and many trigonometric and other math functions are resolved into simple constant values when called with constants, etc.
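A minimal example of that printf-to-puts rewrite (observable in the generated assembly with optimisation enabled on GCC and clang; the transformation is an optimisation detail, not something guaranteed by the standard):
#include <cstdio>

int main() {
    // constant string, trailing newline, no format specifiers:
    // the optimiser typically turns this into puts("constant string with no formatting")
    std::printf("constant string with no formatting\n");
}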
Calling __builtin_* directly for functions like memcpy or sin or some such is probably the WRONG thing to do - it just makes your code less portable and not at all certain to be faster. Calling __builtin_special_function when there is no other way to get at that functionality is typically the solution in some tricky situations - but you should probably wrap it in your own function, e.g.
int get_magic_property()
{
    return __builtin_get_magic_property();
}
That way, when you port to Windows, you can easily do:
int get_magic_property()
{
#if WIN32
    return Win32GetMagicPropertyEx();
#else
    return __builtin_get_magic_property();
#endif
}
__builtin_* functions are optimised functions provided by the compiler libraries. These might be builtin versions of standard library functions, such as memcpy, and perhaps more typically some of the maths functions.
Alternatively, they might be highly optimised functions for typical tasks on that particular target - e.g. a DSP might have built-in FFT functions.
Which functions are provided as __builtin_ are determined by the developers of the compiler, and will be documented in the manuals for the compiler.
Different CPU types and compilers are designed for different use cases, and this will be reflected in the range of built-in functions provided.
Built-in functions might make use of specialised instructions in the target processor, or might trade off accuracy for speed by using lookup tables rather than calculating values directly, or any other reasonable optimisation, all of which should be documented.
These are definitely not there to reduce dependency on a particular compiler or CPU; in fact quite the opposite: they actually add a dependency, and so they might be wrapped up in preprocessor checks, e.g.:
#ifdef SOME_CPU_FLAG
#define MEMCPY __builtin_memcpy
#else
#define MEMCPY memcpy
#endif
On a compiler note, __builtin_memcpy can fall back to emitting a memcpy function call. It also gives less-capable compilers the ability to simplify, by choosing the slow path of unconditionally emitting a memcpy call.
http://lwn.net/Articles/29183/
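Conversely, note that a plain memcpy call already gets the builtin treatment when optimisation is enabled on GCC and clang, which is one more reason not to spell out __builtin_memcpy yourself. A small sketch (Packet and copy_packet are made-up names):
#include <cstring>

struct Packet { char bytes[8]; };

Packet copy_packet(const char* src) {
    Packet p;
    // small, fixed-size copy: the compiler usually inlines this into a couple
    // of move instructions rather than emitting a call to memcpy
    std::memcpy(p.bytes, src, sizeof p.bytes);
    return p;
}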

Constexpr Math Functions

So I noticed from this page that none of the math functions in C++11 seem to make use of constexpr, whereas I believe all of them could. That leaves me with two questions: one, why did they choose not to make the functions constexpr? And two, for a function like sqrt I could probably write my own constexpr version, but something like sin or cos would be trickier, so is there a way around it?
Actually, because of old and annoying legacy, almost none of the math functions can be constexpr, since they all have the side-effect of setting errno on various error conditions, usually domain errors.
From "The C++ Programming Language (4th Edition)", by B. Stroustrup, describing C++11:
"To be evaluated at compile time, a function must be suitably simple: a constexpr function must consist of a single return-statement; no loops, and no local variables are allowed. Also, a constexpr function may not have side effects."
Which means that it must be inline, with no for or while loops, no if statements and no local variables. Side effects are also forbidden (e.g. changing errno). Another problem is that most math functions map to FPU instructions which are not represented in pure C/C++ (they are written in assembler code). That's why none of the cmath functions are declared constexpr.
So I noticed from this page that none of the math functions in C++11
seem to make use of constexpr, whereas I believe all of them could.
That leaves me with two questions: one, why did they choose
not to make the functions constexpr?
This part is very well answered by Sebastian Redl and Adam Szaj, so I won't be adding anything to it.
And two, for a function like sqrt I could probably write my own
constexpr version, but something like sin or cos would be trickier,
so is there a way around it?
Yes, you can write your own versions of constexpr sin and cos by using the Taylor series expansions of these functions. Have a look at this super cool GitHub repo which implements several mathematical functions as constexpr functions: Morwenn/static_math
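As a rough illustration (not taken from static_math; constexpr_sin, sin_impl and sin_term are made-up names): a constexpr sine built from a truncated Taylor series, written recursively with single return statements so it also satisfies the C++11 restrictions quoted above. Accuracy is only reasonable near zero; a real implementation would reduce the argument range first.
constexpr double sin_term(double term, double x, int k)
{
    // next Taylor term from the previous one: multiply by -x^2 / ((2k)(2k+1))
    return term * -x * x / ((2 * k) * (2 * k + 1));
}

constexpr double sin_impl(double x, double term, double sum, int k, int terms)
{
    return k == terms
        ? sum
        : sin_impl(x, sin_term(term, x, k), sum + sin_term(term, x, k), k + 1, terms);
}

constexpr double constexpr_sin(double x, int terms = 10)
{
    return sin_impl(x, x, x, 1, terms);   // series starts at the first term, x
}

static_assert(constexpr_sin(0.0) == 0.0, "evaluated at compile time");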