Detecting underflow during execution - c++

Is there any way to detect underflow automatically during execution?
Specifically, I believe there should be a compiler option to generate code that checks for underflow and similar flags right after mathematical operations that could cause them.
I'm talking about the G++ compiler.

C99/C++11 have floating point control functions (e.g. fetestexcept) and defined flags (including FE_UNDERFLOW) that should let you detect floating point underflow reasonably portably (i.e., with any compiler/library that supports these).
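For illustration, here's a minimal sketch using <cfenv> (the volatile qualifiers just keep the compiler from folding the computation away; a strictly conforming program would also want FENV_ACCESS, whose support varies by compiler):

// Minimal sketch: detect floating point underflow with the C99/C++11 <cfenv> facilities.
#include <cfenv>
#include <cstdio>

int main() {
    std::feclearexcept(FE_ALL_EXCEPT);        // start from a clean slate

    volatile double tiny = 1e-300;
    volatile double result = tiny * tiny;     // product underflows the double range

    if (std::fetestexcept(FE_UNDERFLOW))
        std::printf("underflow occurred, result = %g\n", (double)result);
    return 0;
}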
Though it's not as portable, gcc (via glibc) has feenableexcept, which lets you choose which floating point exceptions are trapped. When one of the exceptions you've enabled fires, your program receives a SIGFPE signal.
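A hedged sketch of that approach (feenableexcept is a glibc extension declared in <fenv.h> when _GNU_SOURCE is defined, which g++ normally defines for you):

// Sketch: turn floating point underflow into a SIGFPE (glibc extension, not standard C++).
#include <fenv.h>     // feenableexcept
#include <csignal>
#include <cstdio>
#include <cstdlib>

void onFpe(int) {
    // Returning from a SIGFPE handler is undefined here, so just bail out.
    std::_Exit(42);
}

int main() {
    std::signal(SIGFPE, onFpe);
    feenableexcept(FE_UNDERFLOW);             // unmask the underflow exception

    volatile double tiny = 1e-300;
    volatile double result = tiny * tiny;     // should now trap instead of quietly underflowing
    std::printf("%g\n", (double)result);      // not reached if the trap fires
    return 0;
}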
At least on most hardware, there's no equivalent for integer operations -- an underflow simply produces a 2's complement (or whatever) result and (for example) sets flags (e.g., the carry and sign bits) to signal what happened. C99/C++11 do have some flags for things like integer overflow, but I don't believe they're nearly as widely supported.

Related

What happens if "-ffast-math" is enabled when linking?

I use both gcc 10 and clang 12 on Ubuntu. I just found that if I enable the -ffast-math flag in my C++ project, I get roughly a 4x performance improvement.
However, if I only enable -ffast-math at compile time and not at link time, there will be no performance improvement. What does it mean to use -ffast-math when linking, and will it link to any special ffast-math libraries in the system?
P.S.: This performance improvement actually just makes the performance normal. I once asked a question about the poor performance of AVX instructions on an Intel processor. Now I can get normal performance as long as I use the -ffast-math flag to compile and link programs on Linux, but even if I use clang and -ffast-math on Windows, the performance is still poor. So I wonder whether I have linked against some special system library under Linux.
Turns out gcc does link in crtfastmath.o when -ffast-math is passed to the linker (an undocumented feature).
For x86, see https://github.com/gcc-mirror/gcc/blob/master/libgcc/config/i386/crtfastmath.c#L83; it sets the following CPU options:
#define MXCSR_DAZ (1 << 6) /* Enable denormals are zero mode */
#define MXCSR_FTZ (1 << 15) /* Enable flush to zero mode */
Denormalized floating point numbers are much slower to handle, so disabling them in the CPU makes floating point computations faster.
From Intel 64 and IA-32 Architectures Optimization Reference Manual:
6.5.3 Flush-to-Zero and Denormals-are-Zero Modes
The flush-to-zero (FTZ) and denormals-are-zero (DAZ) modes are not compatible with the IEEE Standard 754. They are provided to improve performance for applications where underflow is common and where the generation of a denormalized result is not necessary.
3.8.3.3 Floating-point Exceptions in SSE/SSE2/SSE3 Code
Most special situations that involve masked floating-point exceptions are handled efficiently in hardware. When a masked overflow exception occurs while executing SSE/SSE2/SSE3 code, processor hardware can handle it without performance penalty.
Underflow exceptions and denormalized source operands are usually treated according to the IEEE 754 specification, but this can incur significant performance delay. If a programmer is willing to trade pure IEEE 754 compliance for speed, two non-IEEE 754 compliant modes are provided to speed up situations where underflows and denormal inputs are frequent: FTZ mode and DAZ mode.
When the FTZ mode is enabled, an underflow result is automatically converted to a zero with the correct sign. Although this behavior is not compliant with IEEE 754, it is provided for use in applications where performance is more important than IEEE 754 compliance. Since denormal results are not produced when the FTZ mode is enabled, the only denormal floating-point numbers that can be encountered in FTZ mode are the ones specified as constants (read only).
The DAZ mode is provided to handle denormal source operands efficiently when running a SIMD floating-point application. When the DAZ mode is enabled, input denormals are treated as zeros with the same sign. Enabling the DAZ mode is the way to deal with denormal floating-point constants when performance is the objective.
If departing from the IEEE 754 specification is acceptable and performance is critical, run SSE/SSE2/SSE3 applications with FTZ and DAZ modes enabled.
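If you want the FTZ/DAZ effect without the rest of -ffast-math, you can set those MXCSR bits yourself; here's a sketch using the SSE intrinsics (x86 only, and it affects just the calling thread):

// Sketch: enable flush-to-zero and denormals-are-zero directly, mirroring what
// crtfastmath.o does at program startup.
#include <xmmintrin.h>   // _MM_SET_FLUSH_ZERO_MODE
#include <pmmintrin.h>   // _MM_SET_DENORMALS_ZERO_MODE

void enableFtzDaz() {
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);          // underflowing results become +/-0
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);  // denormal inputs treated as +/-0
}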

Floating point operations in interrupt handler (PowerPC, VxWorks)

I haven't found any resources that exactly answer what I am trying to understand with an issue I saw in a piece of software I am working on, so I'll ask the geniuses here!
For starters, I'm running with VxWorks on a PowerPC processor.
In trying to debug a separate issue, I tried throwing some quick and dirty debug code into an interrupt handling routine. It involved a double precision floating point operation to store a value of interest (namely, how long it had been since I saw the last interrupt come in), which I used later outside the handler in my running thread. I didn't see a problem with this (sure, it takes longer, but time-wise I had plenty; the interrupts aren't coming in too quickly), however VxWorks sure didn't like it. It consistently crashes when it reaches that code, one of the bad crashes that reboots the system. It took me a bit to track down the double operation as the source of the issue, and I realized it's not even double "operations": even returning a constant double from a routine called in the interrupt failed miserably.
On PowerPC (or other architectures in general) are there generally issues doing floating point operations in interrupt handlers and returning floating point (or other type) values in functions called by an interrupt handler? I'm at a loss for why this would cause a program to crash.
(The workaround was to delay the conversion of "ticks" since the last interrupt to "time" since the last interrupt until the code is out of the handler, since it seems to handle long integer operations just fine.)
In VxWorks, each task that utilises floating point has to be specified as such in the task creation so that the FP registers are saved during context switches, but only when switching from tasks that use floating point. This allows non-floating point tasks to have faster context switch times.
When an interrupt pre-empts a floating point task however, it is most likely the case that FP registers are not saved. To do so, the interrupt handler would need to determine what task was pre-empted and whether it had been specified as a floating point task; this would make the interrupt latency both higher and variable, which is generally undesirable in a real-time system.
So to make it work, any interrupt routine using floating point must explicitly save and restore the FP registers itself. Any task that uses floating point must be specified as such in any case, though you can get away without doing so if you only have one such task.
If a floating-point task is pre-empted, your interrupt will modify floating point register values in use by that task. The result when the FP task resumes is non-deterministic, but can include a floating point exception - for example, if a previously non-zero register becomes zero and is subsequently used as the right-hand side of a division.
It seems to me however that in this case the floating point operation is probably entirely unnecessary. Your "workaround" is in fact the conventional, safest and most deterministic method, and should probably be regarded as a correction of your design rather than a workaround.
Does your ISR call the fppSave()/fppRestore() functions?
If it doesn't, then the ISR is stomping on FP registers that might be in use by existing tasks.
Specifically, FP registers are used by the C++ compiler on the PPC architecture (I think dealing with throw/catch).
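For illustration, a hedged sketch of what that looks like (assuming the fppLib API, fppSave()/fppRestore() and the FP_CONTEXT type; availability and exact usage depend on your VxWorks version and BSP):

// Sketch only: an ISR that must touch FP brackets the work with an explicit
// save/restore of the floating point context it might otherwise clobber.
#include <fppLib.h>    // fppSave(), fppRestore(), FP_CONTEXT (VxWorks)

static FP_CONTEXT isrFpContext;   // static storage; don't allocate inside an ISR

void myIsr(void)
{
    fppSave(&isrFpContext);       // preserve the pre-empted task's FP registers

    /* ... the floating point debug work goes here ... */

    fppRestore(&isrFpContext);    // restore them before returning from the ISR
}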
In VxWorks, at least on the PPC architectures, a floating point operation in an ISR will cause an FP Unavailable exception. This is because when an interrupt occurs, the FP bit in the MSR is cleared, since VxWorks assumes that there will be no FP operations. This speeds up ISR/task context switching because the FP registers do not have to be saved/restored.
That being said, there was a time when we had some debug code that needed FP operations in the interrupt context. We changed the VxWorks code that calls the specific ISR to set MSR[FP], do an fpsave call, call the ISR, do an fprestore call, then clear MSR[FP]. This got us around the problem.
That being said, I agree with the rest of the folks here that FP operations should not be used in an ISR context, because ISRs should be fast and FP operations typically are not.
I have worked with the e300 core while developing bare-metal applications, and I can say that when an interrupt occurs, the core disables the FPU, which you can observe by checking the FP bit of the MSR. Before doing anything with the floating point registers, you must re-enable the FPU by writing 1 to the FP bit of the MSR. Then you can operate on the FPU registers in the ISR as you wish.
The general assumption in VxWorks is that Floating Point registers don't need to be saved and restored by ISRs. Primarily because ISRs usually don't mess with them. Historically, most real-time tasks didn't do FP either, but that's obviously changed. What's not obvious is that many tasks that don't explicitly use floating point nevertheless use the floating point registers. I believe that any task with code written in C++ uses the floating point registers (at least on some processors/compilers), even though no floating point operations are obvious. Such tasks should be given the FP_? (I forget the exact spelling) task attribute, causing their FP regs to be saved during context switches.
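For reference, a sketch of what that looks like at task creation; the option is VX_FP_TASK in the VxWorks versions I've used (check taskLib.h on your target), and the other arguments here are placeholder values:

// Sketch: spawn a task whose FP registers are saved/restored on context switch.
#include <vxWorks.h>   // FUNCPTR
#include <taskLib.h>   // taskSpawn(), VX_FP_TASK

void fpWork(void);     // the task entry point that uses floating point

void spawnFpTask(void)
{
    taskSpawn((char *)"tFpWork",      // task name
              100,                    // priority (placeholder)
              VX_FP_TASK,             // save/restore FP registers for this task
              8192,                   // stack size (placeholder)
              (FUNCPTR)fpWork,        // entry point
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
}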
I think you will find this article interesting. Maybe you are getting into a floating point exception.
I never used PowerPC, but I'm good with Google :P

Floating point calculation change depending on the compiler

When I run the exact same code performing the exact same floating point calculation (using doubles) compiled on Windows and Solaris, I get slightly different results.
I am aware that the results are not exact due to rounding errors. However, I would have expected the rounding errors to be platform-independent, thereby giving me the same (slightly incorrect) result on both platforms, which is not the case.
Is this normal, or do I have another problem in my code?
On x86, most calculations traditionally happen with 80-bit x87 quantities, unless otherwise forced to double precision. Most other architectures I know of do all calculations in double precision (again, unless otherwise overridden).
I don't know if you're running Solaris on SPARC or x86, but if the former, then I highly suspect that to be the cause of the difference.
The subject of your question suggests that it might depend on the compiler. It might, but the fact that you are running on different hardware (assuming your Solaris is not x86) suggests a much more likely reason for the difference - the difference in hardware itself.
Different hardware platforms might use completely different hardware devices (FPUs, CPUs) to perform floating-point calculations, arriving at different results.
Moreover, the FPUs are often configurable through persistent settings, like the infinity model, rounding mode, etc. Different hardware might have a different default setup. The compiler will normally generate code that initializes the FPU at program startup, so that initial setup can differ as well.
Finally, different implementations of C++ language might implement floating-point semantics differently, so you might even get different results from different C++ compilers of the same hardware.
I believe that under Windows/x86, your code will run with the x87 precision already set to 53 bits (double precision), though I'm not sure exactly when this gets set. On Solaris/x86 the x87 FPU is likely to be using its default precision of 64 bits (extended precision), hence the difference.
There's a simple check you can do to detect which precision (53 bits or 64 bits) is being used: try computing something like 1e16 + 2.9999, while being careful to avoid compiler constant-folding optimizations (e.g., define a separate add function to do the addition, and turn off any optimizations that might inline functions). When using 53-bit precision (SSE2, or x87 in double-precision mode) this gives 1e16 + 2; when using 64-bit precision (x87 in extended precision mode) this gives 1e16 + 4. The latter result comes from an effect called 'double rounding', where the result of the addition is rounded first to 64 bits, and then to 53 bits. (Doing this calculation directly in Python, I get 1e16 + 4 on 32-bit Linux, and 1e16+2 on Windows, for exactly the same reason.)
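Here's a sketch of that check (the addition goes in a deliberately non-inlined function so the compiler doesn't fold it at compile time; the __attribute__ is a GCC/Clang-ism):

// Sketch: probe the effective floating-point precision via double rounding.
#include <cstdio>

__attribute__((noinline))      // keep the sum from being constant-folded
double add(double a, double b) { return a + b; }

int main() {
    double r = add(1e16, 2.9999);
    // Prints 10000000000000002 under 53-bit precision (SSE2, or x87 in double-precision mode),
    // and 10000000000000004 under 64-bit x87 extended precision, due to double rounding.
    std::printf("%.17g\n", r);
    return 0;
}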
Here's a very nice article (that goes significantly beyond the oft-quoted Goldberg's "What every computer scientist should know...") that explains some of the problems arising from the use of the x87 FPU:
http://hal.archives-ouvertes.fr/docs/00/28/14/29/PDF/floating-point-article.pdf

C++ & GCC: How Does GCC's C++ Implementation Handle Division by Zero?

Just out of interest: how does GCC's C++ implementation handle its standard number types being divided by zero?
I'm also interested in hearing about how other compilers work in relation to zero division.
Feel free to go into detail.
This is not purely for entertainment as it semi-relates to a uni assignment.
Cheers, Chaz
It doesn't. What usually happens is that the CPU will raise an internal exception of some sort when a divide instruction has 0 for the divisor, which triggers an interrupt handler that reads the status of the various registers on the CPU and handles it, usually by converting it into a signal that is sent back to the program and handled by any registered signal handlers. On most Unix-like OSes, that signal is SIGFPE.
While the behavior can vary (for instance, on some CPUs you can tell the CPU not to raise an exception, in which case they generally just produce some clamped value like 0 or MAXINT), that variation is generally due to differences in the OS, CPU and runtime environment, not the compiler.
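For illustration, a small sketch of the two cases on a typical Linux/x86 system (note that integer division by zero is undefined behaviour in C++; the SIGFPE shown here is what usually happens on this platform, not something the language guarantees):

// Sketch: floating point vs. integer division by zero.
#include <csignal>
#include <cstdio>
#include <cstdlib>

void onFpe(int) {
    // Returning from this handler would re-execute the faulting instruction, so exit instead.
    std::_Exit(1);
}

int main() {
    // Floating point: no trap with the default (masked) FP environment;
    // you just get an IEEE 754 infinity.
    volatile double fx = 1.0, fy = 0.0;
    std::printf("1.0 / 0.0 = %g\n", fx / fy);   // prints "inf"

    // Integer: undefined behaviour; on x86/Linux the divide instruction faults
    // and the process receives SIGFPE.
    std::signal(SIGFPE, onFpe);
    volatile int ix = 1, iy = 0;
    std::printf("%d\n", ix / iy);               // typically never reached
    return 0;
}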

Float values behaving differently across the release and debug builds

My application is generating different floating point values when I compile it in release mode and in debug mode. The only reason I found out is that I save a binary trace log, and the one from the release build is ever so slightly off from the debug build; it looks like the bottom two bits of the 32-bit float values are different in about half of the cases.
Would you consider this "difference" to be a bug, or would this type of difference be expected? Would this be a compiler bug or an internal library bug?
For example:
LEFTPOS and SPACING are defined floating point values.
float def_x;
int xpos;
def_x = LEFTPOS + (xpos * (SPACING / 2));
The issue is in regards to the X360 compiler.
Release mode may have a different FP strategy set. There are different floating point arithmetic modes depending on the level of optimization you'd like. MSVC, for example, has strict, fast, and precise modes.
I know that on PC, the x87 floating point registers are 80 bits wide. So if a calculation is done entirely within the FPU, you get the benefit of 80 bits of precision. On the other hand, if an intermediate result is moved out of the FPU into a 32-bit float variable and back, it gets rounded to 32 bits, which gives different results.
Now consider that a release build will have optimisations which keep intermediate results in FPU registers, whereas a debug build will probably naively copy intermediate results back and forward between memory and registers - and there you have your difference in behaviour.
I don't know whether this happens on X360 too or not.
It's not a bug. Any floating point operation has a certain imprecision. In release mode, optimization will change the order of the operations and you'll get a slightly different result. The difference should be small, though. If it's big, you might have other problems.
I helped a co-worker find a compiler switch that was different in release vs. debug builds that was causing his differences.
Take a look at /fp (Specify Floating-Point Behavior).
In addition to the different floating-point modes others have pointed out, SSE or similar vector optimizations may be turned on for release. Converting floating-point arithmetic from the standard registers to vector registers can have an effect on the lower bits of your results, as the vector registers generally hold narrower (fewer-bit) values than the standard floating-point registers.
Not a bug. This type of difference is to be expected.
For example, some platforms have float registers that use more bits than are stored in memory, so keeping a value in the register can yield a slightly different result compared to storing to memory and re-loading from memory.
This discrepancy may very well be caused by the compiler optimization, which is typically done in the release mode, but not in debug mode. For example, the compiler may reorder some of the operations to speed up execution, which can conceivably cause a slight difference in the floating point result.
So, I would say most likely it is not a bug. If you are really worried about this, try turning on optimization in the Debug mode.
Like others mentioned, floating point registers have higher precision than floats, so the accuracy of the final result depends on the register allocation.
If you need consistent results, you can make the variables volatile, which will result in slower, less precise, but consistent results.
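A sketch of that idea applied to the expression from the question (the LEFTPOS and SPACING values here are placeholders; the point is that each volatile intermediate is forced out to a 32-bit float, so debug and release are far more likely to agree):

// Sketch: force intermediates through 32-bit floats for consistency across builds.
const float LEFTPOS = 10.0f;    // placeholder value
const float SPACING = 3.5f;     // placeholder value

float compute_def_x(int xpos) {
    volatile float half_spacing = SPACING / 2;      // rounded to float here
    volatile float offset = xpos * half_spacing;    // and again here
    return LEFTPOS + offset;
}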
If you set a compiler switch that allows the compiler to reorder floating-point operations (e.g. /fp:fast), then obviously it's not a bug.
If you didn't set any such switch, then it's a bug -- the C and C++ standards don't allow the compilers to reorder operations without your permission.