Number divide by zero is a hardware exception - C++

I learnt while studying C++ exception handling that dividing a number by zero is a hardware exception. Can anybody explain why it is called a hardware exception?

Because it is not an exception in the C++ sense. Usually, in the C++ world, we use the term "hardware trap" to avoid any ambiguity, but "hardware exception" can also be used. Basically, the hardware triggers something which causes you to land in the OS.
And not all systems will generate a hardware trap for divide by 0. I've worked on one where you just got the largest possible value as a result, and execution carried on.

The C++ Standard itself considers divide by zero to be Undefined Behaviour, but as usual an implementation can provide Implementation Defined Behaviour if it likes.
C++20 stipulations:
7.1.4 If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined. [Note: Treatment of division by zero, forming a remainder using a zero divisor, and all floating-point exceptions varies among machines, and is sometimes adjustable by a library function. — end note]
Typically in practice, your CPU will check for divide by zero, and historically different CPU manufacturers have used different terminology for the resulting CPU behaviour: some call it an "interrupt", others a "trap", "signal", "exception", "fault", or "abort". CPU designers don't tend to care about, or avoid clashes with, anything but their own hardware and assembly-language terminology.
Regardless, even if called a "hardware exception", it's nothing to do with C++ exceptions in the try/catch sense.
On an Intel CPU, for example, a divide by zero will cause the CPU to spontaneously save a minimal set of registers on the stack, then call a handler whose address must have been placed at a specific memory location beforehand.
It's up to the OS/executable to pick/override with some useful behaviour, and while some C++ compilers do specifically support interception of these events and generation of C++ Exceptions, it's not a feature mentioned by the C++ Standard, nor widely portable. The general expectation is that you'll either write a class that checks consistently, or perform ad-hoc checks before divisions that might fail.
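For example, a minimal sketch of the latter approach (checked_divide is just an illustrative name, not a standard facility):

#include <climits>
#include <stdexcept>

// Throws a C++ exception instead of letting the hardware trap fire.
int checked_divide(int numerator, int denominator)
{
    if (denominator == 0)
        throw std::domain_error("division by zero");
    if (numerator == INT_MIN && denominator == -1)
        throw std::overflow_error("INT_MIN / -1 overflows");
    return numerator / denominator;
}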

This is a hardware exception because it's detected by the CPU.
Your code in C/C++ or any other language is compiled down to CPU instructions and then executed by the CPU, so only the CPU can find out that you divided by zero.

Whether you get an exception or not depends on your processor. Fixed-point and floating-point division can also behave differently. To be compliant, the IEEE floating-point spec provides both an exception and a non-exception behaviour for divide by zero: if the FPU has that exception disabled (masked), you get the "properly signed infinity" as the result; otherwise the exception fires and what you get back depends on the trap handler.
The programmer's reference manual for a particular processor should hopefully discuss fixed-point divide-by-zero behaviour, if the processor has a divide instruction at all. If not, it is a soft divide, and it is up to the compiler's support library what it does (calling a divide-by-zero handler, for example).
It would be called a hardware exception in general because the hardware is detecting the problem, and the hardware does something as a result. The same goes for other problems like MMU access faults, data aborts, prefetch aborts, etc.: it is a hardware exception because it is an exception detected and handled by the hardware, generally.

Because, if it is checked at all, it is checked and raised by the hardware. Specifically, the arithmetic logic unit (ALU) of your CPU checks for 0 as the divisor and generates an appropriate interrupt to signal the exception.
Otherwise, you would have to explicitly check for 0 in the assembler source code.
Edit: Note that this applies to integer division only, since floating-point division has specific states to signal a division by zero.

Related

Why doesn't C++ automatically throw an exception on arithmetic overflow?

The C++ Standard at some point states that:
5 Expressions [expr]
...
If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined. [ Note: most existing implementations of C++ ignore integer overflows...]
I'm trying to understand why most(all?) implementations choose to ignore overflows rather than doing something like throwing an std::overflow_error exception. Is it not to incur any additional runtime cost? If that's the case, can't the underlying arithmetic processing hardware be used to do that check for free?
If that's the case, can't the underlying arithmetic processing hardware be used to do that check for free?
Raising an exception always has a cost. But perhaps some architectures can guarantee that when an exception is not raised, then the check is free.
However, C++ is designed to be efficiently implementable on a wide range of architectures. It would violate the design principles of C++ to mandate checking for integer overflow, unless all architectures could support such checks with zero cost in all cases where overflow does not occur. This is not the case.
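For example, on compilers that provide them (GCC and Clang do), the overflow-checking builtins let you opt in to the check yourself; this sketch is one way to do it, not anything the Standard mandates:

#include <stdexcept>

int checked_add(int a, int b)
{
    int result;
    if (__builtin_add_overflow(a, b, &result))   // true if the mathematical sum doesn't fit in an int
        throw std::overflow_error("integer overflow in checked_add");
    return result;
}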

On which platforms does integer divide by zero trigger a floating point exception?

In another question, someone was wondering why they were getting a "floating point error" when in fact they had an integer divide-by-zero in their C++ program. A discussion arose around this, with some asserting that floating point exceptions are in fact never raised for float divide by zero, but only arise on integer divide by zero.
This sounds strange to me, because I know that:
MSVC-compiled code on x86 and x64 on all Windows platforms reports an int divide by zero as "0xc0000094: Integer division by zero", and float divide by zero as 0xC000008E "Floating-point division by zero" (when enabled)
IA-32 and AMD64 ISAs specify #DE (integer divide exception) as interrupt 0. Floating-point exceptions trigger interrupt 16 (x87 floating-point) or interrupt 19 (SIMD floating-point).
Other hardware have similarly different interrupts (eg PPC raises 0x7000 on float-div-by-zero and doesn't trap for int/0 at all).
Our application unmasks floating-point exceptions for divide-by-zero with the _controlfp_s intrinsic (ultimately stmxcsr op) and then catches them for debugging purposes. So I have definitely seen IEEE754 divide-by-zero exceptions in practice.
So I conclude that there are some platforms that report int exceptions as floating point exceptions, such as x64 Linux (raising SIGFPE for all arithmetic errors regardless of ALU pipe).
What other operating systems (or C/C++ runtimes if you are the operating system) report integer div-by-zero as a floating point exception?
I'm not sure how the current situation came to be, but FP exception detection support is currently very different from integer exception detection. It's common for integer division to trap. POSIX requires it to raise SIGFPE if it raises an exception at all.
However, you can sort out what kind of SIGFPE it was, to see that it was actually a division exception. (Not necessarily divide-by-zero, though: 2's complement INT_MIN / -1 division traps, and x86's div and idiv also trap when the quotient of 64b/32b division doesn't fit in the 32b output register. But that's not the case on AArch64 using sdiv.)
The glibc manual explains that BSD and GNU systems deliver an extra argument to the SIGFPE signal handler, which will be FPE_INTDIV_TRAP for integer divide by zero. POSIX documents FPE_INTDIV as a possible value for siginfo_t's int si_code member, on systems where siginfo_t includes that member.
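A minimal sketch of sorting that out on a POSIX system (the exact delivery is assumed here for x86-64 Linux): install a SIGFPE handler with SA_SIGINFO and inspect si_code:

#include <signal.h>
#include <string.h>
#include <unistd.h>

static void fpe_handler(int, siginfo_t *info, void *)
{
    const char *msg = info->si_code == FPE_INTDIV ? "integer divide by zero\n"
                    : info->si_code == FPE_FLTDIV ? "floating-point divide by zero\n"
                    :                               "other arithmetic exception\n";
    write(STDERR_FILENO, msg, strlen(msg));   // async-signal-safe output
    _exit(1);   // returning would re-execute the faulting division forever
}

int main()
{
    struct sigaction sa = {};
    sa.sa_sigaction = fpe_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGFPE, &sa, nullptr);

    volatile int zero = 0;   // volatile so the compiler can't fold the division away
    return 1 / zero;         // traps and is delivered as SIGFPE with si_code == FPE_INTDIV
}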
IDK if Windows delivers a different exception in the first place, or if it bundles things into different flavours of the same arithmetic exception like Unix does. If so, the default handler decodes the extra info to tell you what kind of exception it was.
POSIX and Windows both use the phrase "division by zero" to cover all integer division exceptions, so apparently this is common shorthand. For people who do know about INT_MIN / -1 (with 2's complement) being a problem, the phrase "division by zero" can be taken as synonymous with a divide exception. The phrase immediately points out the common case for people who don't know why integer division might be a problem.
FP exception semantics
FP exceptions are masked by default for user-space processes in most operating systems / C ABIs.
This makes sense, because IEEE floating point can represent infinities, and has NaN to propagate the error to all future calculations using the value.
0.0/0.0 => NaN
If x is finite: x/0.0 => +/-Inf with the sign of x
This even allows things like this to produce a sensible result when exceptions are masked:
double x = 0.0;
double y = 1.0/x; // y = +Inf
double z = 1.0/y; // z = 1/Inf = 0.0, no FP exception
FP vs. integer error detection
The FP way of detecting errors is pretty good: when exceptions are masked, they set a flag in the FP status register instead of trapping. (e.g. x86's MXCSR for SSE instructions). The flag stays set until manually cleared, so you can check once (after a loop for example) to see which exceptions happened, but not where they happened.
There have been proposals for having similar "sticky" integer-overflow flags to record if overflow happened at any point during a sequence of computations. Allowing integer division exceptions to be masked would be nice in some cases, but dangerous in other cases (e.g. in an address calculation, you should trap instead of potentially storing to a bogus location).
On x86, though, detecting if integer overflow happened during a sequence of calculations requires putting a conditional branch after every one of them, because flags are just overwritten. MIPS has an add instruction that will trap on signed overflow, and an unsigned instruction that never traps. So integer exception detection and handling is a lot less standardized.
Integer division doesn't have the option of producing NaN or Inf results, so it makes sense for it to work this way.
Any integer bit pattern produced by integer division will be wrong, because it will represent a specific finite value.
However, on x86, converting an out-of-range floating point value to integer with cvtsd2si or any similar conversion instruction produces the "integer indefinite" value if the "floating-point invalid" exception is masked. The value is all-zero except the sign bit. i.e. INT_MIN.
(See the Intel manuals; links are in the x86 tag wiki.)
What other operating systems (or C/C++ runtimes if you are the operating system) report integer div-by-zero as a floating point exception?
The answer depends on whether you are in kernel space or user space. If you are in kernel space, you can put "i / 0" in kernel_main(), have your interrupt handler call an exception handler and halt your kernel. If you're in user space, the answer depends on your operating system and compiler settings.
AMD64 hardware specifies integer divide by zero as interrupt 0, different from interrupt 16 (x87 floating-point exception) and interrupt 19 (SIMD floating-point exception).
The "Divide-by-zero" exception is for dividing by zero with the div instruction. Discussing the x87 FPU is outside the scope of this question.
Other hardware have similarly different interrupts (eg PPC raises 0x7000 on float-div-by-zero and doesn't trap for int/0 at all).
More specifically, 00700 is mapped to exception type "Program", which includes a floating-point enabled exception. Such an exception is raised if trying to divide-by-zero using a floating point instruction.
On the other hand, integer divide by zero simply leaves the result undefined per the PPC PEM:
8-53 divw
If an attempt is made to perform either of the divisions — 0x8000_0000 ÷ –1 or <anything> ÷ 0 — then the contents of rD are undefined, as are the contents of the LT, GT, and EQ bits of the CR0 field (if Rc = 1). In this case, if OE = 1 then OV is set.
Our application unmasks floating-point exceptions for divide-by-zero with the _controlfp_s intrinsic (ultimately stmxcsr op) and then catches them for debugging purposes. So I have definitely seen IEEE754 divide-by-zero exceptions in practice.
I think your time is better spent catching divide by zero at compile-time rather than at run-time.
For userspace, this happens on AIX running on POWER, HP-UX running on PA-RISC, Linux running on x86-64, macOS running on x86-64, Tru64 running on Alpha and Solaris running on SPARC.
Avoiding divides-by-zero at compile time is much better.

Integer vs floating division -> Who is responsible for providing the result?

I've been programming in C++ for a while, but I suddenly had a doubt and wanted to clarify it with the Stack Overflow community.
When an integer is divided by another integer, we all know the result is an integer, and likewise, a float divided by a float is also a float.
But who is responsible for providing this result? Is it the compiler or the DIV instruction?
That depends on whether or not your architecture has a DIV instruction. If your architecture has both integer and floating-point divide instructions, the compiler will emit the right instruction for the case specified by the code. The language standard specifies the rules for type promotion and whether integer or floating-point division should be used in each possible situation.
If you have only an integer divide instruction, or only a floating-point divide instruction, the compiler will inline some code or generate a call to a math support library to handle the division. Divide instructions are notoriously slow, so most compilers will try to optimize them out if at all possible (eg, replace with shift instructions, or precalculate the result for a division of compile-time constants).
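For instance, a division by a power-of-two constant is typically strength-reduced to a shift; this tiny function only illustrates what optimizers commonly emit, not a guarantee:

unsigned divide_by_8(unsigned x)
{
    return x / 8;   // usually compiled to a single right shift: x >> 3
}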
Hardware divide instructions almost never include conversion between integer and floating point. If you get divide instructions at all (they are sometimes left out, because a divide circuit is large and complicated), they're practically certain to be "divide int by int, produce int" and "divide float by float, produce float". And it'll usually be that both inputs and the output are all the same size, too.
The compiler is responsible for building whatever operation was written in the source code, on top of these primitives. For instance, in C, if you divide a float by an int, the compiler will emit an int-to-float conversion and then a float divide.
(Wacky exceptions do exist. I don't know, but I wouldn't put it past the VAX to have had "divide float by int" type instructions. The Itanium didn't really have a divide instruction, but its "divide helper" was only for floating point; you had to fake integer divide on top of float divide!)
The compiler will decide at compile time what form of division is required based on the types of the variables being used - at the end of the day a DIV (or FDIV) instruction of one form or another will get involved.
Your question doesn't really make sense. The DIV instruction doesn't do anything by itself. No matter how loud you shout at it, even if you try to bribe it, it doesn't take responsibility for anything.
When you program in a programming language [X], it is the sole responsibility of the [X] compiler to make a program that does what you described in the source code.
If a division is requested, the compiler decides how to make a division happen. That might happen by generating the opcode for the DIV instruction, if the CPU you're targeting has one. It might be by precomputing the division at compile-time, and just inserting the result directly into the program (assuming both operands are known at compile-time), or it might be done by generating a sequence of instructions which together emulate a division.
But it is always up to the compiler. Your C++ program doesn't have any effect unless it is interpreted according to the C++ standard. If you interpret it as a plain text file, it doesn't do anything. If your compiler interprets it as a Java program, it is going to choke and reject it.
And the DIV instruction doesn't know anything about the C++ standard. A C++ compiler, on the other hand, is written with the sole purpose of understanding the C++ standard, and transforming code according to it.
The compiler is always responsible.
One of the most important rules in the C++ standard is the "as if" rule:
The semantic descriptions in this International Standard define a parameterized nondeterministic abstract machine. This International Standard places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.
Which in relation to your question means it doesn't matter what component does the division, as long as it gets done. It may be performed by a DIV machine code, it may be performed by more complicated code if there isn't an appropriate instruction for the processor in question.
It can also:
* Replace the operation with a bit-shift operation if appropriate and likely to be faster.
* Replace the operation with a literal if computable at compile-time, or with an assignment if, e.g., when processing x / y it can be shown at compile time that y will always be 1.
* Replace the operation with an exception throw if it can be shown at compile time that it will always be an integer division by zero.
Practically
The C99 standard defines: "When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded." A footnote adds that "this is often called 'truncation toward zero.'"
History
Historically, the language specification is responsible.
Pascal defines its operators so that using / for division always returns a real (even if you use it to divide 2 integers), and if you want to divide integers and get an integer result, you use the div operator instead. (Visual Basic has a similar distinction and uses the \ operator for integer division that returns an integer result.)
In C, it was decided that the same distinction should be made by casting one of the integer operands to a float if you wanted a floating point result. It's become convention to treat integer versus floating point types the way you describe in many C-derived languages. I suspect this convention may have originated in Fortran.
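So in C and C++ it is the operand types, not the operator, that select which division the compiler emits; a small sketch:

#include <iostream>

int main()
{
    int a = 5, b = 2;
    std::cout << a / b << '\n';                       // 2   -- integer division
    std::cout << static_cast<double>(a) / b << '\n';  // 2.5 -- b is converted, floating division
}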

Dealing with Floating Point exceptions

I am not sure how to deal with floating point exceptions in either C or C++. From wiki, there are following types of floating point exceptions:
IEEE 754 specifies five arithmetic errors that are to be recorded in "sticky bits" (by default; note that trapping and other alternatives are optional and, if provided, non-default).
* inexact, set if the rounded (and returned) value is different from the mathematically exact result of the operation.
* underflow, set if the rounded value is tiny (as specified in IEEE 754) and inexact (or maybe limited to if it has denormalisation loss, as per the 1984 version of IEEE 754), returning a subnormal value (including the zeroes).
* overflow, set if the absolute value of the rounded value is too large to be represented (an infinity or maximal finite value is returned, depending on which rounding is used).
* divide-by-zero, set if the result is infinite given finite operands (returning an infinity, either +∞ or −∞).
* invalid, set if a real-valued result cannot be returned (like for sqrt(−1), or 0/0), returning a quiet NaN.
When any of the above exceptions happens, will the program exit abnormally? Or will the program carry the error on without mentioning anything, making it hard to debug?
Is a compiler like gcc able to give warning for some obvious case?
What can I do while coding my program to be notified where the error happens and what type it is, so that I can locate it easily in my code? Please give solutions for both C and C++.
Thanks and regards!
There are many options, but the general and also the default philosophy introduced by 754 is to not trap but to instead produce special results such as infinities that may or may not show up in important results.
As a result, the functions that test the state of individual operations are not used as often as the functions that test the representations of results.
See, for example...
LIST OF FUNCTIONS
Each of the functions that use floating-point values are provided in single, double, and extended precision; the double precision prototypes are listed here. The man pages for the individual functions provide more details on their use, special cases, and prototypes for their single and extended precision versions.
int fpclassify(double)
int isfinite(double)
int isinf(double)
int isnan(double)
int isnormal(double)
int signbit(double)
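For example, a minimal sketch of testing the representation of a result with those classification functions, rather than trapping:

#include <cmath>
#include <cstdio>

int main()
{
    volatile double zero = 0.0;   // volatile keeps the compiler from folding the divisions
    double x = 1.0 / zero;        // +Inf with the default (masked) IEEE 754 behaviour
    double y = zero / zero;       // quiet NaN
    std::printf("isinf(x)=%d isnan(y)=%d\n", std::isinf(x), std::isnan(y));
}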
Update:
For anyone who really thinks FPU ops generate SIGFPE in a default case these days, I would encourage you to try this program. You can easily generate underflow, overflow, and divide-by-zero. What you will not generate (unless you run it on the last surviving VAX or a non-754 RISC) is SIGFPE:
#include <stdio.h>
#include <stdlib.h>
/* usage: ./a.out <x> <y> -- prints x / y; assumes two numeric arguments */
int main(int ac, char **av) { return ac < 3 ? 1 : printf("%f\n", atof(av[1]) / atof(av[2])); }
On Linux you can use the GNU extension feenableexcept (hidden right at the bottom of that page) to turn on trapping on floating point exceptions - if you do this then you'll receive the signal SIGFPE when an exception occurs which you can then catch in your debugger. Watch out though as sometimes the signal gets thrown on the floating point instruction after the one that's actually causing the problem, giving misleading line information in the debugger!
On Windows with Visual C++, you can control which floating-point exceptions are unmasked using _control87() etc.. Unmasked floating-point exceptions generate structured exceptions, which can be handled using __try/__except (and a couple of other mechanisms). This is all completely platform-dependent.
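A hedged sketch of that approach (the _controlfp_s, _EM_ZERODIVIDE/_MCW_EM and SEH names are the documented MSVC ones; the exact exception code you see for SSE code on x64 can vary):

#include <float.h>
#include <stdio.h>
#include <windows.h>

int main()
{
    unsigned int cw;
    _controlfp_s(&cw, 0, 0);                           // read the current FP control word
    _controlfp_s(&cw, cw & ~_EM_ZERODIVIDE, _MCW_EM);  // clear the mask bit -> unmask divide-by-zero

    volatile double zero = 0.0;
    __try {
        volatile double x = 1.0 / zero;   // now raises a structured exception instead of +Inf
        (void)x;
    }
    __except (GetExceptionCode() == EXCEPTION_FLT_DIVIDE_BY_ZERO ||
              GetExceptionCode() == STATUS_FLOAT_MULTIPLE_TRAPS   // how x64/SSE code may report it
                  ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
        printf("caught floating-point divide by zero\n");
    }
    return 0;
}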
If you leave floating point exceptions masked, another platform-dependent approach to detecting these conditions is to clear the floating-point status using _clear87() etc., perform computations, and then query the floating-point status using _status87() etc..
Is any of this any better than DigitalRoss's suggestion of checking the result? In most cases, it's not. If you need to detect (or control) rounding (which is unlikely), then maybe?
On Windows with Borland/CodeGear/Embarcadero C++, some floating-point exceptions are unmasked by default, which often causes problems when using third-party libraries that were not tested with floating-point exceptions unmasked.
Different compilers handle these errors in different ways.
Inexactness is almost always the result of division of numbers with an absolute value greater than one (perhaps through transcendental functions). Adding, subtracting and multiplying numbers with an absolute value > 1.0 can only result in overflow.
Underflow doesn't occur very often, and probably won't be a concern in normal calculations except for iterated functions such as Taylor series.
Overflow is a problem that can usually be detected by some sort of "infinity" comparison, different compilers vary.
Divide by zero is quite noticeable, since your program will (should) crash if you don't have an error handler. Checking dividends and divisors will help avoid the problem.
Invalid results are usually caught without special error handlers, with some sort of DOMAIN error printed.
[EDIT]
This might help: (Numerical Computation Guide by Sun)
http://docs.sun.com/source/806-3568/
C99 introduced functions for handling floating point exceptions. Prior to a floating point operation, you can use feclearexcept() to clear any outstanding exceptions. After the operation(s), you can then use fetestexcept() to test which exception flags are set.
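A minimal sketch of that pattern (also available to C++ via <cfenv>; compiler support for #pragma STDC FENV_ACCESS varies, so treat the flag checks as best-effort):

#include <fenv.h>
#include <stdio.h>

int main(void)
{
    feclearexcept(FE_ALL_EXCEPT);            // start with a clean slate

    volatile double zero = 0.0;
    volatile double x = 1.0 / zero;          // sets FE_DIVBYZERO, result is +Inf
    (void)x;

    if (fetestexcept(FE_DIVBYZERO))
        printf("a division by zero happened somewhere above\n");
    if (fetestexcept(FE_INVALID))
        printf("an invalid operation happened somewhere above\n");
    return 0;
}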
In Linux, you can trap these exceptions by trapping the SIGFPE signal. If you do nothing, these exceptions will terminate your program. To set a handler, use the signal function, passing the signal you wish to have trapped, and the function to be called in the event the signal fires.

How do division-by-zero exceptions work?

How is division calculated at the compiler/chip level?
And why does C++ always throw these exceptions at run-time instead of compile-time (in case the divisor is known to be zero at compile time)?
It depends. Some processors have a hardware divide instruction; some have to do the calculation in software.
Some C++ compilers don't trap at runtime either, often because there is no hardware support for trapping on divide by zero.
It totally depends on the compiler. If you want, you can write an extension for your compiler to check for this kind of problem.
For example, Visual C++:
Division by zero Compiler error
At the chip level, division is of course done with circuits. Here's an overview of binary division circuitry.
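For a flavour of what that circuitry computes, here is a sketch of textbook shift-and-subtract ("restoring") division, essentially the loop a simple hardware divider runs one bit per cycle:

#include <cstdint>
#include <stdexcept>

std::uint32_t udiv(std::uint32_t n, std::uint32_t d)
{
    if (d == 0)
        throw std::domain_error("division by zero");
    std::uint32_t q = 0, r = 0;
    for (int i = 31; i >= 0; --i) {
        r = (r << 1) | ((n >> i) & 1u);   // bring down the next dividend bit
        if (r >= d) {                     // trial subtraction succeeds
            r -= d;
            q |= 1u << i;
        }
    }
    return q;                             // the remainder is left in r
}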
Because the C++ compiler just isn't checking for divisors that are guaranteed to equal 0. It could check for this.
Big lookup tables. Remember those multiplication tables from school? Same idea, but division instead of multiplication. Obviously not every single number is in there, but the number is broken up into chunks and then shoved through the table.
The division takes place at runtime, not at compile time. Yes, the compiler could check whether the divisor is known to be zero, but people generally aren't expected to write an obviously invalid statement like that.