I was reading Setting an int to Infinity in C++. I understand that when one needs true infinity, one is supposed to use numeric_limits<float>::infinity(). I guess the rationale behind it is that integral types usually have no values designated for representing special states like NaN, Inf, etc. the way IEEE 754 floats do (and C++ doesn't mandate either - the int and float representations used are left to the implementation); but it is still misleading that max > infinity for a given type. I'm trying to understand the rationale behind this call in the standard. If having infinity doesn't make sense for a type, shouldn't it be disallowed instead of having a flag to be checked for its validity?
The function numeric_limits<T>::infinity() makes sense for those T for which numeric_limits<T>::has_infinity returns true.
In the case of T=int, it returns false, so the comparison doesn't make sense: numeric_limits<int>::infinity() does not return any meaningful value to compare with.
If you read e.g. this reference you will see a table showing infinity to be zero for integer types. That's because integer types in C++ can't, by definition, be infinite.
Suppose, conversely, that the standard did reserve some value to represent infinity, and that numeric_limits<int>::infinity() > numeric_limits<int>::max(). That would mean there is some value of int which is greater than max(), i.e. some representable value of int is greater than the greatest representable value of int.
Clearly, whichever way the Standard specifies it, some natural understanding is violated: either infinity() <= max(), or there exists an x such that int(x) > max(). The Standard must choose which rule of nature to violate.
I believe they chose wisely.
numeric_limits<int>::infinity() returns the representation of positive infinity, if available.
In the case of integers, positive infinity does not exist:
cout << "int has infinity: " << numeric_limits<int>::has_infinity << endl;
prints
int has infinity: false
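To make that concrete, here is a minimal sketch (the helper name upper_sentinel is my own, not a standard facility) that uses infinity() only when has_infinity is true, and otherwise falls back to max():

#include <iostream>
#include <limits>

// Hypothetical helper: pick the "largest possible" sentinel value for T.
template <class T>
constexpr T upper_sentinel() {
    return std::numeric_limits<T>::has_infinity
               ? std::numeric_limits<T>::infinity()
               : std::numeric_limits<T>::max();
}

int main() {
    std::cout << std::boolalpha;
    std::cout << "int has_infinity:    " << std::numeric_limits<int>::has_infinity    << '\n'; // false
    std::cout << "int infinity():      " << std::numeric_limits<int>::infinity()      << '\n'; // 0 (meaningless)
    std::cout << "int sentinel:        " << upper_sentinel<int>()                     << '\n'; // INT_MAX
    std::cout << "double has_infinity: " << std::numeric_limits<double>::has_infinity << '\n'; // true
    std::cout << "double sentinel:     " << upper_sentinel<double>()                  << '\n'; // inf
}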
Context
While reading Consistent comparison, I noticed a peculiar usage of the verb to compare:
There’s a new three-way comparison operator, <=>. The expression a <=> b
returns an object that compares <0 if a < b, compares >0 if a > b, and
compares ==0 if a and b are equal/equivalent.
Another example found on the internet (emphasis mine):
It returns a value that compares less than zero on failure. Otherwise,
the returned value can be used as the first argument on a later call
to get.
One last example, found on GitHub (emphasis mine):
// Perform a circular 16 bit compare.
// If the distance between the two numbers is larger than 32767,
// and the numbers are larger than 32768, subtract 65536
// Thus, 65535 compares less than 0, but greater than 65534
// This handles the 65535->0 wrap around case correctly
Of course, for experienced programmers the meaning is clear. But the way the verb to compare is used in these examples is not standard in any standardized form of English.
Questions*
How does the programming jargon sentence "The object compares less than zero" translate into plain English?
Does it mean that if the object is compared with 0 the result will be "less than zero"?
Why would it be wrong to say "the object is less than zero" instead of "the object compares less than zero"?
* I asked for help on English Language Learners and English Language & Usage.
"compares <0" in plain English is "compares less than zero".
This is a common shorthand, I believe.
So to apply this onto the entire sentence gives:
The expression a <=> b returns an object that compares less than zero
if a is less than b, compares greater than zero if a is greater than
b, and compares equal to zero if a and b are equal/equivalent.
Which is quite a mouthful. I can see why the authors would choose to use symbols.
What I am interested in, more exactly, is an equivalent expression of "compares <0". Does "compares <0" mean "evaluates to a negative number"?
First, we need to understand the difference between what you quoted and actual wording for the standard. What you quoted was just an explanation for what would actually get put into the standard.
The standard wording in P0515 for the language feature operator<=> is that it returns one of 5 possible types. Those types are defined by the library wording in P0768.
Those types are not integers. Or even enumerations. They are class types. Which means they have exactly and only the operations that the library defines for them. And the library wording is very specific about them:
The comparison category types’ relational and equality friend functions are specified with an anonymous parameter of unspecified type. This type shall be selected by the implementation such that these parameters can accept literal 0
as a corresponding argument. [Example: nullptr_t satisfies this requirement. —
end example] In this context, the behaviour of a program that supplies an argument other than a literal 0 is undefined.
Therefore, Herb's text is translated directly into standard wording: it compares less than 0. No more, no less. Not "is a negative number"; it's a value type where the only thing you can do with it is compare it to zero.
It's important to note how Herb's descriptive text "compares less than 0" translates to the actual standard text. The standard text in P0515 makes it clear that the result of 1 <=> 2 is strong_order::less. And the standard text in P0768 tells us that strong_order::less < 0 is true.
But it also tells us that all other comparisons are the functional equivalent of the descriptive phrase "compares less than 0".
For example, if -1 "compares less than 0", then that would also imply that it does not compare equal to zero. And that it does not compare greater than 0. It also implies that 0 does not compare less than -1. And so on.
P0768 tells us that the relationship between strong_order::less and the literal 0 fits all of the implications of the words "compares less than 0".
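To see that wording in action, here is a minimal C++20 sketch (nothing beyond the standard <compare> machinery is assumed):

#include <compare>
#include <iostream>
#include <type_traits>

int main() {
    auto r = 1 <=> 2;   // r is std::strong_ordering - a class type, not an int
    static_assert(std::is_same_v<decltype(r), std::strong_ordering>);

    std::cout << std::boolalpha;
    std::cout << (r <  0) << '\n';                           // true:  r "compares less than 0"
    std::cout << (r == 0) << '\n';                           // false: it does not compare equal to 0
    std::cout << (r >  0) << '\n';                           // false: it does not compare greater than 0
    std::cout << (r == std::strong_ordering::less) << '\n';  // true
    // int n = r;       // ill-formed: there is no conversion to an integer
    // bool b = r < 1;  // only a literal 0 is allowed on the other side
}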
"acompares less than zero" means that a < 0 is true.
"a compares == 0 means that a == 0 is true.
The other expressions I'm sure make sense now right?
Yes, an "object compares less than 0" means that object < 0 will yield true. Likewise, compares equal to 0 means object == 0 will yield true, and compares greater than 0 means object > 0 will yield true.
As to why he doesn't use the phrase "is less than 0", I'd guess it's to emphasize that this is all that's guaranteed. For example, this could be essentially any arbitrary type, including one that doesn't really represent an actual value, but instead only supports comparison with 0.
Just, for example, let's consider a type something like this:
class comparison_result {
    enum { LT, GT, EQ } res;

    template <class Integer>
    friend bool operator<(comparison_result c, Integer) { return c.res == LT; }

    template <class Integer>
    friend bool operator<(Integer, comparison_result c) { return c.res == GT; }

    // and similarly for `>` and `==`
};
(The friend template declarations are only sketched here--I think you get the basic idea, anyway.)
This doesn't really represent a value at all. It just represents the result of "if compared to 0, should the result be less than, equal to, or greater than". As such, it's not that it is less than 0, only that it produces true or false when compared to 0 (but produces the same results when compared to another value).
As to whether <0 being true means that >0 and ==0 must be false (and vice versa): there is no such restriction on the return type for the operator itself, and the language doesn't even include a way to specify or enforce such a requirement. Nothing in the spec prevents them from all returning true; that would be allowed, but it's probably pretty far-fetched.
Returning false for all of them is entirely reasonable though--just, for example, any and all comparisons with floating point NaNs should normally return false. NaN means "Not a Number", and something that's not a number isn't less than, equal to or greater than a number. The two are incomparable, so in every case, the answer is (quite rightly) false.
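For a concrete case of "everything compares false", take NaN (assuming IEEE 754 doubles):

#include <iostream>
#include <limits>

int main() {
    double nan = std::numeric_limits<double>::quiet_NaN();

    std::cout << std::boolalpha;
    std::cout << (nan <  0.0) << '\n';   // false
    std::cout << (nan == 0.0) << '\n';   // false
    std::cout << (nan >  0.0) << '\n';   // false
    // In C++20, nan <=> 0.0 yields std::partial_ordering::unordered,
    // which likewise compares neither less than, equal to, nor greater than 0.
}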
I think the other answers so far have mostly covered what the result of the operation is, and that should be clear by now. VTT's answer explains it best, IMO.
However, so far none have addressed the English usage behind it.
"The object compares less than zero." is simply not standard English; at best it is jargon or slang, which makes it all the more confusing for non-native speakers.
An equivalent would be:
A comparison of the object using <0 (less than zero) always returns true.
That's quite lengthy, so I can understand why a "shortcut" was created:
The object compares less than zero.
It means that the expression will return an object that can be compared against 0 with <, >, or ==.
If a and b are integers, the expression yields a result that compares less than zero if a is less than b.
The expression yields a result that compares equal to zero if a == b.
And the expression yields a result that compares greater than zero if a is greater than b. (The result is an ordering object, not an actual integer such as -1, 0, or 1.)
If x and y are double, why can I not do:
int nx = floor(x) or int ny = floor(y) to round down to a whole number which would work with int?
Even when we consider only integers, float can store values that int cannot. For example, consider this case:
float x = std::numeric_limits<int>::max() + 1.0f;
// Even floored, the value is out of range!
int y = floor(x);
There are even some other, special values like positive infinity, negative infinity, and NaN, which an int variable cannot hold. (There's also negative zero, but that's defined in the standard to be equal to positive zero, so it manages to squeak through.)
Because of this, the conversion is considered "narrowing" and you should perform it explicitly with a cast (so that both the compiler and the future maintainers of your program know that it was not a mistake):
int y = static_cast<int>(floor(x));
"Narrowing conversion" simply means that the domain of the destination type is not a subset of the domain of the source type, so there are some inputs that cannot accurately be represented in the destination type. The explicit cast is your way of telling the compiler that you are willing to accept the consequences if a conversion is performed where the value cannot be represented in the destination type.
Also note that the default behavior when converting from a floating-point type to an integer type is to truncate the fractional component (rounding toward zero), so for non-negative values the floor() call is redundant; for negative values, truncation and floor() give different results (see the sketch below). In the non-negative case you can just do:
int y = static_cast<int>(x);
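A small sketch of where truncation and floor() agree and where they differ:

#include <cmath>
#include <iostream>

int main() {
    double a = 2.7, b = -2.7;

    std::cout << static_cast<int>(a)             << '\n';  // 2   (truncation and floor agree)
    std::cout << static_cast<int>(std::floor(a)) << '\n';  // 2
    std::cout << static_cast<int>(b)             << '\n';  // -2  (truncated toward zero)
    std::cout << static_cast<int>(std::floor(b)) << '\n';  // -3  (rounded toward negative infinity)
}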
You have to cast the result to int so you can store it into one:
int nx = (int)(floor(x));
You have to know about the domain of your values. If, as you say in the comments, you are using the double overload (see cppreference for full details) then there is indeed a possible loss of data.
double can represent numbers up to about 1e308, though above 2^52 (about 4.5e15) there is no fractional part. int can only manage about 2e9. So if you know that your domain will never exceed 2 billion, it is safe to use, and you can either ignore the warning or use a cast to make it go away.
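If you cannot guarantee the domain up front, one option is to check the range before converting. A minimal sketch (the helper name to_int_checked is made up for illustration; it assumes the usual case where every int value is exactly representable as a double):

#include <cmath>
#include <limits>
#include <optional>

// Hypothetical helper: floor x and convert to int only if the result fits.
std::optional<int> to_int_checked(double x) {
    if (std::isnan(x))
        return std::nullopt;                                   // NaN has no integer value
    double f = std::floor(x);
    if (f < static_cast<double>(std::numeric_limits<int>::min()) ||
        f > static_cast<double>(std::numeric_limits<int>::max()))
        return std::nullopt;                                   // out of int's domain (includes +/-infinity)
    return static_cast<int>(f);                                // safe: the value is representable
}

The caller can then decide what to do when the conversion would lose the value entirely.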
double d = 1.5;
int i = d;
The second initialization truncates the floating-point value and stores the result into i. The result here is valid and well-defined: i gets the value 1. (If the resulting value could not be represented in an int the behavior would be undefined; that's not the case here.)
The warning is telling you that the conversion loses information. That's correct, because values like 1.4 and 1.5 will both be converted to 1, so you would no longer be able to tell the difference. The new terminology for that is a "narrowing conversion". But despite the warning, the conversion is legal. There is no need for a cast here, except perhaps to quiet over-zealous compilers. Unless, of course, you have your compiler set to turn warnings into errors, in which case you'll spend a fair amount of time sorting out how to persuade your compiler to compile valid and meaningful code that some compiler-writer thinks deserves a warning.
Consider the following code on a one's complement architecture:
int zero = 0;
int negzero = -0;
std::cout<<(negzero < zero)<<std::endl;
std::cout<<(negzero <= zero)<<std::endl;
std::cout<<(negzero == zero)<<std::endl;
std::cout<<(~negzero)<<(~zero)<<std::endl;
std::cout<<(1 << negzero)<<std::endl;
std::cout<<(1 >> negzero)<<std::endl;
What output would the code produce?
What lines are defined by the standard, what lines are implementation dependent, and what lines are undefined behaviour?
Based on my interpretation of the standard:
The C++ standard in §3.9.1/p3 Fundamental types [basic.fundamental] actually defers to the C standard:
The signed and unsigned integer types shall satisfy the constraints
given in the C standard, section 5.2.4.2.1.
Now if we go to ISO/IEC 9899:2011 section 5.2.4.2.1, it gives a forward reference to §6.2.6.2/p2 Integer types (emphasis mine):
If the sign bit is zero, it shall not affect the resulting value. If
the sign bit is one, the value shall be modified in one of the
following ways:
the corresponding value with sign bit 0 is negated (sign and
magnitude);
the sign bit has the value −(2^M) (two’s complement);
the sign bit has the value −(2^M − 1) (ones’ complement).
Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two),
or with sign bit and all value bits 1 (for ones’ complement), is a
trap representation or a normal value. In the case of sign and
magnitude and ones’ complement, if this representation is a normal
value it is called a negative zero.
Consequently, the existence of negative zero is implementation defined.
If we proceed further in paragraph 3:
If the implementation supports negative zeros, they shall be generated
only by:
the &, |, ^, ~, <<, and >> operators with operands that produce such
a value;
the +, -, *, /, and % operators where one operand is a negative zero
and the result is zero;
compound assignment operators based on the above cases.
It is unspecified whether these cases actually generate a negative
zero or a normal zero, and whether a negative zero becomes a normal
zero when stored in an object.
Consequently, it is unspecified whether the related cases that you displayed are going to generate a negative zero at all.
Now proceeding in paragraph 4:
If the implementation does not support negative zeros, the behavior of
the &, |, ^, ~, <<, and >> operators with operands that would produce
such a value is undefined.
Consequently, whether the related operations result in undefined behaviour depends on whether the implementation supports negative zeros.
First of all, your first premise is wrong:
int negzero = -0;
should produce a normal zero on any conformant architecture.
References for that were given in 101010's answer:
3.9.1 Fundamental types [basic.fundamental] §3:
... The signed and unsigned integer
types shall satisfy the constraints given in the C standard, section 5.2.4.2.1.
Later in C reference:
5.2.4.2.1 Sizes of integer types
... Forward references: representations of types (6.2.6)
and (still C):
6.2.6 Representations of types / 6.2.6.2 Integer types § 3
If the implementation supports negative zeros, they shall be generated only by:
the &, |, ^, ~, <<, and >> operators with arguments that produce such a value;
the +, -, *, /, and % operators where one argument is a negative zero and the result is
zero;
compound assignment operators based on the above cases.
So negzero = -0 is not such a construct and shall not produce a negative 0.
For the following lines, I will assume that the negative 0 was produced in a bitwise manner, on an implementation that supports it.
The C++ standard does not speak of negative zeros at all, and the C standard only says that their existence is implementation-dependent. I could not find any paragraph explicitly saying whether a negative zero should or should not compare equal to a normal zero under the relational or equality operators.
So I will just cite the C standard, 6.5.8 Relational operators §6:
Each of the operators < (less than), > (greater than), <= (less than or equal to), and >=
(greater than or equal to) shall yield 1 if the specified relation is true and 0 if it is false.92)
The result has type int.
and in C++ 5.9 Relational operators [expr.rel] §5
If both operands (after conversions) are of arithmetic or enumeration type, each of the operators shall yield
true if the specified relationship is true and false if it is false.
My interpretation of the standard is that an implementation may allow an alternate representation of the integer value 0 (negative zero), but it is still a representation of the value 0 and it should behave accordingly in any arithmetic expression, because C 6.2.6.2 Integer types §3 says:
negative zeros[...] shall be generated only by [...] the +, -, *, /, and % operators where one argument is a negative zero and the result is
zero
That means that if the result is not 0, a negative 0 should perform as a normal zero.
So these two lines at least are perfectly defined and should produce 1:
std::cout<<(1 << negzero)<<std::endl;
std::cout<<(1 >> negzero)<<std::endl;
This line is clearly implementation-dependent:
std::cout<<(~negzero)<<(~zero)<<std::endl;
because an implementation could have padding bits. If there are no padding bits, on a one's complement architecture ~zero is negzero, so ~negzero should produce a 0. But I could not find in the standard whether a negative zero should display as 0 or as -0. A negative floating-point 0 should be displayed with a minus sign, but nothing seems explicit for a negative integer zero.
For the last 3 lines involving relational and equality operators, there is nothing explicit in the standard, so I would say they are implementation-defined.
TL/DR:
Implementation-dependent:
std::cout<<(negzero < zero)<<std::endl;
std::cout<<(negzero <= zero)<<std::endl;
std::cout<<(negzero == zero)<<std::endl;
std::cout<<(~negzero)<<(~zero)<<std::endl;
Perfectly defined and should produce 1:
std::cout<<(1 << negzero)<<std::endl;
std::cout<<(1 >> negzero)<<std::endl;
First of all, architectures that use one's complement (or that even distinguish a negative zero) are rather rare, and there's a reason for that: it's basically easier, hardware-wise, to add numbers in two's complement than in one's complement.
The code you've posted doesn't seem to have undefined behavior or even implementation-defined behavior; it should probably not result in a negative zero (or, if it does, that zero shouldn't be distinguishable from a normal zero).
Negative zeros should not be that easy to produce (and if you manage to do it, it's implementation-defined behavior at best). On a one's complement architecture they would be produced by ~0 (bitwise inversion) rather than -0.
The C++ standard is rather vague about the actual representation and requirements on the behavior of basic types (the specification mostly deals with the value a number represents). This means you are basically out of luck in relating the internal representation of a number to its actual value. So even if you did this right and used ~0 (or whatever way is proper for the implementation), the standard still doesn't concern itself with the representation: the value of a negative zero is still zero.
#define zero (0)
#define negzero (~0)
std::cout<<(negzero < zero)<<std::endl;
std::cout<<(negzero <= zero)<<std::endl;
std::cout<<(negzero == zero)<<std::endl;
std::cout<<(~negzero)<<(~zero)<<std::endl;
std::cout<<(1 << negzero)<<std::endl;
std::cout<<(1 >> negzero)<<std::endl;
The first three lines should produce the same output as if negzero were defined the same as zero. The fourth line should output two zeros (as the standard requires that 0 be rendered as 0, without a minus sign). The last two should output ones.
There are some hints (on how to produce negative zeros) that can be found in the C standard, which actually mentions negative zero, but I don't think there's any mention that they should compare less than a normal zero. The C standard suggests that a negative zero might not survive storage in an object (that's why I avoided that in the above example).
Given the way C and C++ are related, it's reasonable to think that a negative zero would be produced in the same way in C++ as in C, and the standard seems to allow for that. The C++ standard allows for other ways (via undefined behavior), but no other way seems to be available via defined behavior. So it's rather certain that if a C++ implementation is able to produce negative zeros in a reasonable way, it would be the same way as for a similar C implementation.
Given the following declaration:
double x;
What is the value of x?
Is my answer correct?
The value of x is: plus or minus 10 to the 308th (limited to ~12 significant digits)
No, your answer is not correct. What thought process led you to that answer? Where might you have gone wrong?
Given the proliferation of "answers" on this question, I'm just going to come out and state it. The answer is that the value of x is undefined. It's an uninitialized value. If you try to read the value, in practice you'll get garbage (i.e. you'll get whatever bit pattern was in that memory location, reinterpreted as a double). But it's not as simple as that. Since the value is undefined, the optimization pass of the compiler is actually free to make choices based on the undefined value that it could not have made if the variable had any defined value. Any attempts to use an uninitialized variable can produce unexpected results, and is certainly a programming error.
There is one caveat, as I alluded to in my comment. If this declaration happens at the top level (or is modified with the static keyword) then the value becomes simply 0.0.
When you say T x; you default-initialize the variable x, and for a fundamental type T such as double this means that no initialization happens at all, the variable has indeterminate value (cf. 8.5), and reading the variable x before writing to it is simply undefined behaviour (cf. the note in 17.6.3.3/2).
So it's much worse than just getting an unknown value - rather, your entire program becomes non-deterministic at the point of invoking undefined behaviour.
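A minimal sketch of the distinction, and of the usual ways to avoid the indeterminate value:

#include <iostream>

int main() {
    double a;        // default-initialized: indeterminate value; reading it is undefined behaviour
    double b{};      // value-initialized: guaranteed to be 0.0
    double c = 0.0;  // explicitly initialized

    (void)a;                            // referenced only to silence unused-variable warnings
    // std::cout << a << '\n';          // don't: undefined behaviour
    std::cout << b << ' ' << c << '\n'; // prints: 0 0
}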
The answer depends on where the variable is defined.
#include <iostream>

double x;           // static storage duration: zero-initialized, so 0.0

int main() {
    double y;       // automatic storage duration: indeterminate value

    std::cout << x << " " << y << std::endl;  // reading y here is undefined behaviour
    return 0;
}
Different rules apply to global and local variables.
x will be initialized to zero.
y, as everyone else has stated, could be anything.
The value of x could be anything. It's undefined behavior to access the value, and so you could even get a value double can't contain. If x has static storage then it is defined, and will be initialized to 0.0.
In practice it will be a pseudo random assortment of bits. Not really random as it's left over from what was previously in the same location in memory. So yes, it will be within your range, along with infinities and NaN as those are also doubles.
You are correct.
The value is uninitialized and may contain anything resident in memory. Double precision has a 53-bit significand, so your range is approximately right. I didn't check your precision figure, but it isn't exact: a double carries roughly 15-16 significant decimal digits, and the absolute spacing of representable values is finest around zero.
It's worth mentioning that there's also positive and negative infinity, plus NaN, and finally negative zero (yes, negative zero!). And since there are many distinct bit patterns that all decode to NaN, the garbage in x need not even look like an ordinary number :-).
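For reference, the actual limits are easy to query; a quick sketch (the values in the comments are for the common IEEE 754 binary64 double):

#include <iostream>
#include <limits>

int main() {
    using lim = std::numeric_limits<double>;
    std::cout << "max:           " << lim::max()         << '\n';  // about 1.8e308
    std::cout << "lowest:        " << lim::lowest()      << '\n';  // about -1.8e308
    std::cout << "digits10:      " << lim::digits10      << '\n';  // 15 decimal digits
    std::cout << "has_infinity:  " << lim::has_infinity  << '\n';  // 1
    std::cout << "has_quiet_NaN: " << lim::has_quiet_NaN << '\n';  // 1
}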