The value of a double - C++

Given the following declaration:
double x;
What is the value of x?
Is my answer correct?
The value of x is: plus or minus 10 to the 308th (limited to ~12 significant digits)

No, your answer is not correct. What thought process led you to that answer? Where might you have gone wrong?
Given the proliferation of "answers" on this question, I'm just going to come out and state it. The answer is that the value of x is undefined. It's an uninitialized value. If you try to read the value, in practice you'll get garbage (i.e. you'll get whatever bit pattern was in that memory location, reinterpreted as a double). But it's not as simple as that. Since the value is undefined, the optimization pass of the compiler is actually free to make choices based on the undefined value that it could not have made if the variable had any defined value. Any attempt to use an uninitialized variable can produce unexpected results, and is certainly a programming error.
There is one caveat, as I alluded to in my comment. If this declaration happens at the top level (or is modified with the static keyword) then the value becomes simply 0.0.

When you say T x; you default-initialize the variable x, and for a fundamental type T such as double this means that no initialization happens at all, the variable has indeterminate value (cf. 8.5), and reading the variable x before writing to it is simply undefined behaviour (cf. the note in 17.6.3.3/2).
So it's much worse than just getting an unknown value - rather, your entire program becomes non-deterministic at the point of invoking undefined behaviour.
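To make the distinction concrete, here is a minimal sketch (not from the question) contrasting default-initialization with value-initialization for a fundamental type:
double a;       // default-initialized: indeterminate value, reading it is UB
double b{};     // value-initialized: guaranteed to be 0.0
double c = 0.0; // explicitly initialized: 0.0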

The answer depends on where the variable is defined.
#include <iostream>

double x;

int main() {
    double y;
    std::cout << x << " " << y << std::endl;
    return 0;
}
Different rules apply to global and local variables.
x will be initialized to zero.
y, as everyone else has stated, could be anything.

The value of x could be anything. It's undefined behavior to access the value, so you aren't even guaranteed to observe something that behaves like an ordinary double. If x has static storage duration then it is defined, and will be initialized to 0.0.
In practice it will be whatever bit pattern was previously in that memory location - a pseudo-random assortment of bits, though not truly random. So yes, it will usually fall within your range, along with infinities and NaNs, since those are also valid doubles.

You are correct.
The value is uninitialized and may contain whatever was resident in memory. Double precision has a 53-bit significand, so your range is approximately right. I didn't check your precision figure, but it isn't exact, since precision is not uniform: values are represented most densely around zero.
It's worth mentioning that there's also positive and negative infinity, plus NaN, and finally negative zero (yes, negative zero!). There are also bit patterns that don't correspond to any ordinary numeric value, and x could contain one of those too :-).
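For completeness, here is a small sketch (my own illustration, assuming <limits> and <iostream>) that prints the special values mentioned above:
#include <limits>
#include <iostream>

int main() {
    std::cout << std::numeric_limits<double>::infinity() << '\n';   // inf
    std::cout << -std::numeric_limits<double>::infinity() << '\n';  // -inf
    std::cout << std::numeric_limits<double>::quiet_NaN() << '\n';  // nan (or -nan)
    std::cout << -0.0 << '\n';                                      // -0, negative zero
    return 0;
}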

Related

Why is numeric_limits::infinity() misbehaving logically for integral types [duplicate]

I was reading Setting an int to Infinity in C++. I understand that when one needs true infinity, one is supposed to use numeric_limits<float>::infinity(). I guess the rationale behind it is that integral types usually have no values designated for representing special states like NaN, Inf, etc. the way IEEE 754 floats do (again, C++ mandates neither; the representations of int and float are left to the implementation). Still, it's misleading that max > infinity for a given type. I'm trying to understand the rationale behind this call in the standard. If having infinity doesn't make sense for a type, then shouldn't it be disallowed, instead of having a flag to be checked for its validity?
The function numeric_limits<T>::infinity() makes sense for those T for which numeric_limits<T>::has_infinity returns true.
In case of T=int, it returns false. So that comparison doesn't make sense, because numeric_limits<int>::infinity() does not return any meaningful value to compare with.
If you read e.g. this reference you will see a table showing infinity to be zero for integer types. That's because integer types in C++ can't, by definition, be infinite.
Suppose, conversely, the standard did reserve some value to represent infinity, and that numeric_limits<int>::infinity() > numeric_limits<int>::max(). That would mean there was some value of int greater than max(), that is, some representable value of int greater than the greatest representable value of int.
Clearly, whichever way the Standard specifies it, some natural understanding is violated. Either infinity() <= max(), or there exists x such that int(x) > max(). The Standard must choose which rule of nature to violate.
I believe they chose wisely.
numeric_limits<int>::infinity() returns the representation of positive infinity, if available.
In the case of integers, positive infinity does not exist:
cout << "int has infinity: " << numeric_limits<int>::has_infinity << endl;
prints
int has infinity: false
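As a rule of thumb, consult has_infinity before relying on infinity(). A minimal sketch (the helper name is my own, not anything from the standard library):
#include <limits>
#include <iostream>

template <typename T>
void print_infinity_if_any() {
    if (std::numeric_limits<T>::has_infinity)
        std::cout << std::numeric_limits<T>::infinity() << '\n';
    else
        std::cout << "no infinity for this type\n";
}

int main() {
    print_infinity_if_any<double>(); // prints inf
    print_infinity_if_any<int>();    // prints "no infinity for this type"
    return 0;
}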

Is using an uninitialized unsigned type object Undefined Behavior?

I know that using (accessing the value of) an uninitialized non-static, non-global object of a built-in integral type is undefined behavior.
int x; // x is defined inside a function scope for example main
++x;// UB
signed char c = 127; // max positive value for char on my machine
c++; // UB overflowing a signed char.
Up to here it is OK, but what about unsigned types?
unsigned char uc = 255; // ok
uc++; // ok c now has the value 0
It is OK here because unsigned arithmetic wraps around: the bits outside the range are simply discarded.
So, since every value an unsigned object can hold is a valid value, is it harmless to do this:
unsigned int x; // local non-static uninitialized
std::cout << x << '\n';// is it UB or OK?
As you can see, no (indeterminate) value stored in an unsigned can overflow it, so is it wrong to do this?
I don't care what value is in x, but I think it doesn't cause any harm.
If it is OK, then I guess the compiler effectively produces a random value when an uninitialized unsigned non-static, non-global object is used.
Is using an uninitialized unsigned type object Undefined Behavior?
Yes.
but I think it doesn't cause any harm.
You might be right. But since it's UB you might also be wrong. You can never be sure.
One sneaky thing compilers do is detect your Undefined Behavior and "optimize" your program by outright removing code paths that reach this UB.
Example (program compiles, but does nothing)
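As a rough sketch of the same idea (this is not the linked example, just an illustration): because reading x below is undefined behaviour, an optimizer is entitled to assume the read never happens and may emit either branch, both, or neither.
#include <cstdio>

int main() {
    unsigned int x;          // uninitialized: indeterminate value
    if (x < 100)             // reading x here is undefined behaviour
        std::printf("small\n");
    else
        std::printf("large\n");
    return 0;
}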
From cppreference :
Use of an indeterminate value obtained by default-initializing a non-class variable of any type is undefined behavior
An object with an indeterminate value is distinct from an object whose value you don't know. Using an indeterminate value is Undefined Behavior (with a few exceptions, see the link above) even if every possible value the object could have would have defined behavior.
It is an error to assume that an object with indeterminate value has any one of its possible values. The reason ++x is Undefined Behavior is not because it might already have the maximum value and overflow. It is Undefined Behavior simply because x is not initialized.
If uninitialized objects were required to behave as though they were initialized with some unspecified bit pattern, that would impede some useful optimizations and diagnostics. There are times when it may be useful for programs to be able to perform some operations with uninitialized objects (e.g. if code initializes a variable subset of the objects in a group and wants to copy all of the initialized ones, blindly copying all of the objects may be faster than trying to selectively copy only the initialized ones, or than having to initialize even the objects whose values do not affect program execution). But there are other times when the diagnostics or optimizations would be more helpful. Because the authors of the Standard can't possibly know in every case whether it would be most useful to have implementations treat uninitialized values as though initialized with unspecified bit patterns, as causing a trap when read, or as behaving in some other way that might facilitate optimization, they allow implementations to process such situations in whatever manner would be most useful to their customers.
Consider, for example:
#include <stdint.h>

uint32_t readInteger(void);  /* hypothetical input routine, defined elsewhere */

struct thing { uint32_t count; uint16_t dat[126]; };
struct thing x, y, z;

void test(void)
{
    struct thing temp;
    temp.count = readInteger();
    for (int i = 0; i < temp.count; i++)
        temp.dat[i] = readInteger();   /* note: the call needs parentheses */
    x = temp;
    y = temp;
    z = y;
}
Depending upon the code's intended purpose, there are at least five ways it might be useful to have implementations process it:
1. Treat temp as though unwritten parts were initialized with arbitrary bit patterns, which would then be copied into x, y, and z (meaning all of the structures would be guaranteed to hold matching unspecified data).
2. Trap on the attempt to copy temp without fully initializing its contents, since code which copies uninitialized data can pose a security risk if the execution context would have access to confidential data.
3. Treat every read of an uninitialized part of an automatic object as though it might yield an independent unspecified value, thus allowing a compiler to leave the corresponding parts of x and y holding any convenient values (including whatever they happened to hold before the function call), but treat a store of an indeterminate value to a static-duration or heap-duration object as though it writes an unspecified bit pattern (so unwritten parts of y and z would be guaranteed to match each other, even though they might not match x).
4. Treat every read of an indeterminate value as yielding a "contagious" indeterminate value, thus allowing a compiler to leave the corresponding parts of x, y, and z holding any convenient values, with no guarantee that any of them match.
5. Use the fact that code would invoke UB if temp.count weren't equal to 126 to optimize calling code based upon the fact that it "will" always equal 126.
Because none of the above behaviors would be most useful in every case, the Standard allows compilers to select among them in whatever fashion would be most useful to their customers.

Using floor to get whole number from double

If x and y are double, why can I not do:
int nx = floor(x) or int ny = floor(y) to round down to a whole number which would work with int?
Even when we consider only integers, float can store values that int cannot. For example, consider this case:
float x = std::numeric_limits<int>::max() + 1.0f;
// Even floored, the value is out of range!
int y = floor(x);
There are even some other, special values like positive infinity, negative infinity, and NaN, which an int variable cannot hold. (There's also negative zero, but that's defined in the standard to be equal to positive zero, so it manages to squeak through.)
Because of this, the conversion is considered "narrowing" and you should explicitly perform it with a cast (so that both the compiler and the future maintainers of your program know that it was not a mistake):
int y = static_cast<int>(floor(x));
"Narrowing conversion" simply means that the domain of the destination type is not a subset of the domain of the source type, so there are some inputs that cannot accurately be represented in the destination type. The explicit cast is your way of telling the compiler that you are willing to accept the consequences if a conversion is performed where the value cannot be represented in the destination type.
Also note that the default behavior when converting from a floating-point type to an integer type is to truncate the fractional component (rounding toward zero), so for non-negative values the floor() call is redundant. You can just do:
int y = static_cast<int>(x);
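Note that truncation and floor only agree for non-negative values; for negative ones the results differ, as this quick sketch shows:
#include <cmath>
#include <iostream>

int main() {
    double d = -1.5;
    std::cout << static_cast<int>(d) << '\n';              // -1: truncation rounds toward zero
    std::cout << static_cast<int>(std::floor(d)) << '\n';  // -2: floor rounds toward negative infinity
    return 0;
}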
you have to cast the result to int so you can store it into one...
int nx = (int)(floor(x));
You have to know about the domain of your values. If, as you say in the comments, you are using the double overload (see cppreference for full details) then there is indeed a possible loss of data.
double can represent numbers up to about 1e308, though above 2^52 (roughly 4.5e15) it can no longer hold a fractional part. int can typically only manage about 2e9. So if you know that your domain will never exceed 2 billion, then it should be safe to use, and you can either ignore the warning or use a cast to make it go away.
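If you do need to guard against out-of-range values at run time, one approach (a sketch; the helper name is made up for illustration) is to check the floored value against the int limits before converting:
#include <cmath>
#include <limits>

// Hypothetical helper: convert only when the floored value fits in an int,
// otherwise return the supplied fallback (NaN and infinities also hit the fallback).
int to_int_or(double x, int fallback) {
    double f = std::floor(x);
    if (f >= static_cast<double>(std::numeric_limits<int>::min()) &&
        f <= static_cast<double>(std::numeric_limits<int>::max()))
        return static_cast<int>(f);
    return fallback;
}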
double d = 1.5;
int i = d;
The second initialization truncates the floating-point value and stores the result into i. The result here is valid and well-defined: i gets the value 1. (If the resulting value could not be represented in an int, the behavior would be undefined; that's not the case here.)
The warning is telling you that the conversion loses information. That's correct, because values like 1.4 and 1.5 will both be converted to 1, so you would no longer be able to tell the difference. The new terminology for that is a "narrowing conversion". But despite the warning, the conversion is legal. There is no need for a cast here, except perhaps to quiet over-zealous compilers. Unless, of course, you have your compiler set to turn warnings into errors, in which case you'll spend a fair amount of time sorting out how to persuade your compiler to compile valid and meaningful code that some compiler-writer thinks deserves a warning.

conversion from a base type to other pointers

I want to make a pointer of type double hold the address stored in a pointer of type int, which points to another variable:
int x=23;
int *f_var =&x;
double*l_ptr = (double *)f_var;
Both of these pointers hold the same address, but when I display the values they point to, f_var displays the right one while l_ptr displays a strange value. Why does this happen? I would be glad if you could explain it to me. Why don't they have the same value? If both of them point to the same location, and a double can store an int, why are the values different?
Your question itself is flawed, but I'll attempt to respond to a couple of aspects of it.
Firstly, an int is not a base type for a double. Neither is a pointer to int a base type for a pointer to double. So this question is mis-titled.
Second, there is no guarantee that a floating point type can store every value an int can. In practice, that depends on the floating point representation (how many bits it uses to represent the mantissa). It often turns out that a float or double can represent some of the values an int can, but not others (for which it stores a value that is an approximation). This is part of the trade-off of a floating point variable being able to represent non-integral values over a larger range than an int can.
Third, your description "display their values" is ambiguous. If you are doing
std::cout << f_var << ' ' << l_ptr << '\n';
or (more explicitly, without relying on implicit conversions to void pointers)
std::cout << (void *)f_var << ' ' << (void *)l_ptr << '\n';
the value of the pointers (i.e. the address they hold) will be printed. In practice, the values output will typically be the same - it is their type that differs, not their value. But the values of these pointers are the address in memory of the variable x, not the value of x.
Alternatively, if you are doing
std::cout << *f_var << ' ' << *l_ptr << '\n';
the result is undefined behaviour, since l_ptr is being dereferenced as if it points to a double, despite pointing to an int. The result of this can be anything (printing garbage, printing values you don't expect, reformatting your hard drive, anything). In practice, it probably attempts to interpret the bits that make up an int (i.e. the variable x) as if they represent a floating point value. Since the bits in an int have different meaning from the bits in a double (there is a mantissa and an exponent in floating point representations, which is why a double can represent values like 0.5E15 or 0.5E-3) the same set of bits represents a different value. There is also the wrinkle that, typically, a double consists of more bits than an int does - so, by printing in this way, your code is interpreting random memory to the right of x as data.
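If the goal is simply to end up with a double holding the same numeric value as the int, convert the value rather than the pointer; a minimal sketch:
int x = 23;
double d = static_cast<double>(x); // converts the value: d == 23.0
double *l_ptr = &d;                // a double* that really points at a double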
Actually, when you do that:
int x=23;
int *f_var = &x;
double *l_ptr = (double *)f_var;
l_ptr isn't a "pointer of type double [which points] to a pointer of type int" (as you said) but a pointer to double which stores the address of an int. Dereferencing it is undefined behaviour.
[Edit: actually, the cast from int* to double* is already undefined behaviour, so l_ptr may store something else than the address of x]
[Edit: oops, no, you can "safely" reinterpret_cast from any pointer to any pointer/integral large enough to hold its value]
Dereferencing a pointer to double that actually stores the address of an int isn't any better though, assuming you manage to assign that address to a pointer to double without causing undefined behaviour (and a "pointer of type double" is, strictly speaking, not a pointer at all; I assumed you meant "pointer to double").
What you see when you output *l_ptr is probably the bitwise representation of x, plus probably some garbage data (because a double may be longer in memory than an int), interpreted as a double. However, it could just as well be 42, or "you're doing weird things" (or a crash, or nothing at all): it is undefined behaviour, and the compiler may generate anything it wants and still meet the C++ standard.
An int and double are not stored the same way. For a start they're most likely different sizes in memory. But more importantly their bits are used in very different ways.
So quite simply: reinterpreting an int's memory as a double is nonsense and shouldn't be expected to show the same value.
If you were to do something like:
int a = 23;
double b = 23;
bool isSameBits = (memcmp(&a, &b, sizeof(a)) == 0);
That bool should indicate that they indeed don't use their bits the same way.
And bool isSameSize = (sizeof(a) == sizeof(b)); would indicate if they're different sizes.
If you want to know more about the underlying details, then you could perhaps google for something like "memory layout of double".
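If you want to inspect the raw bits yourself, the well-defined route is to copy them into an integer of matching size with memcpy rather than dereferencing a reinterpreted pointer. A sketch, assuming a 64-bit double:
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    double d = 23.0;
    std::uint64_t bits;
    static_assert(sizeof(bits) == sizeof(d), "assumes a 64-bit double");
    std::memcpy(&bits, &d, sizeof(bits));   // copy the bytes; no aliasing violation
    std::printf("%016llx\n", (unsigned long long)bits);
    return 0;
}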
So is it possible to make a pointer of one type point at a variable of another type and have both pointers give the same value when dereferenced? I only want to know because, as you saw, I am a noob.

Why is using the value of int i; undefined behaviour but using the value of rand() is not?

If I don't initialise i, I don't know its value, but also I cannot predict the value of rand().
But on the other hand, I know the value of the uninitialised i is between INT_MIN and INT_MAX, also I know the value of rand() is between 0 and RAND_MAX.
Why is using the value of the uninitialised i undefined behaviour but using the value of rand() is not?
The value of an uninitialized variable is undefined. The return value of rand() is well-defined as being the next number in the pseudo-random sequence for the given seed.
You can rely on rand() returning a pseudorandom number. You cannot rely on any characteristic of the value of an uninitialized int i.
I looked up the C99 standard (ISO/IEC 9899:1999), which is the only standard document I have available, but I seriously doubt these things have changed. In chapter 6.2.6 Representations of types, it is stated that integers are allowed to be stored in memory with padding bits, the value of which is unspecified but may include parity bits, which would be set upon initialization and any arithmetic operation on the integer. Certain representations (like e.g. a parity mismatch) could be trap representations, the behaviour of which is undefined (but might well terminate your program).
So, no, you cannot even rely on an uninitialized int i to be INT_MIN <= i <= INT_MAX.
The standard says reading from an uninitialized variable is undefined behaviour, so it is. You cannot say that it is between INT_MIN and INT_MAX. In fact, you cannot really think of that variable as holding any value at all, so you couldn't even check that hypothesis*.
rand() on the other hand is designed to produce a random number within a range. If reading from the result of rand() were undefined, it wouldn't be part of any library because it couldn't be used.
* Usually "undefined behaviour" provides scope for optimization. The optimizer can do whatever it wants with an uninitialized variable, working under the assumption that it isn't read from.
The value of rand() is defined to be (pseudo-)random. The value of an uninitialized variable is not defined to be anything. That is the difference: whether something is defined to mean anything (and rand() is meaningful - it gives (pseudo-)random numbers) or is undefined.
The short answer, like others have said, is because the standard says so.
The act of accessing the value of a variable (described in the standard as performing an lvalue to rvalue conversion) that is uninitialised gives undefined behaviour.
rand() is specified as giving a value between 0 and RAND_MAX, where RAND_MAX (a macro declared in <cstdlib> for C++, and <stdlib.h> for C) is specified as having a value which is at least 32767.
Consider variable x in the following code:
#include <stdint.h>

uint16_t foo(uint32_t q);   /* assume foo() and bar() return uint16_t */
uint16_t bar(uint32_t r);

uint32_t blah(uint32_t q, uint32_t r)
{
    uint16_t x;
    if (q)
        x = foo(q);
    if (r)
        x = bar(r);
    return x;   /* x is uninitialized when both q and r are zero */
}
Since the code never assigns x a value outside the range 0-65535, a compiler for a 32-bit processor could legitimately allocate a 32-bit register for it. Indeed, since the value of q is never used after the first time x is written, a compiler could use the same register to hold x as had been used to hold q. Ensuring that the function would always return a value 0-65535 would require adding some extra instructions compared with simply letting it return whatever happened to be in the register allocated to x.
Note that from the point of view of the Standard authors, if the Standard isn't going to require that the compiler make x hold a value from 0-65535, it may as well not specify anything about what may happen if code tries to use x. Implementations that wish to offer any guarantees about behavior in such cases are free to do so, but the Standard imposes no requirements.
rand uses an algorithm to generate your number, so rand is not undefined. And if, when you use srand to seed the random generation, you choose a value like 42, you will always get the same sequence each time you launch your app.
Try running this example and you will see:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    srand(42);
    printf("%d\n", rand());
    return 0;
}
see also: https://en.wikipedia.org/wiki/List_of_random_number_generators