Multiplying 1 million by 1 million - C++

Why when you do this:
int a = 1000000 , b = 1000000;
long long product = a * b;
cout << product;
it gives some seemingly random garbage value? Why do both a and b need to be long long for the calculation to work?

You are observing the effects of undefined behavior. This:
a * b
is an (arithmetic) expression of type int, as both operands a and b are of type int. The product 1000000000000 does not fit into an int, so the multiplication results in signed integer overflow, which is undefined behavior.
Either cast one of the operands to long long, thus causing the entire expression to become long long, which is sufficiently large to accept the value of 1000000000000:
#include <iostream>
int main()
{
int a = 1000000, b = 1000000;
auto product = static_cast<long long>(a) * b;
std::cout << product;
}
Or define one of the operands as long long:
#include <iostream>
int main()
{
long long a = 1000000;
int b = 1000000;
auto product = a * b;
std::cout << product;
}
Optionally, use unsigned long long instead; note that plain unsigned long is only guaranteed to be 32 bits and may be too small to hold 1000000000000.

This will work fine. When you multiply two ints, the result is an int even if the value is out of range. To have the multiplication itself carried out as long long, multiply by 1ll (or 1LL) first: since 1ll is a long long, each subsequent operand is converted to long long, so the product is computed without overflow.
int a = 1000000 , b = 1000000;
long long product = 1ll * a * b;
cout << product;

Related

long long int ans = a*b VS long long int ans = (long long int) a*b

I've written this code:
int a = 1000000000, b = 1000000000;
long long int ans = a * b;
cout << ans << '\n';
This code causes an overflow. I understand that a * b is the problem, but I declared ans as long long int precisely to hold a * b.
But look at the following code:
int a = 1000000000, b = 1000000000;
long long int ans = (long long int)a * b;
cout << ans << '\n';
It works fine with no overflow. Does the cast create a temporary variable to hold the value during the calculation? Please explain the reason behind this strange overflow.
This makes two temporary variables, (long long int)a and (long long int)b. The second conversion is implicit.
Actual compilers might not bother if the hardware has a 32*32->64 multiply, but officially the conversions have to occur. On 64-bit hardware the conversion is essentially free: loading an int into a 64-bit register sign-extends it.

Assign a small number to unsigned long long data-type without casting

I became a little confused about assigning a small number to a variable of a big data type, for example in my code:
#include <iostream>
int main()
{
unsigned long long num = 5000000000;
unsigned long long num2 = static_cast<unsigned long long>(5000000) * static_cast<unsigned long long>(1000);
unsigned long long num3 = 5000000 * 1000UL; // 1000UL is unsigned long
unsigned long long num4 = 5000000 * 1000;
std::cout << num << std::endl << num2 << std::endl << num3 << std::endl << num4;
return 0;
}
The output is
5000000000
5000000000
5000000000
705032704
I know about literal suffixes and static_cast in C++, and that in an arithmetic expression the operands are converted to the widest operand type.
But why is the result of the statement unsigned long long num4 = 5000000 * 1000; the number 705032704 and not 5000000000? BTW, I know that when I write it as 5000000 * 1000UL; it gives me 5000000000 (because the operands are converted to the larger type).
Why isn't 5000000 * 1000 converted automatically to unsigned long long without an explicit cast?
Where does the number 705032704 come from when 5000000 * 1000 is calculated?
Regards!
Your line unsigned long long num4 = 5000000 * 1000; consists of three independent parts which are evaluated separately.
The right-hand side is evaluated as int, because both operands are int. The result is not what you expect because of integer overflow: 5000000000 does not fit in a 32-bit int, and what you typically get is the value modulo 2^32, i.e. 5000000000 - 4294967296 = 705032704.
The left-hand side makes space for an unsigned long long.
The assignment copies the (unexpected) result of the right-hand side into the space allocated for the variable.

Unsigned long long Fibonacci numbers negative?

I've written a simple Fibonacci sequence generator that looks like:
#include <iostream>
void print(int c, int r) {
std::cout << c << "\t\t" << r << std::endl;
}
int main() {
unsigned long long int a = 0, b = 1, c = 1;
for (int r = 1; r <= 1e3; r += 1) {
print(c, r);
a = b;
b = c;
c = a + b;
}
}
However, as r gets to around 40, strange things begin to happen: c's value oscillates between negative and positive, even though it is an unsigned integer, and of course the Fibonacci sequence can't do that.
What's going on with unsigned long long integers?
Does c get too large even for a long long integer?
You have a narrowing conversion in print(c, r): you defined print to take an int, but here you pass it an unsigned long long. The resulting value is implementation-defined.
Quoting the C++ Standard Draft:
4.7/3 [conv.integral]: If the destination type is signed, the value is
unchanged if it can be represented in the destination type; otherwise,
the value is implementation-defined.
What typically happens is that only the low-order bits of the unsigned long long, just enough to fill an int, are passed to your function. The truncated value is then interpreted as two's complement, so depending on the most significant bit you get this alternation between positive and negative.
Change your function signature to take an unsigned long long:
void print(unsigned long long c, int r) {
std::cout << c << "\t\t" << r << std::endl;
}
BTW, see Mohit Jain's comment to your question.

Average pointer position in C++

I'm fiddling around with pointers, and as an example, the code:
int foo = 25;
int * bar = &foo;
cout << (unsigned long int) bar;
outputs something around 3216952416 when I run it.
So why does this code output such a low number, or even a negative number, when I run it?
int total = 0, amount = 0;
for( int i = 0; i < 100000; i++ )
{
int foo = 25;
int * bar = &foo;
total += (unsigned long int) bar;
amount++;
}
cout << "Average pointer position of foo: " << total / amount << endl;
It's probably something simple...
First of all, addresses (which are a pointer's value) are unsigned, but you are doing your calculation with a signed int; that is why the range is shifted and negative values can occur. Secondly, you are almost certainly overflowing in
total += (unsigned long int) bar;
so any value is possible.
Your cast doesn't help, because total is an int: the value is implicitly converted back to int when it is stored. You can change the type of total to unsigned long int, but even that is not enough. To avoid overflow, take the sum of the ratios, not the ratio of the sum:
double amount = 100000;
double average= 0;
for( int i = 0; i < amount; i++ )
{
int foo = 25;
int * bar = &foo;
average += (((unsigned long int) bar)/amount);
}
cout << "Average pointer position of foo: " << (unsigned long int)average << endl;
int cannot store an arbitrarily large number. When total becomes too large to store in an int, the value typically wraps around to the other side of int's range (formally, signed overflow is undefined behavior).
On most systems, an int variable is held using 32 bits of memory. This gives it a range of -2147483648 to 2147483647.
If your int already holds a value of 2147483647 and you add 1, the result will typically be -2147483648!
So, if you want to calculate the "average pointer value", you need to store your sum in something much larger than int, or do your calculations such that you don't need to store the sum of all the pointer values (eg. UmNyobe's answer).

Why is long long value not printed correctly?

Why is the long long value not printed as I expect in the following code snippet?
#include <stdio.h>
int main()
{
int x = 1363177921;
long long unsigned y = x * 1000000;
printf("y: %llu\n", y); // Why is 1363177921000000 not printed?
return 0;
}
It's not the printing that's at fault. You have integer overflow in:
long long unsigned y = x * 1000000;
Change that to:
long long unsigned y = x * 1000000ull;
Because your x is not a long long, nor is 1000000 - the result is only converted to long long AFTER the (int) multiplication has already overflowed.
Make it 1000000ULL, and you'll get what you want.
The problem is that both x and the literal 1000000 are of type int, so the compiler multiplies them as two ints, and only then converts the result to long long - after the overflow has already happened.
To solve this, add an explicit cast before x, use a suffix on 1000000, or declare x as long long, as shown below:
#include <stdio.h>
int main()
{
int x = 1363177921;
long long unsigned y = (long long)x * 1000000;
printf("y: %llu\n", y); // Now prints 1363177921000000
return 0;
}
http://codepad.org/rLW8fGTA