Average pointer position in C++

I'm fiddling around with pointers, and as an example, the code:
int foo = 25;
int * bar = &foo;
cout << (unsigned long int) bar;
outputs something around 3216952416 when I run it.
So why does this code output such a low number, or even a negative number, when I run it?
int total = 0, amount = 0;
for( int i = 0; i < 100000; i++ )
{
    int foo = 25;
    int * bar = &foo;
    total += (unsigned long int) bar;
    amount++;
}
cout << "Average pointer position of foo: " << total / amount << endl;
It's probably something simple...

First of all, variable addresses (which are what a pointer holds) are unsigned. You are using a signed integer for your calculation, which explains why the range is shifted and negative values can occur. Secondly, you are almost certainly overflowing when doing
total += (unsigned long int) bar;
so any value is possible.
Your cast is useless, because the type of total is int; another implicit conversion back to int is performed anyway.
You can try changing the type of total to unsigned long int, but this is not enough. To avoid overflow, you need to take the sum of the ratios, not the ratio of the sum:
double amount = 100000;
double average = 0;
for( int i = 0; i < amount; i++ )
{
    int foo = 25;
    int * bar = &foo;
    average += (((unsigned long int) bar) / amount);
}
cout << "Average pointer position of foo: " << (unsigned long int)average << endl;

An int cannot store an arbitrarily large number. When total becomes too large to fit in an int, the value wraps around to the other side of int's range (strictly speaking, signed overflow is undefined behaviour in C++, but wrapping is what you will typically observe).
On most systems, an int variable is held in 32 bits of memory. This gives it a range of -2147483648 to 2147483647.
If your int already holds the value 2147483647 and you add 1, the result will be -2147483648!
So, if you want to calculate the "average pointer value", you need to store your sum in something much larger than int, or do your calculations in a way that doesn't require storing the sum of all the pointer values (e.g. UmNyobe's answer).
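For example, a minimal sketch of the first approach (my own illustration, not part of the original answers), assuming a platform where user-space addresses are small enough that 100000 of them fit in a 64-bit sum; uintptr_t is the standard integer type intended to hold a pointer's value:
#include <cstdint>
#include <iostream>
using namespace std;

int main()
{
    uint64_t total = 0;   // much wider than int, so this sum won't wrap here
    int amount = 100000;
    for( int i = 0; i < amount; i++ )
    {
        int foo = 25;
        // reinterpret_cast to uintptr_t is the idiomatic pointer-to-integer conversion
        total += reinterpret_cast<uintptr_t>(&foo);
    }
    cout << "Average pointer position of foo: " << total / amount << endl;
}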

Related

Sum signed 32-bit int with unsigned 64-bit int

In my application, I receive two signed 32-bit ints and I have to store them. I have to create a sort of counter, and I don't know when it will be reset, but I'll receive big values frequently. Because of that, in order to store these values, I decided to use two unsigned 64-bit ints.
The following could be a simple version of the counter.
struct Counter
{
    unsigned int elementNr;
    unsigned __int64 totalLen1;
    unsigned __int64 totalLen2;

    void UpdateCounter(int len1, int len2)
    {
        if(len1 > 0 && len2 > 0)
        {
            ++elementNr;
            totalLen1 += len1;
            totalLen2 += len2;
        }
    }
};
I know that if a smaller type is cast to a bigger one (e.g. int to long) there should be no issues. However, going from a 32-bit representation to a 64-bit representation and from signed to unsigned at the same time is something new for me.
Reading around, I understood that len1 should first be widened from 32 bits to 64 bits, with sign extension applied. Because unsigned int and signed int have the same rank (Section 4.13), the latter should be converted.
If len1 stores a negative value, converting from signed to unsigned will produce a wrong value, which is why I check for positivity at the beginning of the function. However, for positive values there should be no issues, I think.
For clarity, I could rewrite UpdateCounter(int len1, int len2) like this:
void UpdateCounter(int len1, int len2)
{
    if(len1 > 0 && len2 > 0)
    {
        ++elementNr;
        __int64 tmp = len1;
        totalLen1 += static_cast<unsigned __int64>(tmp);
        tmp = len2;
        totalLen2 += static_cast<unsigned __int64>(tmp);
    }
}
Might there be some side effects that I have not considered?
Is there another, better and safer way to do this?
A little background, just for reference: binary operators such as arithmetic addition work on operands of the same type (the specific CPU instruction to which they are translated depends on the number representation, which must be the same for both instruction operands).
When you write something like this (using fixed width integer types to be explicit):
int32_t a = <some value>;
uint64_t sum = 0;
sum += a;
As you already know, this involves an implicit conversion, more specifically the usual arithmetic conversions driven by integer conversion rank.
So the expression sum += a; is equivalent to sum += static_cast<uint64_t>(a);: a, the operand with the lesser rank, is the one that gets converted.
Let's see what happens in this example:
int32_t a = 60;
uint64_t sum = 100;
sum += static_cast<uint64_t>(a);
std::cout << "a=" << static_cast<uint64_t>(a) << " sum=" << sum << '\n';
The output is:
a=60 sum=160
So all is ok as expected. Let's see what happens when adding a negative number:
int32_t a = -60;
uint64_t sum = 100;
sum += static_cast<uint64_t>(a);
std::cout << "a=" << static_cast<uint64_t>(a) << " sum=" << sum << '\n';
The output is:
a=18446744073709551556 sum=40
The result is 40 as expected: this relies on the two's complement integer representation (note: unsigned integer overflow is not undefined behaviour) and all is ok, of course as long as you ensure that the sum does not become negative.
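As a quick sketch of that caveat (my own example): if the mathematical sum would go below zero, the modular arithmetic produces a huge positive value instead:
#include <cstdint>
#include <iostream>

int main()
{
    uint64_t sum = 40;
    int32_t a = -60;
    sum += static_cast<uint64_t>(a); // mathematically -20, but wraps modulo 2^64
    std::cout << sum << '\n';        // prints 18446744073709551596, i.e. 2^64 - 20
}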
Coming back to your question: you won't have any surprises if you always add positive numbers, or at least ensure that the sum never goes negative... until you reach the maximum representable value std::numeric_limits<uint64_t>::max() (2^64-1 = 18446744073709551615 ~ 1.8E19).
If you keep adding numbers indefinitely, sooner or later you'll reach that limit (this also applies to your counter elementNr).
Adding 2^31-1 (2147483647) every millisecond would overflow the 64-bit unsigned integer after approximately three months, so in a case like this it may be advisable to check:
#include <limits>
//...
void UpdateCounter(const int32_t len1, const int32_t len2)
{
    if( len1 > 0 )
    {
        if( static_cast<decltype(totalLen1)>(len1) <= std::numeric_limits<decltype(totalLen1)>::max() - totalLen1 )
        {
            totalLen1 += len1;
        }
        else
        {
            // Would overflow!!
            // Do something
        }
    }
}
When I have to accumulate numbers and I don't have particular accuracy requirements, I often use double, because the maximum representable value is incredibly high (std::numeric_limits<double>::max() is about 1.79769E+308): to reach overflow I would need to add 2^32-1 = 4294967295 every picosecond for about 1E+279 years.
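A minimal sketch of that approach (my own illustration; an additional caveat not mentioned above is that a double represents integers exactly only up to 2^53, so very large sums silently lose precision):
#include <iostream>

int main()
{
    double total = 0.0;            // huge range, limited precision
    for(int i = 0; i < 1000000; ++i)
        total += 2147483647.0;     // add INT32_MAX a million times: no overflow
    std::cout << total << '\n';    // prints about 2.14748e+15 (still exact, below 2^53)
}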

Multiplying 1 million

Why when you do this:
int a = 1000000 , b = 1000000;
long long product = a * b;
cout<<product;
It gives some random trash value? Why do both a and b need to be long long in order to calculate it?
You are observing the effects of undefined behavior. This:
a * b
is an (arithmetic) expression of type int, as both operands a and b are of type int. Its mathematical result, 1000000000000, does not fit into an int, so evaluating it causes so-called signed integer overflow, which is undefined behavior.
Either cast one of the operands to long long, thus causing the entire expression to become long long, which is sufficiently large to accept the value of 1000000000000:
#include <iostream>
int main()
{
    int a = 1000000, b = 1000000;
    auto product = static_cast<long long>(a) * b;
    std::cout << product;
}
Or define one of the operands as long long:
#include <iostream>
int main()
{
    long long a = 1000000;
    int b = 1000000;
    auto product = a * b;
    std::cout << product;
}
Optionally, use an unsigned type instead if the values can never be negative (note that unsigned long is only guaranteed to be at least 32 bits, while unsigned long long is guaranteed at least 64).
This will work fine too. When you multiply int by int and the result would exceed the int limit, you can make the calculation itself happen in long long by multiplying by 1ll (or 1LL): the 1ll converts the whole expression to long long while the product is being computed.
int a = 1000000 , b = 1000000;
long long product = 1ll * a * b;
cout << product;

Why is "int" not working correctly with "j" but long long is working fine?

This is my code with int j:
void solve(){
    unsigned long long n;
    cin >> n;
    unsigned long long sum = 0;
    int j = 1;
    for(int i = 3; i < n + 1; i += 2){
        sum += ((4*i)-4)*(j);
        j++;
    }
    cout << sum << "\n";
}
Input:
499993
Output:
6229295798864
but it gives the wrong output. Here is my code with long long j, which works fine:
void solve(){
    int n;
    cin >> n;
    unsigned long long sum = 0;
    long long j = 1;
    for(int i = 3; i < n + 1; i += 2){
        sum += ((4*i)-4)*(j);
        j++;
    }
    cout << sum << "\n";
}
Input:
499993
Output:
41664916690999888
In this case the value of j stays well below 499993, which is within int range, but it still doesn't work. Why is this actually happening?
Here is the link to the actual problem, in case you want to have a look.
Notice that the result of ((4*i)-4)*(j) is an int, since both i and j are of type int. The right-hand side is converted to unsigned long long only when ((4*i)-4)*(j) is added to sum. But for a large enough n, the expression ((4*i)-4)*(j) already overflows the int range before that conversion happens.
However, if you change either i or j to unsigned long long, the expression ((4*i)-4)*(j) is evaluated as unsigned long long, safely inside that type's limits.
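For example, a minimal sketch of the second option (my own variant of the question's code), widening i so the product is computed in unsigned long long:
#include <iostream>
using namespace std;

int main(){
    unsigned long long n;
    cin >> n;
    unsigned long long sum = 0;
    int j = 1;
    for(unsigned long long i = 3; i < n + 1; i += 2){
        // i is unsigned long long, so ((4*i)-4)*(j) is evaluated in 64 bits
        sum += ((4*i)-4)*(j);
        j++;
    }
    cout << sum << "\n"; // 41664916690999888 for the input 499993
}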
In the first code snippet, in the expression
((4*i)-4)*(j)
of the assignment statement
sum += ((4*i)-4)*(j);
both operands, ((4*i)-4) and (j), have the type int. So the type of the expression (the common type of the operands) is also int. But an object of type int is not large enough to store the resulting value, so an overflow takes place here.
When j is declared as having the type long long
long long j = 1;
then the common type of the expression above is also long long. It means that, due to the usual arithmetic conversions, the operand ((4*i)-4) is also converted to the type long long. And an object of this type can store the resulting value for the given input.
You can check the maximum values that can be stored in objects of the types int and long long.
Here you are.
#include <iostream>
#include <limits>

int main()
{
    std::cout << "The maximum value of an object of the type int is "
              << std::numeric_limits<int>::max()
              << '\n';
    std::cout << "The maximum value of an object of the type long long is "
              << std::numeric_limits<long long>::max()
              << '\n';
    return 0;
}
The program output might look like
The maximum value of an object of the type int is 2147483647
The maximum value of an object of the type long long is 9223372036854775807

c++ order of precedence - casting before multiplying

In the C++ function below, why is numbers[max_index1] cast to long long and then multiplied by numbers[max_index2]? I would have thought that you'd multiply the numbers and then cast?
Also would it not make more sense to make the vector numbers type long long instead of int therefore the casting wouldn't be necessary?
long long MaxPairwiseProductFast(const vector<int>& numbers) {
    int n = numbers.size();
    int max_index1 = -1;
    for(int i = 0; i < n; i++)
        if((max_index1 == -1) || (numbers[i] > numbers[max_index1]))
            max_index1 = i;
    int max_index2 = -1;
    for(int j = 0; j < n; j++)
        if((numbers[j] != numbers[max_index1]) && ((max_index2 == -1) || (numbers[j] > numbers[max_index2])))
            max_index2 = j;
    return ((long long)(numbers[max_index1])) * numbers[max_index2];
}

int main() {
    int n;
    cin >> n;
    vector<int> numbers(n);
    for (int i = 0; i < n; ++i) {
        cin >> numbers[i];
    }
    long long result = MaxPairwiseProductFast(numbers);
    cout << result << "\n";
    return 0;
}
((long long)(numbers[max_index1])) * numbers[max_index2];
numbers[max_index2] will be converted to long long before the multiplication is performed.
If you multiply two ints and the result overflows, there is nothing you can achieve by casting that result to long long afterwards, so you cast first, then multiply.
Also would it not make more sense to make the vector numbers type long long instead of int, therefore the casting wouldn't be necessary?
If you know that the individual numbers will fit an int, but the result of multiplying two int's can overflow, this will help save space.
I would have thought that you'd multiply the numbers and then cast?
Imagine your two operands have the value std::numeric_limits<int>::max(). This is the biggest value an int can represent, and (since it's a positive integer) the result of squaring this number is even bigger.
When you multiply two int values, the result is also an int. See here (specifically conversions, integral promotion and overflow of signed types).
Since the result is by definition bigger than the largest value you can store in an int, doing this multiplication with ints gives an undefined result. You need to perform the multiplication with a type large enough to store the result.
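To make the difference concrete, here is a small sketch (my own illustration, not from the original answer):
#include <iostream>
#include <limits>

int main()
{
    int a = std::numeric_limits<int>::max(); // 2147483647 with a typical 32-bit int

    // long long bad = a * a;     // would overflow in int first: undefined behavior

    long long good = (long long)a * a; // a is converted first, so the multiplication is done in long long
    std::cout << good << '\n';         // prints 4611686014132420609
}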

Why can't I use a long long int type?

I try
long long int l = 42343254325322343224;
but to no avail. Why does it tell me "integer constant is too long"? I am using the long long int type, which should be able to hold more than 19 digits. Am I doing something wrong here, or is there a special secret I do not know of just yet?
Because, on my x86_64 system, it's more than 2^64:
// 42343254325322343224
// maximum for 8 byte long long int (2^64) 18446744073709551616
// (2^64-1 maximum unsigned representable)
std::cout << sizeof(long long int); // 8
You shouldn't confuse the number of digits with the number of bits necessary to represent a number.
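A quick sketch (mine) showing the two quantities side by side via <limits>:
#include <iostream>
#include <limits>

int main()
{
    // 63 value bits (the sign bit is not counted) for a 64-bit long long int:
    std::cout << std::numeric_limits<long long int>::digits << '\n';
    // ...but only 18 decimal digits are guaranteed to fit:
    std::cout << std::numeric_limits<long long int>::digits10 << '\n';
}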
Take a look at Boost.Multiprecision.
It defines templates and classes to handle larger numbers.
Here is the example from the Boost tutorial:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>
using namespace boost::multiprecision;

int main()
{
    // Do some fixed precision arithmetic:
    int128_t v = 1;
    for(unsigned i = 1; i <= 20; ++i)
        v *= i;
    std::cout << v << std::endl; // prints 20!

    // Repeat at arbitrary precision:
    cpp_int u = 1;
    for(unsigned i = 1; i <= 100; ++i)
        u *= i;
    std::cout << u << std::endl; // prints 100!
}
It seems that the value of the integer literal exceeds the maximum value acceptable for type long long int.
Try the following program to determine the maximum values of the types long long int and unsigned long long int:
#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<long long int>::max() << std::endl;
    std::cout << std::numeric_limits<unsigned long long int>::max() << std::endl;
    return 0;
}
I have gotten the following results at www.ideone.com
9223372036854775807
18446744073709551615
You can compare it with the value you specified
42343254325322343224
Take into account that in the general case there is no need to specify the suffix ll for an integer decimal literal, even one so big that it can be stored only in type long long int. The compiler itself will determine the most appropriate type (int, long int, or long long int) for the integral decimal literal.
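For instance, a minimal sketch (my own) showing the compiler widening an unsuffixed literal's type automatically, assuming a typical platform with a 32-bit int:
#include <iostream>

int main()
{
    // The first literal fits in int; the second doesn't, so the compiler
    // gives it a wider type (long int or long long int) with no suffix needed:
    std::cout << sizeof(2147483647) << '\n';          // typically prints 4
    std::cout << sizeof(9223372036854775807) << '\n'; // typically prints 8
}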