Recently I've been going through some easy project Euler problems and solving them in Ruby and C++. But for Problem 14 concerning the Collatz conjecture, my C++ code went on for about half an hour before I terminated it, though when I translated the code into Ruby, it solved it in nine seconds.
That difference is quite unbelievable to me; I had always been led to believe that C++ was almost always faster than Ruby, especially for mathematical processing.
My code is as follows.
C++:
#include <iostream>
using namespace std;
int main()
{
    int a = 2;
    int b = 2;
    int c = 0;
    while (b < 1000000)
    {
        a = b;
        int d = 2;
        while (a != 4)
        {
            if (a % 2 == 0)
                a /= 2;
            else
                a = 3*a + 1;
            d++;
        }
        if (d > c)
        {
            cout << b << ' ' << d << endl;
            c = d;
        }
        b++;
    }
    cout << c;
    return 0;
}
Run time - I honestly don't know, but it's a really REALLY long time.
and Ruby:
#!/usr/bin/ruby -w
a = 0
b = 2
c = 0
while b < 1000000
  a = b
  d = 2
  while a != 4
    if a % 2 == 0
      a /= 2
    else
      a = 3*a + 1
    end
    d += 1
  end
  if d > c
    p b, d
    c = d
  end
  b += 1
end
p c
Run time - approximately 9 seconds.
Any idea what's going on here?
P.S. the C++ code runs a good deal faster than the Ruby code until it hits 100,000.
You're overflowing int, so the inner loop never terminates. Use int64_t instead of int in your C++ code.
You'll probably need to include <cstdint> (or <stdint.h>) for that.
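For reference, a minimal sketch of that fix; only the type of a changes, the structure of the program in the question stays the same:
#include <iostream>
#include <cstdint>
using namespace std;

int main()
{
    int64_t a;          // 64-bit, so 3*a + 1 stays in range
    int b = 2, c = 0;
    while (b < 1000000)
    {
        a = b;
        int d = 2;
        while (a != 4)
        {
            if (a % 2 == 0)
                a /= 2;
            else
                a = 3*a + 1;
            d++;
        }
        if (d > c)
            c = d;
        b++;
    }
    cout << c << endl;
    return 0;
}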
In your case the problem was a bug in the C++ implementation (numeric overflow).
Note, however, that by trading in some memory you can get the result much faster than you're doing now...
Hint: suppose you find that from number 231 you need 127 steps to end the computation, and suppose that starting from another number you get to 231 after 22 steps... how many more steps do you need to do?
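One possible reading of that hint, as a sketch (the cache layout and cut-off are my own choices, not part of the answer): cache the chain length of every starting value below the limit, and stop walking a chain as soon as it reaches a value whose length is already known.
#include <iostream>
#include <vector>
#include <cstdint>

int main()
{
    const int limit = 1000000;
    std::vector<int> cache(limit, 0);        // cache[i] = chain length of i, 0 = unknown
    cache[1] = 1;
    int best = 0, bestStart = 0;
    for (int start = 2; start < limit; ++start)
    {
        int64_t n = start;                   // 64-bit to avoid the overflow discussed above
        int steps = 0;
        while (n != 1 && (n >= limit || cache[n] == 0))
        {
            n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
            ++steps;
        }
        cache[start] = steps + cache[n];     // reuse the already-known tail
        if (cache[start] > best)
        {
            best = cache[start];
            bestStart = start;
        }
    }
    std::cout << bestStart << ' ' << best << std::endl;
    return 0;
}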
With 32-bit arithmetic, the C++ code overflows on a = 3*a + 1. With signed 32-bit arithmetic, the problem is compounded, because the a /= 2 line will preserve the sign bit.
This makes it much harder for a to ever equal 4, and indeed when b reaches 113383, a overflows and the loop never ends.
With 64-bit arithmetic there is no overflow, because a maxes out at 56991483520, when b is 704511.
Without looking at the math, I speculate that unsigned 32-bit arithmetic will "probably" work, because the multiplication and unsigned division will both work modulo 2^32. And given the short running time of the program, values aren't going to cover too much of the 64-bit spectrum, so if a value is equal to 4 modulo 2^32, it's "probably" actually equal to 4.
I'm having an issue creating a function that checks if a root can be simplified. In this example, I'm trying to simplify the cube root of 108, and the first number that this should work for is 27.
In order to do this, I am calling pow() with the base being the index (in this case 27) and the exponent being 1/power (power is 3 in this instance). I then compare that to the rounded result of pow(index, (1/power)), which should also be 3.
Included is a picture of my problem, but basically, I am getting two answers that are equivalent to 3, yet my program is not recognizing them as equal. It seems to be working elsewhere in my program, but will not work here. Any suggestions as to why?
int inside = insideVal;
int currentIndex = index;
int coeff = co;
double insideDbl = pow(index, (1/(double)power));
double indexDbl = round(pow(index, (1/(double)power)));
cout << insideDbl << " " << indexDbl << endl;
//double newPow = (1/(double)power);
vector<int> storedInts = storeNum;
if(insideDbl == indexDbl){
    if(inside % currentIndex == 0){
        storedInts.push_back(currentIndex);
        return rootNumerator(inside/currentIndex, currentIndex, coeff, power, storedInts);
    }
    else{
        return rootNumerator(inside, currentIndex + 1, coeff, power, storedInts);
    }
}
else if(currentIndex < inside){
    return rootNumerator(inside, currentIndex + 1, coeff, power, storedInts);
}
I tried to add a picture, but my reputation apparently wasn't high enough. In my console, I am getting "3 3" for the line that reads cout<<insideDbl<< " " << indexDbl <<endl;
EDIT:
Alright, so if the answers aren't exact, why does the same type of code work elsewhere in my program? Taking the 4th Root of 16 (which should equal 2) works using this segment of code:
else if( pow(initialNumber, (1/initialPower)) == round(pow(initialNumber, (1/initialPower)))){
    int simplifiedNum = pow(initialNumber, (1/initialPower));
    cout << simplifiedNum;
    Value* simplifiedVal = new RationalNumber(simplifiedNum);
    return simplifiedVal;
}
despite the fact that the conditions are exactly the same as the ones that I'm having trouble with.
Well, you are a victim of finite-precision floating-point arithmetic.
What happened?
This if(insideDbl == indexDbl) is very dangerous and misleading. It is in fact a question of whether (note: I made up the exact numbers, but I can give you precise ones) 3.00000000000001255 is the same as 2.999999999999996234. I put 14 0s and 14 9s, so technically the difference goes beyond the 15 most significant places. This is important.
Now if you write insideDbl == indexDbl, the compiler compares their binary representations, which are clearly different. However, when you simply print them, the default precision is something like 5 or 6 significant digits, so they get rounded and seem to be the same.
How to check it?
Try printing them with:
typedef std::numeric_limits< double > dbl_limits;   // requires <limits>
cout.precision(dbl_limits::max_digits10);
cout << "Does " << insideDbl << " == " << indexDbl << "?\n";
This will set the precision to the number of digits that are necessary to differentiate two numbers. Please note that this is higher than the guaranteed precision of computation! That is the root of the confusion.
I would also encourage reading numeric_limits. Especially about digits10, and max_digits10.
Why sometimes it works?
Because sometimes two algorithms will end up using the same binary representation for the final results, and sometimes they won't.
Also, 2 can be a special case, as I believe it can actually be represented exactly in binary form. I think (but won't stake my head on it) that all powers of 2 (and their sums) can be, like 0.625 = 0.5 + 0.125 = 2^-1 + 2^-3. But please don't take it for granted unless someone else confirms it.
What can you do?
Stick to precise computations, using integers or the like. Or you could assume that everything within 3.0 +/- 10^-10 is actually 3.0 (an epsilon comparison), which is very risky, to say the least, when you do care about precise math.
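As an illustration of an epsilon comparison (the helper name and the 1e-9 tolerance are arbitrary choices of mine, not part of the answer):
#include <cmath>
#include <iostream>

// Treat two doubles as "equal" when they differ by less than a chosen tolerance.
// The tolerance is an arbitrary choice, which is exactly the risk noted above.
bool nearlyEqual(double a, double b, double eps = 1e-9)
{
    return std::fabs(a - b) < eps;
}

int main()
{
    double insideDbl = std::pow(27.0, 1.0 / 3.0);
    double indexDbl  = std::round(insideDbl);
    std::cout << (insideDbl == indexDbl) << ' '           // may print 0
              << nearlyEqual(insideDbl, indexDbl) << '\n'; // prints 1
}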
Tl;dr: You can never compare two floats or doubles for equality, even when mathematically you can prove the mentioned equality, because of the finite precision of computations. That is, unless you are actually interested in the same binary representation of the value, as opposed to the value itself. Sometimes this is the case.
I suspect that you'll do better by computing the prime factorisation of insideVal and taking the product of those primes that appear in a multiple of the root.
For example
108 = 2² × 3³
and hence
∛108 = 3 × ∛(2²)
and
324 = 2² × 3⁴
and hence
∛324 = 3 × ∛(2² × 3)
You can use trial division to construct the factorisation.
Edit: A C++ implementation
First we need an integer overload for pow
unsigned long
pow(unsigned long x, unsigned long n)
{
    unsigned long p = 1;
    while(n != 0)
    {
        if(n % 2 != 0) p *= x;
        n /= 2;
        x *= x;
    }
    return p;
}
Note that this is simply the peasant algorithm applied to powers.
Next we need to compute the prime numbers in sequence
unsigned long
next_prime(const std::vector<unsigned long> &primes)
{
    if(primes.empty()) return 2;
    unsigned long p = primes.back();
    unsigned long i;
    do
    {
        ++p;
        i = 0;
        while(i != primes.size() && primes[i]*primes[i] <= p && p%primes[i] != 0) ++i;
    }
    while(i != primes.size() && primes[i]*primes[i] <= p);
    return p;
}
Note that primes is expected to contain all of the prime numbers less than the one we're trying to find and that we can quit checking once we reach a prime greater than the square root of the candidate p since that could not possibly be a factor.
Using these functions, we can calculate the factor that we can take outside the root with
unsigned long
factor(unsigned long x, unsigned long n)
{
    unsigned long f = 1;
    std::vector<unsigned long> primes;
    unsigned long p = next_prime(primes);
    while(pow(p, n) <= x)
    {
        unsigned long i = 0;
        while(x%p == 0)
        {
            ++i;
            x /= p;
        }
        f *= pow(p, (i/n));
        primes.push_back(p);
        p = next_prime(primes);
    }
    return f;
}
Applying this to your example
std::cout << factor(108, 3) << std::endl; //output: 3
gives the expected result. For another example, try
std::cout << factor(3333960000UL, 4) << std::endl; //output: 30
which you can confirm is correct by noting that
3333960000 = 30⁴ × 4116
and checking that 4116 doesn't have any factor that is a power of 4.
I want to write a program that solves 2^x mod n = 1: n is a given integer, and x is what we need to calculate. I wrote the code below, but it works too slowly for big n. Can you suggest a better approach that solves the problem in less than one second?
here is my code:
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
    long long int n, cntr = 1, cheak;
    cin >> n;
    while (1)
    {
        if (n % 2 == 0)
        {
            break;
        }
        cheak = pow(2, cntr);
        if (cheak % n == 1)
            break;
        cntr++;
    }
    cout << cntr << endl;
}
Some suggested modifications to your current approach: Note: a better approach follows!
Change your long long int to unsigned long long int. This will give you one more bit.
Change while (1) to while (cntr < 64). The size of unsigned long long is likely only 64 bits. (It's guaranteed to be at least 64 bits, but it isn't guaranteed to be any larger.) You would then need to check whether your loop succeeded, however.
Change cheak to calculate 2^cntr as 1ull << cntr. Make sure to include the ull suffix, which says this is an unsigned long long.
The << operator shifts bits to the left. Shifting all the bits to the left by 1 doubles the integer value of the number, assuming no bits are "shifted away" off the left of the value. So, 1 << n will compute 2^n.
The suffix ull indicates an integer constant is an unsigned long long. If you omit this suffix, 1 will be treated as an int, and shift values above 31 will not do what you want. (A short sketch combining these refinements follows below.)
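Putting those refinements together, a minimal sketch of the (still brute-force) loop might look like this; it only illustrates the points above and still fails for exponents beyond 63:
#include <iostream>

int main()
{
    unsigned long long n;
    unsigned long long cntr = 1;
    std::cin >> n;
    while (cntr < 64)
    {
        unsigned long long cheak = 1ull << cntr;   // 2^cntr without calling pow
        if (cheak % n == 1)
            break;
        cntr++;
    }
    if (cntr == 64)
        std::cout << "no answer found within 64 bits" << std::endl;
    else
        std::cout << cntr << std::endl;
}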
However, all of the above are merely refinements on your current approach. It's worth understanding those refinements to better understand the language. They don't, however, look at the bigger picture.
Modular multiplication allows you to find (A * B) mod C as ( (A mod C) * (B mod C) ) mod C. How does that help us here?
We can rewrite the entire algorithm in a way that only limits N and X to the precision of the machine integers, and not 2^N:
#include <iostream>

int main()
{
    unsigned int modulus;
    unsigned int raised = 2;
    int power = 1;
    std::cin >> modulus;
    if (modulus % 2 == 1)
    {
        while (raised % modulus != 1)
        {
            raised = ((unsigned long long)raised * 2) % modulus;
            power++;
        }
        std::cout << power << std::endl;
    }
    else
    {
        std::cout << "modulus must be odd" << std::endl;
    }
}
The cast to unsigned long long above allows modulus to be as large as 2^32 - 1, assuming unsigned int is 32 bits, without the computation overflowing.
With this approach, I was able to very quickly find answers even for very large inputs. For example, 111111111 returns 667332. I verified 2^667332 mod 111111111 == 1 using the arbitrary precision calculator bc.
It's very fast. It computed 2^2323860 mod 4294967293 == 1 in less than 0.07 seconds on my computer.
Epilog: This highlights an important principle in programming: Really, this was a math problem more than a programming problem. Finding an efficient solution required knowing more about the problem domain than it did knowing about C++. The actual C++ code was trivial once we identified the correct mathematical approach.
It often goes this way, whether it's the mathematics or some other algorithmic aspect. And, it shouldn't surprise you to learn that discrete mathematics is where many of our graph and set algorithms come from. The programming language itself is a small piece of the big picture.
For each k between 1 and ceil(sqrt(n)), compute 2^k mod n and 2^(k ceil(sqrt(n))) mod n. Then compute the modular inverse of each 2^k. Sort all of the inverse(2^k)s into an array foo and the 2^(k ceil(sqrt(n))s into an array bar. There will be at least one value in common between the two arrays; find it. Say inverse(2^a) = 2^(b ceil(sqrt(n))). Then 2^(a + b ceil(sqrt(n))) = 1 (mod n).
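For what it's worth, here is one possible reading of that description as a sketch (my own interpretation, not code from the answer): it assumes n is odd so that 2 is invertible modulo n, uses (n+1)/2 as the inverse of 2, and returns some valid exponent, not necessarily the smallest one.
#include <iostream>
#include <unordered_map>
#include <cmath>
#include <cstdint>

// (x * y) % m; fine as long as m fits comfortably in 32 bits
static uint64_t mulmod(uint64_t x, uint64_t y, uint64_t m)
{
    return (x * y) % m;
}

// modular exponentiation by repeated squaring
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1 % m;
    base %= m;
    while (exp > 0)
    {
        if (exp & 1) result = mulmod(result, base, m);
        base = mulmod(base, base, m);
        exp >>= 1;
    }
    return result;
}

int main()
{
    uint64_t n;
    std::cin >> n;
    if (n % 2 == 0) { std::cout << "n must be odd" << std::endl; return 0; }

    uint64_t m = static_cast<uint64_t>(std::sqrt(static_cast<double>(n)));
    while (m * m < n) ++m;                       // m = ceil(sqrt(n))
    uint64_t inv2 = (n + 1) / 2;                 // inverse of 2 modulo an odd n

    std::unordered_map<uint64_t, uint64_t> baby; // inverse(2^a) -> a, for a = 1..m
    uint64_t cur = 1;
    for (uint64_t a = 1; a <= m; ++a)
    {
        cur = mulmod(cur, inv2, n);
        baby.emplace(cur, a);
    }

    uint64_t giantStep = powmod(2, m, n);
    uint64_t g = 1;
    for (uint64_t b = 1; b <= m; ++b)            // giant steps: 2^(b*m) mod n
    {
        g = mulmod(g, giantStep, n);
        auto it = baby.find(g);
        if (it != baby.end())                    // 2^(b*m) == inverse(2^a)
        {
            std::cout << it->second + b * m << std::endl;  // so 2^(a + b*m) == 1 (mod n)
            return 0;
        }
    }
    std::cout << "no solution found" << std::endl;
    return 0;
}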
How's your professor's sense of humor?
#include <iostream>
int main() { std::cout << 0 << '\n'; }
always prints a correct answer to the problem as stated.
pow is quite expensive in calculations, but if you have 2 as its first argument you can do a left shift instead, since shifting left by cntr positions is the same as multiplying by 2 cntr times (use a long long literal so the shift is not done in 32-bit int):
cheak = (1LL << cntr);
I am trying to convert a binary array to decimal in following way:
uint8_t array[8] = {1,1,1,1,0,1,1,1} ;
int decimal = 0 ;
for(int i = 0 ; i < 8 ; i++)
decimal = (decimal << 1) + array[i] ;
Actually I have to convert a 64-bit binary array to decimal, and I have to do it millions of times.
Can anybody help me: is there any faster way to do the above? Or is the above one fine?
Your method is adequate; to call it nice, I would just not mix bitwise operations with the "mathematical" way of converting to decimal, i.e. use either
decimal = decimal << 1 | array[i];
or
decimal = decimal * 2 + array[i];
It is important, before attempting any optimisation, to profile the code. Time it, look at the code being generated, and optimise only when you understand what is going on.
And as already pointed out, the best optimisation is to not do something, but to make a higher level change that removes the need.
However...
Most changes you might want to trivially make here, are likely to be things the compiler has already done (a shift is the same as a multiply to the compiler). Some may actually prevent the compiler from making an optimisation (changing an add to an or will restrict the compiler - there are more ways to add numbers, and only you know that in this case the result will be the same).
Pointer arithmetic may be better, but the compiler is not stupid - it ought to already be producing decent code for dereferencing the array, so you need to check that you have not in fact made matters worse by introducing an additional variable.
In this case the loop count is well defined and limited, so unrolling probably makes sense.
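For illustration only, a fully unrolled version of the loop from the question might look like this; whether it actually helps is exactly the sort of thing you would need to measure:
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t array[8] = {1,1,1,1,0,1,1,1};     // same input as in the question
    int decimal = 0;
    // fully unrolled: the compiler may well do this on its own
    decimal = decimal * 2 + array[0];
    decimal = decimal * 2 + array[1];
    decimal = decimal * 2 + array[2];
    decimal = decimal * 2 + array[3];
    decimal = decimal * 2 + array[4];
    decimal = decimal * 2 + array[5];
    decimal = decimal * 2 + array[6];
    decimal = decimal * 2 + array[7];
    std::cout << decimal << '\n';              // prints 247
}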
Furthermore, it depends on how dependent you want the result to be on your target architecture. If you want portability, it is hard(er) to optimise.
For example, the following produces better code here:
unsigned int x0 = *(unsigned int *)array;
unsigned int x1 = *(unsigned int *)(array+4);
int decimal = ((x0 * 0x8040201) >> 20) + ((x1 * 0x8040201) >> 24);
I could probably also roll a 64-bit version that did 8 bits at a time instead of 4.
But it is very definitely not portable code. I might use that locally if I knew what I was running on and I just wanted to crunch numbers quickly. But I probably wouldn't put it in production code. Certainly not without documenting what it did, and without the accompanying unit test that checks that it actually works.
The binary 'compression' can be generalized as a problem of weighted sum -- and for that there are some interesting techniques.
X mod 255 essentially means summing all the independent 8-bit digits, since 256 mod 255 = 1.
X mod 254 means summing each digit with a doubling weight, since 1 mod 254 = 1, 256 mod 254 = 2, 256*256 mod 254 = 2*2 = 4, etc.
If the encoding was big endian, then *(unsigned long long)array % 254 would produce a weighted sum (with truncated range of 0..253). Then removing the value with weight 2 and adding it manually would produce the correct result:
uint64_t a = *(uint64_t *)array;
return (a & ~256) % 254 + ((a>>9) & 2);
Other mechanism to get the weight is to premultiply each binary digit by 255 and masking the correct bit:
uint64_t a = (*(uint64_t *)array * 255) & 0x0102040810204080ULL; // little endian
uint64_t a = (*(uint64_t *)array * 255) & 0x8040201008040201ULL; // big endian
In both cases one can then take the remainder of 255 (and correct now with weight 1):
return (a & 0x00ffffffffffffff) % 255 + (a>>56); // little endian, or
return (a & ~1) % 255 + (a&1);
For the sceptical mind: I actually did profile the modulus version to be (slightly) faster than iteration on x64.
To continue from the answer of JasonD, parallel bit selection can be iteratively utilized.
But first expressing the equation in full form would help the compiler to remove the artificial dependency created by the iterative approach using accumulation:
ret = ((a[0]<<7) | (a[1]<<6) | (a[2]<<5) | (a[3]<<4) |
(a[4]<<3) | (a[5]<<2) | (a[6]<<1) | (a[7]<<0));
vs.
uint32_t HI = *(uint32_t *)array, LO = *(uint32_t *)&array[4];
LO |= (HI<<4); // The HI dword has a weight 16 relative to Lo bytes
LO |= (LO>>14); // High word has 4x weight compared to low word
LO |= (LO>>9); // high byte has 2x weight compared to lower byte
return LO & 255;
One more interesting technique would be to utilize crc32 as a compression function; then it just happens that the result would be LookUpTable[crc32(array) & 255]; as there is no collision with this given small subset of 256 distinct arrays. However to apply that, one has already chosen the road of even less portability and could as well end up using SSE intrinsics.
You could use accumulate, with a doubling and adding binary operation:
int doubleSumAndAdd(const int& sum, const int& next) {
    return (sum * 2) + next;
}

// std::accumulate (from <numeric>) needs an initial value, 0 here
int decimal = accumulate(array, array + ARRAY_SIZE, 0, doubleSumAndAdd);
This produces big-endian integers, whereas OP code produces little-endian.
Try this; I converted a binary number of up to 1020 bits:
#include <sstream>
#include <string>
#include <math.h>
#include <iostream>
using namespace std;
long binary_decimal(string num) /* Function to convert binary to dec */
{
    long dec = 0, n = 1, exp = 0;
    string bin = num;
    if(bin.length() > 1020){
        cout << "Binary Digit too large" << endl;
    }
    else {
        for(int i = bin.length() - 1; i > -1; i--)
        {
            n = pow(2, exp++);
            if(bin.at(i) == '1')
                dec += n;
        }
    }
    return dec;
}
Theoretically this method would work for a binary number of arbitrary length.
Math:
If you have an equation like this:
x = 3 mod 7
x could be ... -4, 3, 10, 17, ..., or more generally:
x = 3 + k * 7
where k can be any integer. I don't know if a modulo operation is defined in math, but the factor ring certainly is.
Python:
In Python, you will always get non-negative values when you use % with a positive m:
#!/usr/bin/python
# -*- coding: utf-8 -*-
m = 7
for i in xrange(-8, 10 + 1):
print(i % 7)
Results in:
6 0 1 2 3 4 5 6 0 1 2 3 4 5 6 0 1 2 3
C++:
#include <iostream>
using namespace std;
int main(){
    int m = 7;
    for(int i = -8; i <= 10; i++) {
        cout << (i % m) << endl;
    }
    return 0;
}
Will output:
-1 0 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 0 1 2 3
ISO/IEC 14882:2003(E) - 5.6 Multiplicative operators:
The binary / operator yields the quotient, and the binary % operator
yields the remainder from the division of the first expression by the
second. If the second operand of / or % is zero the behavior is
undefined; otherwise (a/b)*b + a%b is equal to a. If both operands are
nonnegative then the remainder is nonnegative; if not, the sign of the
remainder is implementation-defined 74).
and
74) According to work underway toward the revision of ISO C, the
preferred algorithm for integer division follows the rules defined in
the ISO Fortran standard, ISO/IEC 1539:1991, in which the quotient is
always rounded toward zero.
Source: ISO/IEC 14882:2003(E)
(I couldn't find a free version of ISO/IEC 1539:1991. Does anybody know where to get it from?)
The operation seems to be defined like this (following the footnote's preferred rounding): the quotient is rounded toward zero, and the remainder is whatever makes (a/b)*b + a%b equal to a, so it takes the sign of the dividend.
Question:
Does it make sense to define it like that?
What are arguments for this specification? Is there a place where the people who create such standards discuss about it? Where I can read something about the reasons why they decided to make it this way?
Most of the time when I use modulo, I want to access elements of a data structure. In this case, I have to make sure that mod returns a non-negative value. So, for this case, it would be good if mod always returned a non-negative value.
(Another usage is the Euclidean algorithm. As you could make both numbers positive before using this algorithm, the sign of modulo wouldn't matter.)
Additional material:
See Wikipedia for a long list of what modulo does in different languages.
On x86 (and other processor architectures), integer division and modulo are carried out by a single operation, idiv (div for unsigned values), which produces both quotient and remainder (for word-sized arguments, in AX and DX respectively). This is exposed in the C library function div, which the compiler can optimise down to a single instruction!
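A small illustration of that pairing using the standard div function from <cstdlib>:
#include <cstdlib>
#include <iostream>

int main()
{
    std::div_t r = std::div(-8, 7);                  // one division yields both results
    std::cout << r.quot << ' ' << r.rem << '\n';     // prints "-1 -1"
    return 0;
}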
Integer division respects two rules:
Non-integer quotients are rounded towards zero; and
the equation dividend = quotient*divisor + remainder is satisfied by the results.
Accordingly, when dividing a negative number by a positive number, the quotient will be negative (or zero).
So this behaviour can be seen as the result of a chain of local decisions:
Processor instruction set design optimises for the common case (division) over the less common case (modulo);
Consistency (rounding towards zero, and respecting the division equation) is preferred over mathematical correctness;
C prefers efficiency and simplicity (especially given the tendency to view C as a "high level assembler"); and
C++ prefers compatibility with C.
Back in the day, someone designing the x86 instruction set decided it was right and good to round integer division toward zero rather than round down. (May the fleas of a thousand camels nest in his mother's beard.) To keep some semblance of math-correctness, operator REM, which is pronounced "remainder", had to behave accordingly. DO NOT read this: https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/rzatk/REM.htm
I warned you. Later someone doing the C spec decided it would be conforming for a compiler to do it either the right way or the x86 way. Then a committee doing the C++ spec decided to do it the C way. Then later yet, after this question was posted, a C++ committee decided to standardize on the wrong way. Now we are stuck with it. Many a programmer has written the following function or something like it. I have probably done it at least a dozen times.
inline int mod(int a, int b) {int ret = a%b; return ret>=0? ret: ret+b; }
There goes your efficiency.
These days I use essentially the following, with some type_traits stuff thrown in. (Thanks to Clearer for a comment that gave me an idea for an improvement using latter day C++. See below.)
template<class T>
inline T mod(T a, T b) {
    assert(b > 0);
    T ret = a % b;
    return (ret >= 0) ? (ret) : (ret + b);
}

template<>
inline unsigned mod(unsigned a, unsigned b) {
    assert(b > 0);
    return a % b;
}
True fact: I lobbied the Pascal standards committee to do mod the right way until they relented. To my horror, they did integer division the wrong way. So they do not even match.
EDIT: Clearer gave me an idea. I am working on a new one.
#include <type_traits>
#include <cassert>

template<class T1, class T2>
inline T1 mod(T1 a, T2 b) {
    assert(b > 0);
    T1 ret = a % b;
    if constexpr (std::is_unsigned_v<T1>)
    {
        return ret;
    } else {
        return (ret >= 0) ? (ret) : (ret + b);
    }
}
What are arguments for this specification?
One of the design goals of C++ is to map efficiently to hardware. If the underlying hardware implements division in a way that produces negative remainders, then that's what you'll get if you use % in C++. That's all there is to it really.
Is there a place where the people who create such standards discuss about it?
You will find interesting discussions on comp.lang.c++.moderated and, to a lesser extent, comp.lang.c++
Others have described the why well enough; unfortunately, the question that asks for a solution is marked as a duplicate of this one, and a comprehensive answer on that aspect seems to be missing. There seem to be two commonly used general solutions, plus one special case I would like to include:
// 724ms
inline int mod1(int a, int b)
{
    const int r = a % b;
    return r < 0 ? r + b : r;
}

// 759ms
inline int mod2(int a, int b)
{
    return (a % b + b) % b;
}

// 671ms (see NOTE1!)
inline int mod3(int a, int b)
{
    return (a + b) % b;
}

int main(int argc, char** argv)
{
    volatile int x;
    for (int i = 0; i < 10000000; ++i) {
        for (int j = -argc + 1; j < argc; ++j) {
            x = modX(j, argc);
            if (x < 0) return -1; // Sanity check
        }
    }
}
NOTE1: This is not generally correct (i.e. if a < -b). The reason I included it is because almost every time I find myself taking the modulus of a negative number is when doing math with numbers that are already modded, for example (i1 - i2) % n where the 0 <= iX < n (e.g. indices of a circular buffer).
As always, YMMV with regards to timing.
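As a tiny illustration of that circular-buffer case (a hypothetical helper, not taken from the answer above): because the index is already in [0, n), adding n before the % keeps the intermediate value non-negative.
// Hypothetical helper: step one slot backwards in a ring buffer of size n,
// assuming i is already in [0, n).
inline int prev_index(int i, int n) { return (i - 1 + n) % n; }
// e.g. prev_index(0, 8) == 7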
I'm working on Project Euler to brush up on my C++ coding skills in preparation for the programming challenge(s) we'll be having this next semester (since they don't let us use Python, boo!).
I'm on #16, and I'm trying to find a way to keep real precision for 2¹⁰⁰⁰.
For instance:
#include <cstdio>
#include <cmath>

int main(){
    double num = pow(2, 1000);
    printf("%.0f", num);
    return 0;
}
prints
10715086071862673209484250490600018105614050000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Which is missing most of the numbers (from python):
>>> 2**1000
10715086071862673209484250490600018105614048117055336074437503883703510511249361224931983788156958581275946729175531468251871452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062914571196477686542167660429831652624386837205668069376L
Granted, I can write the program as a Python one-liner
sum(int(_) for _ in str(2**1000))
that gives me the result immediately, but I'm trying to find a way to do it in C++. Any pointers? (haha...)
Edit:
Something outside the standard libs is worthless to me - only dead-tree code is allowed in those contests, and I'm probably not going to print out 10,000 lines of external code...
If you just keep track of each digit in a char array, this is easy. Doubling a digit is trivial, and if the result is greater than 10 you just subtract 10 and add a carry to the next digit. Start with a value of 1, loop over the doubling function 1000 times, and you're done. You can predict the number of digits you'll need with ceil(1000*log(2)/log(10)), or just add them dynamically.
Spoiler alert: it appears I have to show the code before anyone will believe me. This is a simple implementation of a bignum with two functions, Double and Display. I didn't make it a class in the interest of simplicity. The digits are stored in a little-endian format, with the least significant digit first.
#include <iostream>
#include <vector>

typedef std::vector<char> bignum;

void Double(bignum & num)
{
    int carry = 0;
    for (bignum::iterator p = num.begin(); p != num.end(); ++p)
    {
        *p *= 2;
        *p += carry;
        carry = (*p >= 10);
        *p -= carry * 10;
    }
    if (carry != 0)
        num.push_back(carry);
}

void Display(bignum & num)
{
    for (bignum::reverse_iterator p = num.rbegin(); p != num.rend(); ++p)
        std::cout << static_cast<int>(*p);
}

int main(int argc, char* argv[])
{
    bignum num;
    num.push_back(1);
    for (int i = 0; i < 1000; ++i)
        Double(num);
    Display(num);
    std::cout << std::endl;
    return 0;
}
You need a bignum library, such as this one.
You probably need a pointer here (pun intended)
In C++ you would need to create your own bigint lib in order to do the same as in Python.
C/C++ operates on fundamental data types. You are using a double, which has only 64 bits to store a 1000-bit number. A double uses 52 bits for the significand and 11 bits for the exponent.
The only solution for you is to either use a library like bignum mentioned elsewhere or to roll out your own.
UPDATE: I just browsed to the Project Euler site and found that Problem 13 is about summing large integers. The iterated method can become very tricky after a short while, so I'd suggest using the code from Problem 13, which you should already have, to solve this, because 2**N = 2**(N-1) + 2**(N-1). A rough sketch of that idea follows below.
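Here is a sketch of that suggestion (my own code, assuming the Problem 13 approach of storing decimal digits least-significant first): add the number to itself 1000 times, then sum the digits.
#include <iostream>
#include <vector>

// add two numbers stored as decimal digits, least significant digit first
std::vector<int> add(const std::vector<int>& x, const std::vector<int>& y)
{
    std::vector<int> sum;
    int carry = 0;
    for (size_t i = 0; i < x.size() || i < y.size() || carry != 0; ++i)
    {
        int d = carry;
        if (i < x.size()) d += x[i];
        if (i < y.size()) d += y[i];
        sum.push_back(d % 10);
        carry = d / 10;
    }
    return sum;
}

int main()
{
    std::vector<int> n(1, 1);                // 2^0 = 1
    for (int i = 0; i < 1000; ++i)
        n = add(n, n);                       // 2^N = 2^(N-1) + 2^(N-1)
    int digitSum = 0;
    for (size_t i = 0; i < n.size(); ++i)
        digitSum += n[i];
    std::cout << digitSum << std::endl;      // the answer to Problem 16
    return 0;
}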
Using bignums is cheating and not a solution. Also, you don't need to compute 2**1000 or anything like that to get to the result. I'll give you a hint:
Take the first few values of 2**N:
1 2 4 8 16 32 64 128 256 ...
Now write down for each number the sum of its digits:
1 2 4 8 7 5 10 11 13 ...
You should notice that (x~=y means x and y have the same sum of digits)
1+1=2, 1+(1+2)=4, 1+(1+2+4)=8, 1+(1+2+4+8)=16~=7, 1+(1+2+4+8+7)=23~=5
Now write a loop.
Project Euler = Think before Compute!
If you want to do this sort of thing on a practical basis, you're looking for an arbitrary precision arithmetic package. There are a number around, including NTL, lip, GMP, and MIRACL.
If you're just after something for Project Euler, you can write your own code for raising to a power. The basic idea is to store your large number in quite a few small pieces, and implement your own carries, borrows, etc., between the pieces.
Isn't pow(2, 1000) just 2 left-shifted 1000 times, essentially? It should have an exact binary representation in a double float. It shouldn't require a bignum library.