How to calculate pow(2,n) when n exceeds 64 in C++?

So, I am new to programming in C++ and I came across this question where I need to calculate pow(2,n)/2 for n > 64.
I tried using unsigned long long int, but its limit is only 2^64 - 1, so the value doesn't fit. Is there any method to calculate this?
Edit:
1 < n < 10^5
The result of the expression is used in a further calculation.
The question was asked on an online platform, so I can't use libraries like GMP to handle large numbers.
Question
You are given an array A of size N. An element Ai is said to be charged if its value (Ai) is greater than or equal to Ki, where Ki is the total number of subsets of array A that contain the element Ai.
The total charge value of the array is defined as the summation of all charged elements present in the array, mod 10^9 + 7.
Your task is to output the total charge value of the given array.

An important detail here is that you're not being asked to compute 2^n for gigantic n. Instead, you're being asked to compute 2^n mod (10^9 + 7) for large n, and that's a different question.
For example, let's suppose you want to compute 2^70 mod (10^9 + 7). Notice that 2^70 doesn't fit into a 64-bit machine word. However, 2^70 = 2^35 · 2^35, and 2^35 does fit into a 64-bit machine word. Therefore, we could do this calculation to get 2^70 mod (10^9 + 7):
2^70 (mod 10^9 + 7)
= 2^35 · 2^35 (mod 10^9 + 7)
= (2^35 mod 10^9 + 7) · (2^35 mod 10^9 + 7) mod 10^9 + 7
= (34359738368 mod 10^9 + 7) · (34359738368 mod 10^9 + 7) mod 10^9 + 7
= (359738130 · 359738130) mod 10^9 + 7
= 129411522175896900 mod 10^9 + 7
= 270016253
More generally, by using repeated squaring, you can compute 2^n mod (10^9 + 7) for any value of n in a way that nicely fits into a 64-bit integer.
Hope this helps!

The common approach in serious numerical work is to rewrite the formulas. You store log(x) instead of x, and later, when you do need x, it will typically be in a context where you didn't need all those digits anyway.

Related

Evaluating a polynomial to 5 significant figures but only 1 sig fig returned - Maple Programming

For example, a polynomial is defined as follows:
f := (x, y) -> 333.75*y^6 + x^2*(11*x^2*y^2 - y^6 - 12*y^4 - 2) + 5.5*y^8 + 1/2*x/y
In maple, I look to evaluate this to 5 significant figures like so:
evalf[5](f(77617,33096))
And obtain the value 1*10^32.
Why is this not accurate to 5 significant figures? Why does it not come close to the value 7.878 * 10^29 as you increase the number of significant figures requested?
Thanks!
Don't reduce the working precision that low, especially if you are trying to compute an accurate answer (and then round it for convenience).
More importantly, for compound expressions the floating-point working precision (Digits, or the index of an evalf call) is just that: a specification of working precision and not an accuracy request.
By lowering the working precision so much you are seeing greater roundoff error in the floating-point computation.
restart;
f := (x, y) -> 333.75*y^6
               + x^2*(11*x^2*y^2 - y^6 - 12*y^4 - 2)
               + 5.5*y^8 + 1/2*x/y:
for d from 5 to 15 do
    evalf[5](evalf[d](f(77617, 33096)));
end do;
d = 5:   1.*10^32
d = 6:  -3.*10^31
d = 7:   1.*10^30
d = 8:   8.*10^29
d = 9:   7.9*10^29
d = 10:  7.88*10^29
d = 11:  7.878*10^29
d = 12:  7.8784*10^29
d = 13:  7.8785*10^29
d = 14:  7.8785*10^29
d = 15:  7.8785*10^29

How to find weights given a Huffman tree

Huffman's algorithm derives a tree given the weights of the symbols. I want the reverse: given a tree, figure out a set of symbol weights that would generate that tree, or a tree with the same bit lengths for each symbol.
I'm aware that there are multiple sets of weights that generate the same tree, so I imagine that the weights can be given as powers of two, and the longest code could be assigned weight 1.
(Not relevant to the question, but the purpose is to fine-tune the fixed tree used internally by an LZ77-type compression algorithm to code the offsets and lengths, checking whether the current bitlengths are reasonable or adjusting them if not).
You imagine correctly. However the powers of two will result in many ties when executing the Huffman algorithm. The tree you get back may have a different topology than the tree you started with, depending on how the ties are decided. But the bit lengths will all be the same.
Here is an example:
I used these frequencies for the alphabet:
817 A
145 B
248 C
431 D
1232 E
209 F
182 G
668 H
689 I
10 J
80 K
397 L
277 M
662 N
781 O
156 P
9 Q
572 R
628 S
905 T
304 U
102 V
264 W
15 X
211 Y
5 Z
That gave me this tree (image not reproduced here).
Then I assigned powers-of-two frequencies to the symbols per their depths in the tree:
64 A
16 B
32 C
32 D
128 E
16 F
16 G
64 H
64 I
2 J
8 K
32 L
32 M
64 N
64 O
16 P
1 Q
64 R
64 S
128 T
32 U
16 V
32 W
4 X
32 Y
1 Z
Applying Huffman to that, I get a very different tree (again not reproduced here), but one where all of the symbols have the same depth as before.
I'm pretty sure that there's a way to assign frequencies working up from the bottom, making the next thing to add just big enough to assure the right choice. That will also result in lower weights overall than the powers of two, coming closer to a Fibonacci sequence. This is an interesting problem, so now I am tempted to play with it.

Downscale array for decimal factor

Is there an efficient way to downscale the number of elements in an array by a decimal (non-integer) factor?
I want to downsize the elements of one array by a certain factor.
Example:
If I have 10 elements and need to scale down by a factor of 2:
1 2 3 4 5 6 7 8 9 10
scales to
1.5 3.5 5.5 7.5 9.5
by grouping the elements 2 by 2 and taking the arithmetic mean of each group.
My problem is: what if I need to downsize an array with 10 elements to 6 elements? In theory I should group 10/6 ≈ 1.67 elements at a time and find their arithmetic mean, but how do I do that?
Before suggesting a solution, let's define "downsize" in a more formal way. I would suggest this definition:
Downsizing starts with an array a[N] and produces an array b[M] such that the following is true:
M <= N - otherwise it would be upsizing, not downsizing
SUM(b) = (M/N) * SUM(a) - The sum is reduced proportionally to the number of elements
Elements of a participate in computation of b in the order of their occurrence in a
Let's consider your example of downsizing 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 to six elements. The total for your array is 55, so the total for the new array would be (6/10)*55 = 33. We can achieve this total in two steps:
Walk the array a, totaling its elements, until the number of elements consumed reaches the integer part of the N/M fraction (it must be an improper fraction by rule 1 above)
Let's say that a[i] was the last element of a that we could take as a whole in the current iteration. Take the fraction of a[i+1] equal to the fractional part of N/M
Continue to the next element of b, starting with the remaining fraction of a[i+1]
Once you are done, your array b will contain M numbers totaling SUM(a). Walk the array once more, and divide each element by N/M.
Here is how it works with your example:
b[0] = a[0] + (2/3)*a[1] = 2.33333
b[1] = (1/3)*a[1] + a[2] + (1/3)*a[3] = 5
b[2] = (2/3)*a[3] + a[4] = 7.66666
b[3] = a[5] + (2/3)*a[6] = 10.6666
b[4] = (1/3)*a[6] + a[7] + (1/3)*a[8] = 13.3333
b[5] = (2/3)*a[8] + a[9] = 16
--------
Total = 55
Scaling by 6/10 (that is, dividing each element by N/M = 10/6) produces the final result:
1.4 3 4.6 6.4 8 9.6 (Total = 33)
Here is a simple implementation in C++:
#include <vector>
#include <cstddef>

std::vector<double> a = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
std::vector<double> b(6, 0.0);   // pre-sized to M elements, zero-initialized

double need = ((double)a.size()) / b.size();
double have = 0;
size_t pos = 0;
for (size_t i = 0 ; i != a.size() ; i++) {
    if (need >= have + 1) {
        b[pos] += a[i];              // all of a[i] goes to the current element of b
        have++;
    } else {
        double frac = need - have;   // frac is less than 1 because of the "if" condition
        b[pos++] += frac * a[i];     // frac of a[i] goes to the current element of b
        have = 1 - frac;
        b[pos] += have * a[i];       // (1-frac) of a[i] goes to the next element of b
    }
}
for (size_t i = 0 ; i != b.size() ; i++) {
    b[i] /= need;                    // scale each sum back down by N/M
}
Demo.
You will need to resort to some form of interpolation, as the number of elements to average isn't integer.
You can consider computing the prefix sum of the array, i.e.
0 1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9 10
yields by summation
0 1 2 3 4 5 6 7 8 9
1 3 6 10 15 21 28 36 45 55
Then perform linear interpolation to get the intermediate values that you are lacking, at positions 0*, 10/6, 20/6, 30/6*, 40/6, 50/6, 60/6* (those with an asterisk are readily available).
pos:  0  1  10/6  2  3   20/6  4   5   6   40/6   7   8   50/6   9
sum:  1  3  15/3  6  10  35/3  15  21  28  100/3  36  45  145/3  55
Now you get fractional sums by subtracting values in pairs. The first average is
(15/3-1)/(10/6) = 12/5
I can't think of anything in the C++ library that will crank out something like this, all fully cooked and ready to go.
So you'll have to, pretty much, roll up your sleeves and go to work. At this point, the question of what's the "efficient" way of doing it boils down to its very basics. Which means:
1) Calculate how big the output array should be. Based on the description of the issue, you should be able to make that calculation even before looking at the values in the input array. You know the input array's size(), you can calculate the size() of the destination array.
2) So, you resize() the destination array up front. Now, you no longer need to worry about the time wasted in growing the size of the dynamic output array, incrementally, as you go through the input array, making your calculations.
3) So what's left is the actual work: iterating over the input array, and calculating the downsized values.
auto b=input_array.begin();
auto e=input_array.end();
auto p=output_array.begin();
Don't see many other options here, besides brute force iteration and calculations. Iterate from b to e, getting your samples, calculating each downsized value, and saving the resulting value into *p++.

SAS: generate a 5-character ID code where the first 3 must be letters and the last 2 numbers

How can I generate in SAS an ID code with 5 characters (letters & numbers), where the first 3 must be letters and the last 2 must be numbers?
You can create a unique mapping of the integers from 0 to 26^3 * 10^2 - 1 to a string of the format AAA00. This wikipedia page introduces the concept of different numerical bases quite well.
Your map would look something like this
value = 100 * (X * 26^2 + Y * 26^1 + Z * 26^0) + a * 10^1 + b * 10^0
where X, Y & Z are integers between 0 and 25 (which can be represented as the letters of the alphabet), and a & b are integers between 0 and 9.
As an example:
47416 = 100 * (0 * 26^2 + 18 * 26^1 + 6 * 26^0) + 1 * 10^1 + 6 * 10^0
Using:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
You get:
47416 -> [0] [18] [6] (1) (6)
A S G 1 6
So 47416 can be represented as ASG16.
To do this programmatically you will need to step through your number, splitting it into quotient and remainder through division by your bases (10 and 26), storing the remainder as part of your output and using the quotient for the next iteration.
You will probably want to use these functions:
mod() Modulo function to get the remainder from division
floor() Flooring function, which returns the rounded-down integer part of a real number
A couple of similar (but slightly simpler) examples to get you started can be found here.
Have a go, and if you get stuck post a new question. You will probably get the best response from SO if you provide a detailed question, code showing your progress, a description of where and why you are stuck, any errors or warnings you are getting and some sample data.

Decompose integers larger than 100 digits [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
X and Y are integers larger than 100 digits. Find the integer P within the range [X, Y[ that has the "best" prime decomposition (i.e. the decomposition with the most unique prime factors).
What I've done is just check primality and decompose each number in the range, then find the number that respects the rule. Is there any other way to do this?
An example on small integers
Edit:
In the above example, 123456 is decomposed to
2^6 * 3^1 * 643^1, that's 2 * 2 * 2 * 2 * 2 * 2 * 3 * 643 but only 3 unique factors.
While the answer, 123690, is decomposed to 6 unique factors
2^1 * 3^1 * 5^1 * 7^1 * 19^1 * 31^1.
The answer to questions about enumerating prime numbers is always to find a way to solve the problem using a sieve; in your case, you are looking for "anti-prime" numbers with a large number of factors, but the principle still applies.
The key to this question is that, for most numbers, most of the factors are small. Thus, my suggestion is to set up a sieve for the range X to Y, containing integers all initialized to zero. Then consider all the primes less than some limit, as large as convenient, but obviously much smaller than X. For each prime, add 1 to each element of the sieve that is a multiple of the prime. After sieving with all the primes, the sieve location with the largest count corresponds to the number between X and Y that has the most distinct prime factors.
Let's consider an example: take the range 100 to 125 and sieve with the primes 2, 3, 5 and 7. You'll get something like this:
100 2 5
101 (101)
102 2 3 (17)
103 (103)
104 2 (13)
105 3 5 7
106 2 (53)
107 (107)
108 2 3
109 (109)
110 2 5 (11)
111 3 (37)
112 2 7
113 (113)
114 2 3 (19)
115 5 (23)
116 2 (29)
117 3 (13)
118 2 (59)
119 7 (17)
120 2 3 5
121 (11)
122 2 (61)
123 3 (41)
124 2 (31)
125 5
So the winners are 105 and 120, each having three prime factors; you'll have to decide for yourself what to do with ties. Note that some factors are missed: 11 divides 110 and 121, 13 divides 104 and 117, 17 divides 102 and 119, 19 divides 114, 23 divides 115, 29 divides 116, 31 divides 124, 37 divides 111, 41 divides 123, 53 divides 106, 59 divides 118, 61 divides 122, and of course 101, 103, 107, 109, and 113 are prime. That means 102, 110 and 114 also tie for the lead, each having three prime factors. So this algorithm isn't perfect, but for X and Y in the hundred-digit range, and assuming you sieve by the primes to a million or ten million, it is unlikely you will substantially miss the answer.
Good question. Look for it soon at my blog.
Take the list of all primes in order (2, 3, 5, 7, ...) and start multiplying them (2 * 3 * 5 * ...) until you get a number >= X. Call this number P'. If it's <= Y, you're done: P = P'. If not, start computing P'/2, P'/3, P'/5, etc., looking for a number in [X, Y]. If you don't find one and get to a number < X, try multiplying the next prime into P' and continuing. If this still fails, then the range [X, Y] is pretty small, so fall back to the method of factoring all the numbers in that range.
For a small range (Y-X is small), allocate an array of size Y-X+1, zero it, then for all primes <= Y-X, add one to the array elements corresponding to multiples of the prime (a simple sieve). Then search for the element with the largest total. If that total n is such that (Y-X)^n >= X, then that is the answer. If not, continue sieving with primes larger than Y-X until you get to some prime p such that p^n > X for some n in the table...
One of the two above methods should work, depending on how large the range is...