I have an array of 20 numbers (64-bit ints), something like 10, 25, 36, 43, ..., 118, 121 (sorted numbers).
Now, I have to give millions of numbers as input (say 17, 30).
What I have to give as output is:
for Input 17:
17 is < 25 and > 10. So, output will be index 0.
for Input 30:
30 is < 36 and > 25. So, output will be index 1.
Now, I can do it using linear search or binary search. Is there any way to do it faster? Input numbers are random (Gaussian).
If you know the distribution, you can direct your search in a smarter way.
Here is the rough idea of this variant of binary search:
Assume that your data is expected to be distributed uniformly on 0 to 100.
If you observe the value 0, you start at the beginning. If your value is 37, you start at 37% of the array you have. This is the key difference from binary search: you don't always start at 50%, but you try to start in the expected "optimal" position.
This also works for Gaussian distributed data, if you know the parameters (If you don't know them, you can still estimate them easily from the observed data). You would compute the Gaussian CDF, and this yields the place to start your search.
Now for the next step, you need to refine your search. At the position you looked at, there was a different value. You can use this to re-estimate the position to continue searching.
Now even if you don't know the distribution this can work very well. Say you started with a binary search and have already looked at the objects at 50% and 25%. Instead of going to 37.5% next, you can make a better guess if your query value was, e.g., very close to the 50% entry. Unless your data set is very "clumpy" (and your queries are not correlated to the data), this should still outperform "naive" binary search that always splits in the middle.
http://en.wikipedia.org/wiki/Interpolation_search
The expected average runtime is apparently O(log log n), according to Wikipedia.
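For illustration, here is a minimal C++ sketch of plain interpolation search over the sorted array (my own code and names, not from the linked article; for Gaussian data you would replace the linear position guess with the CDF estimate described above). It assumes distinct endpoints and returns the bin index from the question:

    #include <cstdint>
    #include <vector>

    // Returns the largest index i with a[i] <= x, or -1 if x < a[0].
    // Assumes a is sorted with distinct endpoints.
    int interpolation_search(const std::vector<int64_t>& a, int64_t x) {
        int lo = 0, hi = (int)a.size() - 1;
        if (x < a[lo]) return -1;
        if (x >= a[hi]) return hi;
        while (hi - lo > 1) {
            // Guess the position from the value, not from the midpoint.
            double frac = double(x - a[lo]) / double(a[hi] - a[lo]);
            int mid = lo + (int)(frac * (hi - lo));
            if (mid <= lo) mid = lo + 1;   // clamp so we always make progress
            if (mid >= hi) mid = hi - 1;
            if (a[mid] <= x) lo = mid; else hi = mid;
        }
        return lo;  // a[lo] <= x < a[lo+1]
    }

For the question's array, x = 17 yields 0, matching the expected output.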
Update: since someone complained that with just 20 numbers things are different: yes, they are. With 20 numbers linear search may be best, because of CPU caching. Linear scanning through a small amount of memory - one that fits into the CPU cache - can be really fast, in particular with an unrolled loop. But that case is quite pathetic and uninteresting IMHO.
I believe the best option for you is to use upper_bound - it will find the first value in the array bigger than the one you are searching for.
Still, depending on the problem you are trying to solve, lower_bound or binary_search may be what you need.
All of these algorithms have logarithmic complexity.
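For example (my own sketch, with the question's array abridged):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<long long> a = {10, 25, 36, 43, 118, 121};  // abridged from the question
        long long x = 17;
        auto it = std::upper_bound(a.begin(), a.end(), x);  // first element > 17, i.e. 25
        std::cout << (it - a.begin()) - 1 << "\n";          // prints 0, the bin index from the question
        return 0;
    }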
Nothing will be better than binary search, since your array is sorted.
Linear search is O(n), while binary search is O(log n).
Edit:
Interpolation search makes an extra assumption (the elements have to be uniformly distributed) and does more comparisons per iteration.
You can try both and empirically measure which is better for your case
In fact, this problem is quite interesting because it is a re-cast of an information theoretic framework.
Given 20 numbers, you will end up with 21 bins (including < first one and > last one).
For each incoming number, you are to map to one of these 21 bins. This mapping is done by comparison. Each comparison gives you 1 bit of information (< or >= -- two states).
So suppose the incoming number requires 5 comparisons in order to figure out which bin it belongs to, then it is equivalent to using 5 bits to represent that number.
Our goal is to minimize the number of comparisons! We have 1 million numbers each belonging to 21 ordered code words. How do we do that?
This is exactly an entropy compression problem.
Let a[1], ..., a[20] be your 20 numbers.
Let p(n) = pr { incoming number is < n }.
Build the decision tree as follows.
Step 1.
let i = argmin_j |p(a[j]) - 0.5|
define p0(n) = p(n) / p(a[i]) for n < a[i] (the CDF conditioned on the number being < a[i]), and p0(n) = 0 for n >= a[i] (so those positions are never selected).
define p1(n) = (p(n) - p(a[i])) / (1 - p(a[i])) for n >= a[i] (the CDF conditioned on the number being >= a[i]), and p1(n) = 0 for n < a[i].
Step 2.
let i0 = argmin_j |p0(a[j]) - 0.5| (this lands among the j < i)
let i1 = argmin_j |p1(a[j]) - 0.5| (this lands among the j > i)
and so on...
and by the time we're done, we end up with:
i, i0, i1, i00, i01, i10, i11, etc.
each one of these i gives us the comparison position.
so now our algorithm is as follows:
let u = input number.
if (u < a[i]) {
    if (u < a[i0]) {
        if (u < a[i00]) {
        } else {
        }
    } else {
        if (u < a[i01]) {
        } else {
        }
    }
} else {
    similarly...
}
so the i's define a tree, and the if statements are walking the tree. we can just as well put it into a loop, but it's easier to illustrate with a bunch of if.
so for example, if you knew that your data were uniformly distributed between 0 and 2^63, and your 20 numbers were
0, 1, 2, 3, ..., 19
then
i = 20 (notice that there is no i1)
i0 = 10
i00 = 5
i01 = 15
i000 = 3
i001 = 7
i010 = 13
i011 = 17
i0000 = 2
i0001 = 4
i0010 = 6
i0011 = 9
i00110 = 8
i0100 = 12
i01000 = 11
i0110 = 16
i0111 = 19
i01110 = 18
ok so basically, the comparison would be as follows:
if (u < a[20]) {
    if (u < a[10]) {
        if (u < a[5]) {
        } else {
            ...
        }
    } else {
        ...
    }
} else {
    return 21
}
so note here, that I am not doing binary search! I am first checking the end point. why?
there is 100*((2^63)-20)/(2^63) percent chance that it will be greater than a[20]. this is basically like 99.999999999999999783159565502899% chance!
so this algorithm, as is, has an expected number of comparisons of about 1 for a dataset with the properties specified above! (this is better than log log :p)
notice what I have done here is I am basically using fewer compares to find numbers that are more probable and more compares to find numbers that are less probable. for example, the number 18 requires 6 comparisons (1 more than needed with binary search); however, the numbers 20 to 2^63 require only 1 comparison. this same principle is used for lossless (entropy) data compression -- use fewer bits to encode code words that appear often.
building the tree is a one time process and you can use the tree 1 million times later.
the question is... when does this decision tree become binary search? homework exercise! :p the answer is simple. it's similar to when you can't compress a file any more.
ok, so I didn't pull this out of my behind... the basis is here:
http://en.wikipedia.org/wiki/Arithmetic_coding
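to make the construction concrete, here is a rough C++ sketch of building and walking such a tree (my own code and names, not from the linked article; it assumes you can tabulate cdf[j] = p(a[j]) and that those values are strictly increasing):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Node { int cut; Node* lt; Node* ge; };

    // Build over cut indices [lo, hi]; plo/phi are the CDF values at the
    // boundaries of the region this subtree is responsible for.
    Node* build(const std::vector<double>& cdf, int lo, int hi,
                double plo, double phi) {
        if (lo > hi) return nullptr;    // no cuts left: the bin is decided
        int best = lo;
        for (int j = lo; j <= hi; ++j)  // argmin |P(x < a[j] | region) - 0.5|
            if (std::fabs((cdf[j] - plo) / (phi - plo) - 0.5) <
                std::fabs((cdf[best] - plo) / (phi - plo) - 0.5))
                best = j;
        Node* n = new Node{best, nullptr, nullptr};
        n->lt = build(cdf, lo, best - 1, plo, cdf[best]);  // x < a[best]
        n->ge = build(cdf, best + 1, hi, cdf[best], phi);  // x >= a[best]
        return n;
    }

    // Walk the tree; returns the number of cuts <= u, i.e. the bin index 0..k.
    int query(const Node* n, const std::vector<int64_t>& a, int64_t u) {
        int bin = 0;
        while (n) {
            if (u < a[n->cut]) n = n->lt;
            else { bin = n->cut + 1; n = n->ge; }
        }
        return bin;
    }

the root is build(cdf, 0, k-1, 0.0, 1.0), built once; query() is then called a million times.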
You could perform binary search using std::lower_bound and std::upper_bound. These give you back iterators, so you can use std::distance to get an index.
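For example (my own sketch, with the question's array abridged):

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <vector>

    int main() {
        std::vector<long long> a = {10, 25, 36, 43, 118, 121};  // abridged from the question
        auto it = std::lower_bound(a.begin(), a.end(), 30LL);   // first element >= 30
        std::cout << std::distance(a.begin(), it) << "\n";      // prints 2, the index of 36
        // the question's bin numbering is one less (1 here); decide separately how
        // to treat exact matches, where lower_bound and upper_bound differ.
        return 0;
    }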
Related
Given a number N (<=10000), find the minimum number of primatic numbers which sum up to N.
A primatic number refers to a number which is either a prime or a prime raised to the power of itself, i.e. prime^prime, e.g. 4, 27, etc.
I tried to find all the primatic numbers using a sieve and then stored them in a vector (code below), but now I can't see how to find the minimum number of primatic numbers that sum to a given number.
Here's my sieve:
#include <algorithm>
#include <vector>
#define MAX 10000

typedef long long int ll;

// Modular exponentiation helper (not used by the sieve below).
ll modpow(ll a, ll n, ll temp) {
    ll res = 1, y = a;
    while (n > 0) {
        if (n & 1)
            res = (res * y) % temp;
        y = (y * y) % temp;
        n /= 2;
    }
    return res % temp;
}

int isprimeat[MAX + 20];  // 0 = prime (later: primatic), 1 = composite
std::vector<int> primeat;

// Finding all prime numbers till 10000
void seive()
{
    ll i, j;
    isprimeat[0] = 1;
    isprimeat[1] = 1;
    for (i = 2; i <= MAX; i++) {
        if (isprimeat[i] == 0) {
            for (j = i * i; j <= MAX; j += i) {
                isprimeat[j] = 1;
            }
        }
    }
    for (i = 2; i <= MAX; i++) {
        if (isprimeat[i] == 0) {
            primeat.push_back(i);
        }
    }
    // 4 = 2^2, 27 = 3^3 and 3125 = 5^5 are the only prime^prime values <= 10000.
    isprimeat[4] = isprimeat[27] = isprimeat[3125] = 0;
    primeat.push_back(4);
    primeat.push_back(27);
    primeat.push_back(3125);
}

int main()
{
    seive();
    std::sort(primeat.begin(), primeat.end());
    return 0;
}
One method could be to store all primatics less than or equal to N in a sorted list - call this list L - and recursively search for the shortest sequence. The easiest approach is "greedy": pick the largest spans / numbers as early as possible.
for N = 14 you'd have L = {2,3,4,5,7,11,13}, so you'd want to make an algorithm / process that tries these sequences:
13 is too small
13 + 13 -> 13 + 2 will be too large
11 is too small
11 + 11 -> 11 + 4 will be too large
11 + 3 is a match.
You can continue the process by making the search function recurse each time it needs another primatic in the sum, which you would aim to have occur a minimum number of times. To do so you can pick the largest -> smallest primatic in each position (the 1st, 2nd etc primatic in the sum), and include another number in the sum only if the primatics in the sum so far are small enough that an additional primatic won't go over N.
I'd have to make a working example to find a small enough N that doesn't result in just 2 numbers in the sum. Note that any natural number can be expressed as the sum of at most 4 squares of natural numbers, and L is denser than the set of squares, so I'd think it rare for the result to be 3 or more for any N you'd want to compute by hand.
Dynamic Programming approach
I have to clarify that 'greedy' is not the same as 'dynamic programming'; greedy can give sub-optimal results. This does have a DP solution though. Again, I won't write the final process in code but explain it as a point of reference to make a working DP solution from.
To do this we need to build up solutions from the bottom up. What you need is a structure that can store known solutions for all numbers up to some N, this list can be incrementally added to for larger N in an optimal way.
Consider that for any N, if it's primatic then the number of terms for N is just 1. This applies for N = 2, 3, 4, 5, 7, 11, 13, 17, 19. The number of terms for all other N must be at least two, which means either it's a sum of two primatics or a sum of a primatic and some other N.
The first few examples that aren't trivial:
6 - can be either 2+4 or 3+3; all the terms here are themselves primatic, so the minimum number of terms for 6 is 2.
10 - can be 2+8, 3+7, 4+6 or 5+5. However 8 and 6 are not primatic; taking those solutions out still leaves two-term solutions (3+7, 5+5).
12 - can be 2+10, 3+9, 4+8, 5+7 or 6+6. Of these only 5+7 has both terms primatic, so again 2 terms is the minimum.
14 - ditto, there exist two-primatic solutions: 3+11, 7+7.
The structure for storing all of these solutions needs to be able to iterate across solutions of equal rank / number of terms. You already have a list of primatics; this is also the list of solutions that need only one term.
Sol[term_length] = list(numbers). You will also need a function / cache to look up some N's shortest term length, e.g. S(N) = term_length iff N is in Sol[term_length].
Sol[1] = {2,3,4,5 ...} and Sol[2] = {6,10,12,14 ...} and so on for Sol[3] and onwards.
Any one-term solution is just a primatic N, found in Sol[1]. Any solution requiring two primatics will be found in Sol[2]. Any solution requiring 3 will be in Sol[3], etc.
What you need to recognize here is that a number N with S(N) = 3 can be expressed as Sol[1][a] + Sol[1][b] + Sol[1][c] for some primatics a, b, c, but it can also be expressed as Sol[1][a] + Sol[2][d], since every member of Sol[2] is expressible as Sol[1][x] + Sol[1][y].
This algorithm will in effect search Sol[1] for a given N, then look in Sol[1] + Sol[K] with increasing K, but to do this you will need S and Sol structures roughly in the form shown here (or able to be accessed / queried in a similar manner).
Working Example
Using the above as a guideline I've put this together quickly; it even shows which multi-term sum it uses.
https://ideone.com/7mYXde
I can explain the code in-depth if you want but the real DP section is around lines 40-64. The recursion depth (also number of additional terms in the sum) is k, a simple dual-iterator while loop checks if a sum is possible using the kth known solutions and primatics, if it is then we're done and if not then check k+1 solutions, if any. Sol and S work as described.
The only confusing part might be the use of reverse iterators, it's just to make != end() checking consistent for the while condition (end is not a valid iterator position but begin is, so != begin would be written differently).
Edit - FYI, the first number that takes at least 3 terms is 959 - had to run my algorithm to 1000 numbers to find it. It's summed from 6 + 953 (primatic), no matter how you split 6 it's still 3 terms.
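For reference, here is a minimal C++ sketch of the same DP in its simpler "coin change" formulation (my own code and names; equivalent in effect to the Sol/S structure described above, not the ideone code):

    #include <iostream>
    #include <vector>

    int main() {
        const int N = 1000;
        // Sieve the primes up to N, then add the prime^prime values that fit
        // (4 = 2^2 and 27 = 3^3; 5^5 = 3125 > 1000).
        std::vector<bool> composite(N + 1, false);
        std::vector<int> primatics;
        for (int i = 2; i <= N; ++i) {
            if (composite[i]) continue;
            primatics.push_back(i);
            for (int j = i * i; j <= N; j += i) composite[j] = true;
        }
        for (int p : {4, 27}) primatics.push_back(p);

        // dp[n] = minimum number of primatics summing to n (unbounded coin change).
        const int INF = N + 1;
        std::vector<int> dp(N + 1, INF);
        dp[0] = 0;
        for (int n = 1; n <= N; ++n)
            for (int p : primatics)
                if (p <= n && dp[n - p] + 1 < dp[n]) dp[n] = dp[n - p] + 1;

        std::cout << dp[959] << "\n";  // prints 3, matching the edit above
        return 0;
    }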
I am trying to create a 2-SUM algorithm that, given a set of around 1 million integers, finds the number of target values t (-10,000 <= t <= 10,000) that are formed by the sum of any two values x, y in the set.
I have no problem with 2-SUM for a single value of t, just by using hash-tables and finding for each hash entry x in the table if there exists another entry t-x. This will run in O(N) time.
But, now I have to find multiple values of t, from -10000 to 10000. If I just use a plain for-loop, then the runtime will now be O(N^2).
I have tried this code, which brute-forces through all t from -10000 to 10000, but it simply runs too slow (~1hr. to execute).
So, my question is, are there any hints for better ways to handle the ~20,001 targets without having to brute-force through all 20,001 values?
Here is the code I used for my O(N^2) solution:
for (long long t = -10000; t <= 10000; t++)
{
    for (unordered_set<long long>::iterator it = S.begin(); it != S.end(); ++it)
    {
        long long value = *it;
        if ((S.find(t - value) != S.end()) && (t - value != value))
        {
            values++;
            //cout << "Found pair target " << t << " " << value << " " << t-value << '\n';
            break;
        }
    }
}
A better approach would be to use an ordered set (or an ordered array / list if you care about duplicates).
Then, you search for a matching pair for your values using the following method:
1. For each Val (-10000, -9999, ...):
2. Let iS be 0.
3. Let iE be length - 1.
4. While (S[iS] + S[iE]) != Val:
4.1 If (S[iS] + S[iE]) > Val: binary search in (iS -> iE - 1) for the maximum value lower than or equal to (Val - S[iS]), and set iE to match.
4.2 If (S[iS] + S[iE]) < Val: binary search in (iS + 1 -> iE) for the minimum value higher than or equal to (Val - S[iE]), and set iS to match.
4.3 If iS > iE, Val doesn't exist.
This gives you O(n log n) for sorting and O(m n) for searching (m is 20001 for -10000 -> 10000), although realistically the searching will perform much better than O(m n). The entire solution is O(m n), since m > log n.
It can be further optimized by using a map of matched values: on each iteration, after a match is found, advance iE until (S[iS] + S[iE]) > maxValue (10000), marking all sums along the way as found; then there are fewer iterations in the outer loop.
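If the binary-search refinement is more than you need, the plain two-pointer walk alone already works per target. Here is a minimal C++ sketch (my own code; the data is a made-up example, and the refinement from steps 4.1/4.2 is omitted):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<long long> s = {-11000, -3, 2, 5, 9997, 10003};  // made-up data
        std::sort(s.begin(), s.end());
        s.erase(std::unique(s.begin(), s.end()), s.end());  // distinct values only

        int targets = 0;
        for (long long t = -10000; t <= 10000; ++t) {
            int i = 0, j = (int)s.size() - 1;
            while (i < j) {  // i < j keeps x and y distinct, as in the question
                long long sum = s[i] + s[j];
                if (sum == t) { ++targets; break; }
                if (sum < t) ++i; else --j;
            }
        }
        std::cout << targets << "\n";
        return 0;
    }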
As other people have already suggested, if you want a "best effort" approach (meaning that it may not be the best, but still good enough), you can sort your data and use std::lower_bound for searching.
The std::lower_bound function is implemented as a binary search, which means that in the worst case, for 1000000 integers, you'll need about 20 comparisons to find a match. If you do this inside a -10000..10000 loop you'll get 20000*20 = 400000 comparisons, which should take far less than an hour (my guess is a few minutes, depending on CPU power).
A find on an unordered_set degenerates to a linear search in the worst case, which means you could be looking at 20000*1000000 = 20000000000 comparisons - 50000 times worse.
You could improve on a binary search (e.g. by seeing how "close" you are to your target and switching to linear search from there if you're under a specific difference in value), but I don't think that would speed up the search that much.
There are other ways, probably faster (maybe you could discard duplicates using 15625 64-bit integers as a bitmap, setting the bit matching each value in your dataset; that gives you an O(n) setup and an O(1) lookup, but you're going to need two bitmaps, one for positive values and one for negative), but they're going to be much more difficult to implement.
Thanks to everyone who has helped!
I solved the problem by partitioning the input into multiple "buckets": I sort the dataset and then split it into buckets spanning intervals of 10,000. So the smallest 10k numbers go into the 1st bucket, the next 10k into the 2nd, and so forth. I split it this way so that when I have to search for the entry t-x, I can search within one bucket of 10,000 numbers rather than all 1,000,000 numbers.
I have the following problem. I have a number represented in binary representation. I need a way to randomly select two bits of it that are different (i.e. find a 1 and a 0). Besides this I run other operations on that number (reversing sequences, permuting sequences, ...). These are the approaches I already used:
Keep track of all the ones and the zeros. When I create the binary representation of the number I store the positions of the 0's and 1's, so that I can choose an index from one list and an index from the other and get two different bits. To run my other operations I built them from an elementary swap operation which updates the indices of the 1's and 0's as it manipulates the number. I therefore have a third list that stores, for each bit, its position within the ones list or the zeros list: if a bit is 1 I know where to find it in the list with all the indices of the ones (same goes for zeros).
The method above yields some overhead when operations are done that do not require the bits to be different. So another way would be to create the lists whenever different bits are needed.
Does anyone have a better idea to do this? I need these operations to be really fast (I am working with popcount, clz, and other binary operations)
I don't feel as though I have enough information to assess the tradeoffs properly, but perhaps you'll find this idea useful. To find a random 1 in a word (find a 1 over multiple words by popcount and reservoir sampling; find a 0 by complementing), first test the popcount. If the popcount is high, then generate indexes uniformly at random and test them until a one is found. If the popcount is medium, then take bitwise ANDs with uniform random masks (but keep the original if the AND is zero) to reduce the popcount. When the popcount is low, use clz to compile the (small) list of candidates efficiently and then sample uniformly at random.
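For a single 64-bit word, a sketch of that idea might look like this in C++ (my own code, using the GCC/Clang builtins; the popcount threshold of 16 is an arbitrary choice):

    #include <cstdint>
    #include <random>

    // Returns a uniformly random set-bit position, or -1 if w == 0.
    // For a random 0-bit, call it on ~w.
    int random_set_bit(uint64_t w, std::mt19937_64& rng) {
        int pc = __builtin_popcountll(w);
        if (pc == 0) return -1;
        if (pc >= 16) {
            // Dense word: rejection-sample positions (expected <= 4 tries at pc == 16).
            std::uniform_int_distribution<int> pos(0, 63);
            for (;;) {
                int i = pos(rng);
                if (w >> i & 1) return i;
            }
        }
        // Sparse word: jump straight to the k-th set bit.
        std::uniform_int_distribution<int> kth(0, pc - 1);
        for (int k = kth(rng); k > 0; --k) w &= w - 1;  // clear the lowest set bit k times
        return __builtin_ctzll(w);  // count trailing zeros = index of lowest set bit
    }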
I think the following might be a rather efficient algorithm to do what you are asking. You only iterate over each bit in the number once, and for each element, you have to generate a random number (not exactly sure how costly that is, but I believe there are some optimized CPU instructions for getting random numbers).
Idea is to iterate over all the bits, and with the right probability, update the index to the current index you are visiting.
Generic pseudocode for getting an element from a stream/array:
p = 1
e = null
for s in stream:
    with probability 1/p:
        replace e with s
    p++
return e
Java version:
int[] getIdx(int n){
    int oneIdx = 0;
    int zeroIdx = 0;
    int ones = 1;
    int zeros = 1;
    // this loop depends on whether you want to select all the prepended zeros
    // in a 32/64 bit representation. Alter to your liking...
    for(int i = n, j = 0; i > 0; i = i >>> 1, j++){
        if((i & 1) == 1){ // current element is 1
            if(Math.random() < 1/(float)ones){
                oneIdx = j;
            }
            ones++;
        } else{ // element is 0
            if(Math.random() < 1/(float)zeros){
                zeroIdx = j;
            }
            zeros++;
        }
    }
    return new int[]{zeroIdx, oneIdx};
}
An optimization you might look into is to do the probability selection using ints instead of floats; it might be slightly faster. I wrote a short proof some time ago that this works. I believe the algorithm is attributed to Knuth, but I can't remember exactly.
What's the best way to check if a sequence of numbers has an increasing or decreasing trend?
I know that I could pick the first and last value of the sequence and check their difference, but I'd like a somewhat more robust check. This means that I want to be able to tolerate a minority of increasing values within a mostly decreasing sequence, and vice versa.
More specifically, the numbers are stored as
vector<int> mySequence;
A few more details about the number sequences that I am dealing with:
All the numbers within the sequence have the same order of magnitude. This means that no sequence like the following can appear: [45 38 320 22 12 6].
By descending trend I mean that most or all of the numbers within the sequence are less than the previous one. (The opposite applies for an ascending trend.) As a consequence, the following sequence is to be considered descending: [45 42 38 32 28 34 26 20 12 8 48]
I would accumulate the number of increases vs number of decreases, which should give you an idea of whether there's an overall trend to increase or decrease.
You could probably look into trend estimation and some type of regression, like linear regression.
It depends of course on your specific application, but in general it sounds like a fitting problem.
I think you can simply calculate the median of your sequence and check if it is greater than the first value.
This is ONE way, not THE way.
Another way, this time keeping a running average, is to check the number of ascending and descending values in the sequence:
int trend = 0;
int avg = mySequence[0];
int size = mySequence.size();
for (int i = 0; i < size - 1; ++i) {
    if (i > 0) {
        avg = (avg + mySequence[i]) / 2;
    }
    trend += (mySequence[i+1] - avg) > 0 ? 1 : -1;
}
One possibility would be to count the number of ascending and descending values in the sequence:

int trend = 0;
for (int i = 0; i + 1 < (int)mySequence.size(); ++i)
{
    int diff = mySequence[i+1] - mySequence[i];
    if (diff > 0)
    {
        trend++;
    }
    else if (diff < 0)
    {
        trend--;
    }
}

The example sequence you give will end with trend equal to -6.
I would most probably try to split the sequence into multiple segments, as you said the values do not differ dramatically - see piecewise regression
and to interpret the segments as your business needs.
You will need a vector for storing the segments, each segment having start/end index, some sort of median value, etc - see also where to split a piecewise regression
I suggest using methods from mathematical analysis (e.g. integral and differential calculus) applied to discrete integer sequences.
One way is then to compute rolling averages and see if those averages increase or decrease. Natural and easy ;)
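A minimal C++ sketch of the rolling-average idea (my own code and names; the window size is up to you):

    #include <vector>

    // Smooth with a sliding window, then count whether the smoothed values
    // mostly rise or fall: > 0 suggests ascending, < 0 descending.
    int trend_by_rolling_average(const std::vector<int>& v, int window) {
        if ((int)v.size() < window + 1) return 0;
        std::vector<double> avg;
        double sum = 0;
        for (int i = 0; i < (int)v.size(); ++i) {
            sum += v[i];
            if (i >= window) sum -= v[i - window];  // drop the element leaving the window
            if (i >= window - 1) avg.push_back(sum / window);
        }
        int trend = 0;
        for (size_t i = 1; i < avg.size(); ++i)
            trend += (avg[i] > avg[i - 1]) - (avg[i] < avg[i - 1]);
        return trend;
    }

Called on the question's example sequence with a window of 3, this comes out negative, i.e. descending.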
What is the fastest way to find the sum of decimal digits?
The following code is what I wrote but it is very very slow for range 1 to 1000000000000000000
long long sum_of_digits(long long input) {
    long long total = 0;
    while (input != 0) {
        total += input % 10;
        input /= 10;
    }
    return total;
}

int main(int argc, char** argv) {
    for (long long i = 1LL; i <= 1000000000000000000LL; i++) {
        sum_of_digits(i);
    }
    return 0;
}
I'm assuming what you are trying to do is along the lines of

#include <iostream>

const long long limit = 1000000000000000000LL;

int main () {
    long long grand_total = 0;
    for (long long ii = 1; ii <= limit; ++ii) {
        grand_total += sum_of_digits(ii);
    }
    std::cout << "Grand total = " << grand_total << "\n";
    return 0;
}
This won't work for two reasons:
It will take a long long time.
It will overflow.
To deal with the overflow problem, you will either have to put a bound on your upper limit or use some bignum package. I'll leave solving that problem up to you.
To deal with the computational burden you need to get creative. If you know the upper limit is limited to powers of 10 this is fairly easy. If the upper limit can be some arbitrary number you will have to get a bit more creative.
First look at the problem of computing the sum of digits of all integers from 0 to 10^n - 1 (e.g., 0 to 9 (n=1), 0 to 99 (n=2), etc.). Denote the sum of digits of all integers from 0 to 10^n - 1 as S_n. For n=1 (0 to 9), this is just 0+1+2+3+4+5+6+7+8+9 = 45 (9*10/2). Thus S_1 = 45.
For n=2 (0 to 99), you are summing 0-9 ten times in the ones place and summing 0-9 ten times again in the tens place. For n=3 (0 to 999), you are summing 0-99 ten times and summing 0-9 100 times. For n=4 (0 to 9999), you are summing 0-999 ten times and summing 0-9 1000 times. In general, S_n = 10*S_{n-1} + 10^(n-1)*S_1 as a recursive expression. This simplifies to S_n = (9n*10^n)/2.
If the upper limit is of the form 10^n, the solution is the above S_n plus one more for the number 1000...000. If the upper limit is an arbitrary number you will need to get creative once again. Think along the lines that went into developing the formula for S_n.
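For the "arbitrary upper limit" case, here is one way to do it in C++ (my own sketch, not from the answer above): sum the contribution of each digit position, which takes O(log n). Note that, as pointed out above, for limits near 10^18 the total itself no longer fits in 64 bits, so the demo stays below that:

    #include <cstdint>
    #include <iostream>

    // Total of the digit sums of all integers in [0, n].
    uint64_t digit_sum_total(uint64_t n) {
        uint64_t total = 0;
        for (uint64_t p = 1; p <= n; p *= 10) {
            uint64_t high = n / (p * 10);   // digits above this position
            uint64_t cur  = (n / p) % 10;   // the digit at this position
            uint64_t low  = n % p;          // digits below this position
            // Full 0..9 cycles, then a partial cycle 0..cur-1, then cur itself:
            total += high * 45 * p + cur * (cur - 1) / 2 * p + cur * (low + 1);
        }
        return total;
    }

    int main() {
        std::cout << digit_sum_total(99) << "\n";                  // 900 = S_2, as derived above
        std::cout << digit_sum_total(999999999999999ULL) << "\n";  // 10^15 - 1
        return 0;
    }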
You can break this down recursively. The sum of the digits of an 18-digit number is the sum of the first 9 digits plus the sum of the last 9 digits. Likewise the sum of the digits of a 9-digit number is the sum of the first 4 or 5 digits plus the sum of the last 5 or 4 digits. Naturally you can special-case when the value is 0.
Reading your edit: computing that function in a loop for i between 1 and 1000000000000000000 takes a long time. This is a no-brainer.
1000000000000000000 is one billion billion. Your processor will be able to do at best billions of operations per second. Even with a (nonexistent) 4-5 GHz processor, and assuming the best case where it compiles down to an add, a mod, a div, and a compare-jump, you could only do about 1 billion iterations per second, meaning it will take on the order of 1 billion seconds.
You probably don't want to do it in a brute-force way. This seems to be more of a logical-thinking question.
Note: 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = N(N+1)/2 = 45 for N = 9.
---- Changing the answer to make it clearer after David's comment
See David's answer - I had it wrong
Quite late to the party, but anyways, here is my solution. Sorry it's in Python and not C++, but it should be relatively easy to translate. And because this is primarily an algorithm problem, I hope that's ok.
As for the overflow problem, the only thing that comes to mind is to use arrays of digits instead of actual numbers. Given this algorithm I hope it won't affect performance too much.
https://gist.github.com/frnhr/7608873
It uses these three recursions I found by looking and poking at the problem. Rather than trying to come up with some general and arcane equations, here are three examples. A general case should be easily visible from those.
relation 1
Reduces function calls with arbitrary argument to several recursive calls with more predictable arguments for use in relations 2 and 3.
foo(3456) == foo(3000)
+ foo(400) + 400 * (3)
+ foo(50) + 50 * (3 + 4)
+ foo(6) + 6 * (3 + 4 + 5)
relation 2
Reduces calls with an argument of the form L*10^M (e.g. 30, 7000, 900000) to a recursive call usable for relation 3. These triangular numbers popped in quite uninvited (but welcome) :)
triangular_numbers = [0, 1, 3, 6, 10, 15, 21, 28, 36] # 0 not used
foo(3000) == 3 * foo(1000) + triangular_numbers[3 - 1] * 1000
Only useful if L > 1. It holds true for L = 1 but is trivial. In that case, go directly to relation 3.
relation 3
Recursively reduce calls with argument in format 1*10^M to a call with argument that's divided by 10.
foo(1000) == foo(100) * 10 + 44 * 100 + 100 - 9 # 44 and 9 are constants
Ultimately you only have to really calculate the sum of digits for numbers 0 to 10, and it turns out that only up to 3 of these calculations are needed. Everything else is taken care of by this recursion. I'm pretty sure it runs in O(log N) time. That's FAAST!!!!!11one
On my laptop it calculates the sum of digit sums for a given number with over 1300 digits in under 7 seconds! Your test (1000000000000000000) gets calculated in 0.000112057 seconds!
I think you cannot do better than O(N), where N is the number of digits in the given number (which is not computationally expensive).
However, if I understood your question correctly (the range), you want to output the sum of digits for a range of numbers. In that case, you can increment the running sum by one as you step from a number ending in 0 up to the one ending in 9, and then handle the carry (e.g. going from 19 to 20, the digit sum drops by 8).
You will need to cheat - look for mathematical patterns that let you short-cut your computations.
For example, do you really need to test that input != 0 every time? Does it matter if you add 0/10 several times? Since it won't matter, consider unrolling the loop.
Can you do the calculation in a larger base, eg, base 10^2, 10^3, etcetera, that might allow you to reduce the number of digits, which you'll then have to convert back to base 10? If this works, you'll be able to implement a cache more easily.
Consider looking at compiler intrinsics that let you give hints to the compiler for branch prediction.
Given that this is C++, consider implementing this using template metaprogramming.
Given that sum_of_digits is purely functional, consider caching the results.
Now, most of those suggestions will backfire - but the point I'm making is that if you have hit the limits of what your computer can do for a given algorithm, you do need to find a different solution.
This is probably an excellent starting point if you want to investigate this in detail: http://mathworld.wolfram.com/DigitSum.html
Possibility 1:
You could make it faster by feeding the result of one iteration of the loop into the next iteration.
For example, if i == 365, the result is 14. In the next loop, i == 366 -- 1 more than the previous i. The sum is also 1 more: 3 + 6 + 6 = 15.
Problems arise when there is a carry digit. If i == 99 (i.e. result = 18), the next loop's result isn't 19, it's 1. You'll need extra code to detect this case.
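A small C++ sketch of that carry handling (my own code): each trailing 9 turns into a 0 (subtracting 9 from the digit sum), and the +1 carries one position up.

    #include <iostream>

    int main() {
        long long sum = 1;  // digit sum of i = 1
        for (long long i = 1; i < 1000000; ++i) {
            // ... use sum, the digit sum of i, here ...
            long long j = i, next = sum + 1;
            while (j % 10 == 9) { next -= 9; j /= 10; }  // e.g. 99 -> 100: 18 + 1 - 9 - 9 = 1
            sum = next;
        }
        std::cout << sum << "\n";  // digit sum of 1000000, i.e. 1
        return 0;
    }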
Possibility 2:
While thinking through the above, it occurred to me that the sequence of results from sum_of_digits, when graphed, would resemble a sawtooth. With some analysis of the resulting graph (which I leave as an exercise for the reader), it may be possible to identify a method to allow direct calculation of the sum result.
However, as some others have pointed out: Even with the fastest possible implementation of sum_of_digits and the most optimised loop code, you can't possibly calculate 1000000000000000000 results in any useful timeframe, and certainly not in less than one second.
Edit: It seems you want the sum of the actual digits, such that 12345 = 1+2+3+4+5 - not the count of digits, nor the sum of all numbers 1 to 12345 (inclusive).
As such the fastest you can get is:
long long sum_of_digits(long long input) {
    long long total = input % 10;
    while ((input /= 10) != 0)
        total += input % 10;
    return total;
}
Which is still going to be slow when you're running enough iterations. Your requirement of 1,000,000,000,000,000,000 iterations is a million million million. Given that 100 million iterations take around 10,000 ms on my computer, that's about 100 ms per million records, i.e. roughly 10 million per second. There are only 86,400 seconds in a day, so at best we can compute around 864,000 million records per day; at that rate a single computer would need millions of days (thousands of years).
Let's suppose your method could be performed in a single float operation (somehow), and suppose you are using the K computer, currently the fastest (Rmax) supercomputer at over 10 petaflops: that is 10,000 million million floating-point operations per second. Your million million million loop would still take the world's fastest non-distributed supercomputer 100 seconds to compute the sums (if it took 1 float operation to calculate, which it can't), so you will be waiting quite some time for computers to become about 100 times more powerful for your solution to run in under one second.
Whatever you're trying to do, you're either trying to do an unsolvable problem in near real-time (e.g. graphics-calculation related), or you misunderstand the question / task that was given to you, or you are expected to perform something faster than any (non-distributed) computer system can do.
If your task is actually to sum all the digits of a range as you show and then output them, the answer is not to improve the for loop. For example:
1 = 1
10 = 46
100 = 901
1000 = 13501
10000 = 180001
100000 = 2250001
1000000 = 27000001
10000000 = 315000001
100000000 = 3600000001
From this you could work out a formula to actually compute the total sum of all digits for all numbers from 1 to N. But it's not clear what you really want, beyond a much faster computer.
Not the best, but simple:

#include <string>

int DigitSumRange(int a, int b) {
    int s = 0;
    for (; a <= b; a++)
        for (char c : std::to_string(a))
            s += c - '0';
    return s;
}
A Python function is given below, which converts the number to a string and then to a list of digits and then finds the sum of these digits.
def SumDigits(n):
    ns = list(str(n))
    z = [int(d) for d in ns]
    return sum(z)
In C++, one of the fastest ways can be using strings.
First of all, read the input from the user into a string. Then add up each character of the string after converting it to an int, which can be done with (str[i] - '0').
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string str;
    cin >> str;
    long long int sum = 0;
    for (string::size_type i = 0; i < str.length(); i++) {
        sum = sum + (str[i] - '0');
    }
    cout << sum;
    return 0;
}
The formula below gives the sum of the numbers from 1 to N (note: the numbers themselves, not their digits):
(1 + N) * (N / 2)
[http://mathforum.org/library/drmath/view/57919.html][1]
There is a class written in C# which supports numbers beyond the maximum limit of long.
You can find it here. [Oyster.Math][2]
Using this class, I have put together a block of code in C#; maybe it's of some help to you.
using System;
using Oyster.Math;

class Program
{
    private static DateTime startDate;

    static void Main(string[] args)
    {
        startDate = DateTime.Now;
        Console.WriteLine("Finding Sum of digits from {0} to {1}", 1L, 1000000000000000000L);
        sum_of_digits(1000000000000000000L);
        Console.WriteLine("Time Taken for the process: {0},", DateTime.Now - startDate);
        Console.ReadLine();
    }

    private static void sum_of_digits(long input)
    {
        // (1 + input) * (input / 2) via the IntX bignum type
        var answer = IntX.Multiply(IntX.Parse(Convert.ToString(1 + input)),
                                   IntX.Parse(Convert.ToString(input / 2)),
                                   MultiplyMode.Classic);
        Console.WriteLine("Sum: {0}", answer);
    }
}
Please ignore this comment if it is not relevant for your context.
[1]: https://web.archive.org/web/20171225182632/http://mathforum.org/library/drmath/view/57919.html
[2]: https://web.archive.org/web/20171223050751/http://intx.codeplex.com/
If you want the sum of the numbers themselves (not their digits) for the range 1 to N, simply do the following:
long long sum = N * (N + 1) / 2;
It is the fastest way.