What is the fastest way to find the sum of decimal digits?
The following code is what I wrote, but it is very slow for the range 1 to 1000000000000000000.
    long long sum_of_digits(long long input) {
        long long total = 0;
        while (input != 0) {
            total += input % 10;
            input /= 10;
        }
        return total;
    }

    int main(int argc, char** argv) {
        for (long long i = 1L; i <= 1000000000000000000L; i++) {
            sum_of_digits(i);
        }
        return 0;
    }
I'm assuming what you are trying to do is along the lines of
    #include <iostream>

    const long long limit = 1000000000000000000LL;

    int main() {
        long long grand_total = 0;
        for (long long ii = 1; ii <= limit; ++ii) {
            grand_total += sum_of_digits(ii);
        }
        std::cout << "Grand total = " << grand_total << "\n";
        return 0;
    }
This won't work for two reasons:
It will take a long long time.
It will overflow.
To deal with the overflow problem, you will either have to put a bound on your upper limit or use some bignum package. I'll leave solving that problem up to you.
To deal with the computational burden you need to get creative. If you know the upper limit is limited to powers of 10 this is fairly easy. If the upper limit can be some arbitrary number you will have to get a bit more creative.
First look at the problem of computing the sum of digits of all integers from 0 to 10^n - 1 (e.g., 0 to 9 (n=1), 0 to 99 (n=2), etc.). Denote the sum of digits of all integers from 0 to 10^n - 1 as S_n. For n=1 (0 to 9), this is just 0+1+2+3+4+5+6+7+8+9 = 45 (9*10/2). Thus S_1 = 45.
For n=2 (0 to 99), you are summing 0-9 ten times in the units place and summing 0-9 ten times again in the tens place. For n=3 (0 to 999), you are summing 0-99 ten times and summing 0-9 100 times. For n=4 (0 to 9999), you are summing 0-999 ten times and summing 0-9 1000 times. In general, S_n = 10*S_(n-1) + 10^(n-1)*S_1 as a recursive expression. This simplifies to S_n = (9*n*10^n)/2.
If the upper limit is of the form 10n, the solution is the above Sn plus one more for the number 1000...000. If the upper limit is an arbitrary number you will need to get creative once again. Think along the lines that went into developing the formula for Sn.
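For illustration, here is a rough C++ sketch along the lines of that idea (my own code, not part of the original answer): it sums the digit sums of all integers in [0, N] for an arbitrary N by handling each digit position separately, and uses unsigned __int128 so the 10^18 case doesn't overflow.

    #include <cstdint>
    #include <iostream>

    using u128 = unsigned __int128;   // GCC/Clang extension, used here to dodge overflow

    // Sum of the digit sums of all integers in [0, n], one digit position at a time
    // (intended for n up to about 10^18).
    u128 digit_sum_upto(std::uint64_t n) {
        u128 total = 0;
        for (std::uint64_t p = 1; p <= n; p *= 10) {
            std::uint64_t higher = n / (p * 10);   // digits above position p
            std::uint64_t cur    = (n / p) % 10;   // digit at position p
            std::uint64_t lower  = n % p;          // digits below position p
            total += (u128)higher * 45 * p;            // full 0..9 cycles of this digit
            total += (u128)cur * (cur - 1) / 2 * p;    // partial cycle, digits 0..cur-1
            total += (u128)cur * (lower + 1);          // the digit cur itself
        }
        return total;
    }

    int main() {
        // 0..999 -> 13500 (= S_3), 0..10^9 -> 40500000001 (= S_9 + 1)
        std::cout << (unsigned long long)digit_sum_upto(999) << "\n";
        std::cout << (unsigned long long)digit_sum_upto(1000000000ULL) << "\n";
        return 0;
    }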
You can break this down recursively. The sum of the digits of an 18-digit number is the sum of the digits of its first 9 digits plus the sum of the digits of its last 9 digits. Likewise the sum of the digits of a 9-digit number will be the sum of the first 4 or 5 digits plus the sum of the last 5 or 4 digits. Naturally you can special-case when the value is 0.
Reading your edit: computing that function in a loop for i between 1 and 1000000000000000000 takes a long time. This is a no-brainer.
1000000000000000000 is one billion billion. Your processor will be able to do at best billions of operations per second. Even with a hypothetical 4-5 GHz processor, and assuming the best case where each iteration compiles down to an add, a mod, a div, and a compare-and-jump, you could only do about 1 billion iterations per second, meaning it will take on the order of 1 billion seconds.
You probably don't want to do it in a brute-force way. This seems to be more of a logical-thinking question.
Note - 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = N(N+1)/2 = 45.
---- Changing the answer to make it clearer after David's comment
See David's answer - I had it wrong
Quite late to the party, but anyways, here is my solution. Sorry it's in Python and not C++, but it should be relatively easy to translate. And because this is primarily an algorithm problem, I hope that's ok.
As for the overflow problem, the only thing that comes to mind is to use arrays of digits instead of actual numbers. Given this algorithm I hope it won't affect performance too much.
https://gist.github.com/frnhr/7608873
It uses three recursions I found by looking at and poking around the problem. Rather than trying to come up with some general and arcane equations, here are three examples. A general case should be easily visible from those.
relation 1
Reduces function calls with arbitrary argument to several recursive calls with more predictable arguments for use in relations 2 and 3.
foo(3456) == foo(3000)
+ foo(400) + 400 * (3)
+ foo(50) + 50 * (3 + 4)
+ foo(6) + 6 * (3 + 4 + 5)
relation 2
Reduces calls with an argument in the form L*10^M (e.g. 30, 7000, 900000) to a recursive call usable in relation 3. These triangular numbers popped in quite uninvited (but welcome) :)
triangular_numbers = [0, 1, 3, 6, 10, 15, 21, 28, 36] # 0 not used
foo(3000) == 3 * foo(1000) + triangular_numbers[3 - 1] * 1000
Only useful if L > 1. It holds true for L = 1 but is trivial. In that case, go directly to relation 3.
relation 3
Recursively reduce calls with argument in format 1*10^M to a call with argument that's divided by 10.
foo(1000) == foo(100) * 10 + 44 * 100 + 100 - 9 # 44 and 9 are constants
Ultimately you only have to really calculate the sum of digits for numbers 0 to 10, and it turns out that only up to 3 of these calculations are needed. Everything else is taken care of by the recursion. I'm pretty sure it runs in O(logN) time. That's FAAST!!!!!11one
On my laptop it calculates the sum of digit sums for a given number with over 1300 digits in under 7 seconds! Your test (1000000000000000000) gets calculated in 0.000112057 seconds!
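As a quick numeric sanity check of the three relations above (my own sketch, not part of the linked gist; foo(N) is assumed here to mean the sum of the digit sums of 1..N), a brute-force comparison in C++:

    #include <cstdint>
    #include <iostream>

    // Digit sum of a single number.
    std::uint64_t digit_sum(std::uint64_t n) {
        std::uint64_t s = 0;
        for (; n; n /= 10) s += n % 10;
        return s;
    }

    // Brute-force foo(N): sum of digit sums of 1..N (only for small N).
    std::uint64_t foo(std::uint64_t n) {
        std::uint64_t s = 0;
        for (std::uint64_t i = 1; i <= n; ++i) s += digit_sum(i);
        return s;
    }

    int main() {
        std::cout << std::boolalpha;
        // relation 1
        std::cout << (foo(3456) == foo(3000)
                                 + foo(400) + 400 * 3
                                 + foo(50)  + 50 * (3 + 4)
                                 + foo(6)   + 6  * (3 + 4 + 5)) << "\n";
        // relation 2 (triangular_numbers[3 - 1] == 3)
        std::cout << (foo(3000) == 3 * foo(1000) + 3 * 1000) << "\n";
        // relation 3
        std::cout << (foo(1000) == foo(100) * 10 + 44 * 100 + 100 - 9) << "\n";
        return 0;
    }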
I think you cannot do better than O(N), where N is the number of digits in the given number (which is not computationally expensive).
However, if I understood your question correctly (the range), you want to output the sum of digits for a range of numbers. In that case, you can increment the running digit sum by one as you step from a number ending in 0 up to the one ending in 9, and then it decreases by 8 when you cross into the next ten (e.g. 29 -> 30).
You will need to cheat - look for mathematical patterns that let you short-cut your computations.
For example, do you really need to test that input != 0 every time? Does it matter if you add 0/10 several times? Since it won't matter, consider unrolling the loop.
Can you do the calculation in a larger base, eg, base 10^2, 10^3, etcetera, that might allow you to reduce the number of digits, which you'll then have to convert back to base 10? If this works, you'll be able to implement a cache more easily.
Consider looking at compiler intrinsics that let you give hints to the compiler for branch prediction.
Given that this is C++, consider implementing this using template metaprogramming.
Given that sum_of_digits is purely functional, consider caching the results.
Now, most of those suggestions will backfire - but the point I'm making is that if you have hit the limits of what your computer can do for a given algorithm, you do need to find a different solution.
This is probably an excellent starting point if you want to investigate this in detail: http://mathworld.wolfram.com/DigitSum.html
Possibility 1:
You could make it faster by feeding the result of one iteration of the loop into the next iteration.
For example, if i == 365, the result is 14. In the next loop, i == 366 -- 1 more than the previous value of i. The sum is also 1 more: 3 + 6 + 6 = 15.
Problems arise when there is a carry. If i == 99 (i.e. result = 18), the next loop's result isn't 19, it's 1. You'll need extra code to detect this case.
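As a rough sketch of this idea (my own illustration, not the answerer's code), the running digit sum can be maintained incrementally, repairing it whenever trailing 9s roll over:

    #include <cstdint>
    #include <iostream>

    int main() {
        const std::uint64_t limit = 1000000;   // a small limit, just for illustration
        std::uint64_t digit_sum = 1;           // digit sum of i = 1
        std::uint64_t grand_total = 0;
        for (std::uint64_t i = 1; i <= limit; ++i) {
            grand_total += digit_sum;
            // Going from i to i+1: the sum normally grows by 1, but every
            // trailing 9 that rolls over to 0 removes 9 from the sum.
            digit_sum += 1;
            for (std::uint64_t t = i; t % 10 == 9; t /= 10)
                digit_sum -= 9;
        }
        std::cout << grand_total << "\n";      // 27000001 for limit = 1000000
        return 0;
    }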
Possibility 2:
While thinking through the above, it occurred to me that the sequence of results from sum_of_digits, when graphed, would resemble a sawtooth. With some analysis of the resulting graph (which I leave as an exercise for the reader), it may be possible to identify a method that allows direct calculation of the sum result.
However, as some others have pointed out: Even with the fastest possible implementation of sum_of_digits and the most optimised loop code, you can't possibly calculate 1000000000000000000 results in any useful timeframe, and certainly not in less than one second.
Edit: It seems you want the sum of the actual digits such that 12345 = 1+2+3+4+5, not the count of digits, nor the sum of all numbers 1 to 12345 (inclusive).
As such the fastest you can get is:
    long long sum_of_digits(long long input) {
        long long total = input % 10;
        while ((input /= 10) != 0)
            total += input % 10;
        return total;
    }
Which is still going to be slow when you're running enough iterations. Your requirement of 1,000,000,000,000,000,000 iterations is one million million million. Given that 100 million iterations take around 10,000 ms on my computer, one can expect around 100 ms per million records, and you want to do that another million million times. There are only 86,400 seconds in a day, so at best we can compute around 864,000 million records per day. It would take one computer thousands of years.
Let's suppose your method could somehow be performed in a single floating-point operation, and suppose you are using the K computer, currently the fastest (Rmax) supercomputer at over 10 petaflops; that is 10,000 million million floating-point operations per second. This means that your one million million million loop would take the world's fastest non-distributed supercomputer 100 seconds to compute the sums (IF it took 1 float operation to calculate, which it can't), so you will need to wait quite some time for computers to become 100 times more powerful before your solution is runnable in under one second.
Whatever you're trying to do, you're either trying to do an unsolvable problem in near real-time (e.g. graphics-calculation related), or you misunderstand the question / task that was given to you, or you are expected to perform something faster than any (non-distributed) computer system can do.
If your task is actually to sum all the digits of a range as you show and then output the total, the answer is not to improve the for loop. For example:
1 = 1
10 = 46
100 = 901
1000 = 13501
10000 = 180001
100000 = 2250001
1000000 = 27000001
10000000 = 315000001
100000000 = 3600000001
From this you could work out a formula to actually compute the total sum of all digits for all numbers from 1 to N. But it's not clear what you really want, beyond a much faster computer.
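As a hedged sketch (my own check, not part of the answer above), the pattern matches total(10^n) = 45*n*10^(n-1) + 1, i.e. the S_n formula from the earlier answer plus 1 for the number 10^n itself:

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint64_t pow10 = 1;
        for (int n = 1; n <= 9; ++n) {
            pow10 *= 10;
            // 45 * n * 10^(n-1) + 1, which reproduces the table above
            std::uint64_t total = 45ULL * n * (pow10 / 10) + 1;
            std::cout << pow10 << " = " << total << "\n";
        }
        return 0;
    }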
Not the best, but simple:

    #include <string>

    int DigitSumRange(int a, int b) {
        int s = 0;
        for (; a <= b; a++)
            for (char c : std::to_string(a))
                s += c - '0';
        return s;
    }
A Python function is given below, which converts the number to a string and then to a list of digits and then finds the sum of these digits.
    def SumDigits(n):
        ns = list(str(n))
        z = [int(d) for d in ns]
        return sum(z)
In C++ one of the fastest ways can be using strings.
First of all, get the input from the user as a string. Then add up each character of the string after converting it to an int, which can be done using (str[i] - '0').
    #include<iostream>
    #include<string>
    using namespace std;

    int main()
    {
        string str;
        cin >> str;
        long long int sum = 0;
        for (string::size_type i = 0; i < str.length(); i++) {
            sum = sum + (str[i] - '0');
        }
        cout << sum;
    }
The formula for finding the sum of the digits of numbers between 1 to N is:
(1 + N)*(N/2)
[http://mathforum.org/library/drmath/view/57919.html][1]
There is a class written in C# which supports numbers beyond the max limit of long.
You can find it here: [Oyster.Math][2]
Using this class, I have generated a block of code in C#; maybe it's of some help to you.
    using System;
    using Oyster.Math;

    class Program
    {
        private static DateTime startDate;

        static void Main(string[] args)
        {
            startDate = DateTime.Now;
            Console.WriteLine("Finding Sum of digits from {0} to {1}", 1L, 1000000000000000000L);
            sum_of_digits(1000000000000000000L);
            Console.WriteLine("Time Taken for the process: {0},", DateTime.Now - startDate);
            Console.ReadLine();
        }

        private static void sum_of_digits(long input)
        {
            var answer = IntX.Multiply(IntX.Parse(Convert.ToString(1 + input)), IntX.Parse(Convert.ToString(input / 2)), MultiplyMode.Classic);
            Console.WriteLine("Sum: {0}", answer);
        }
    }
Please ignore this comment if it is not relevant for your context.
[1]: https://web.archive.org/web/20171225182632/http://mathforum.org/library/drmath/view/57919.html
[2]: https://web.archive.org/web/20171223050751/http://intx.codeplex.com/
If you want to find the sum for the range say 1 to N, then simply do the following:

    long long sum = N * (N + 1) / 2;

It is the fastest way.
Related
Inputs are two values 1 <= m, n <= 10^12.
I don't know why my code is taking so long for large values. The time limit is 1 second. Please suggest some critical modifications.
    #include<iostream>
    #include<algorithm>
    using namespace std;

    int main()
    {
        unsigned long long m, n, count = 0;
        cin >> m >> n;
        for (long long int i = 1; i <= ((min(m,n))/2)+1; i++) //i divided min(m,n) by 2 to make it efficient.
        {
            if ((m%i == 0) && (n%i == 0))
            {
                count++;
            }
        }
        if (((n%m == 0) || (m%n == 0)) && (n!=m))
        {
            cout << count << endl;
        }
        printf("%lld",count); //cout<<count;
        system("pause");
        return 0;
    }
Firstly
((min(m, n)) / 2) + 1
Is being calculated every iteration. But it's loop-invariant. In general loop invariant code can be calculated before the loop, and stored. It will add up, but there are obviously much better ways to improve things. I'll describe one below:
You can make this much faster by calculating how many common prime factors there are, and by dividing out any "found" primes as you go. E.g. if only one number is divisible by 5 and the other is not, you can divide that one by 5 and you still get the same answer for common factors. Divide m and n by any "found" numbers as you go through it (but keep checking whether either is divisible by e.g. 2, and keep dividing before you go on).
e.g. if the two numbers are both divisible by 2, 3 and 5, then the number of ways those three primes can combine is 8 (2^3), treating the presence of each prime as a true/false thing. So each prime that occurs once multiplies the number of combos by 2.
If any of the primes occurs more than once, then it changes the equation slightly. e.g. if the two numbers are divisible by 4, 3, 5:
4 = 2^2, so you could have no "2s", 1 "2" or 2 "2s" in the combined factor, so the total number of combinations is 3 x 2 x 2 = 12. So any prime that occurs "x" times multiplies the total number of combos by "x+1".
So basically, you don't need to check for every actual factor, you just need to search for how many common prime factors there are, then work out how many combos that adds up to. Luckily you only need to store one value, "total_combos" and multiply it by the "x+1" value for each found number as you go.
And a handy thing is that you can divide out all primes as they're found, and you're guaranteed that the largest remaining prime to be found is no larger than the square root of the smallest remaining number out of m and n.
So to run you through how this would work, start with a copy of m and n, loop up to the sqrt of the min of those two (m and n will be reduced as the loop cycles through).
make a value "total_combos", which starts at 1.
Check for 2's first, find out how many common powers of 2 there are, add one to that number. Divide out ALL the 2's from m and n, even if they're not matched, because reducing down the number cuts the total amount you actually need to search. You count the 2's, add one, then multiply "total_combos" by that. Keep dividing m or n by two as long as either has a factor of 2 remaining.
Then check for 3's: find out how many common powers of 3 there are, add one, then multiply "total_combos" by that. Divide out any and all factors of 3 while you're doing this.
Then check for 4's. Since 4 isn't prime and we got rid of all 2's already, there will be zero 4's. Add one to that = 1, then we multiply "total_combos" by 1, so it stays the same. We didn't need to check whether 4 was prime or not; the divisions we already did ensured it's ignored. Same for any power of 2.
Then check for 5's: same deal as 2's and 3's. And so on. All the prime bases get divided out as you go, so whenever a value actually matches you can be sure it's a new prime.
stop the loop when it exceeds sqrt(max(m,n)) (EDITED: min is probably wrong there). But m and n here are the values that have had all the lower primes divided out, so it's much faster.
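As a rough sketch of the process just described (my own illustration, not the answerer's code; names are mine), counting common prime factors while dividing them out, with the edited stopping rule based on the larger remaining value:

    #include <algorithm>
    #include <cstdint>
    #include <iostream>

    // Count the common divisors of m and n by finding common prime factors,
    // dividing each prime out completely as it is found, and multiplying
    // (common exponent + 1) factors together.
    std::uint64_t common_divisor_count(std::uint64_t m, std::uint64_t n) {
        std::uint64_t total_combos = 1;
        for (std::uint64_t p = 2; p * p <= std::max(m, n); ++p) {
            if (m % p == 0 || n % p == 0) {
                std::uint64_t common = 0;
                while (m % p == 0 && n % p == 0) { m /= p; n /= p; ++common; }
                // Divide out leftover factors of p so p^2, p^3, ... never match later.
                while (m % p == 0) m /= p;
                while (n % p == 0) n /= p;
                total_combos *= common + 1;
            }
        }
        // Whatever is left of m and n is 1 or a single prime; it only counts
        // if the very same prime remains in both.
        if (m > 1 && m == n) total_combos *= 2;
        return total_combos;
    }

    int main() {
        std::cout << common_divisor_count(60, 36) << "\n";   // gcd = 12 -> 6 divisors
        return 0;
    }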
I hope this approach is helpful.
There is a better way to solve this problem.
All you have to do is take the GCD of the two numbers. No number greater than their GCD can divide both m and n. So all you have to do is run a loop up to i <= Math.sqrt(GCD(m,n)) and check whether m%i==0 and n%i==0 only. It will save a lot of nanoseconds.
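A minimal sketch of that idea in C++ (my own illustration): count the divisors of gcd(m, n) by trial division up to its square root, counting both i and gcd/i.

    #include <cstdint>
    #include <iostream>
    #include <numeric>    // std::gcd (C++17)

    std::uint64_t common_divisor_count(std::uint64_t m, std::uint64_t n) {
        const std::uint64_t g = std::gcd(m, n);
        std::uint64_t count = 0;
        for (std::uint64_t i = 1; i * i <= g; ++i) {
            if (g % i == 0) {
                count += 2;               // i and g / i
                if (i * i == g) --count;  // perfect square counted only once
            }
        }
        return count;
    }

    int main() {
        std::cout << common_divisor_count(60, 36) << "\n";   // prints 6
        return 0;
    }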
Given a number N (<=10000), find the minimum number of primatic numbers which sum up to N.
A primatic number refers to a number which is either a prime number or can be expressed as power of prime number to itself i.e. prime^prime e.g. 4, 27, etc.
I tried to find all the primatic numbers using a sieve and then stored them in a vector (code below), but now I can't see how to find the minimum number of primatic numbers that sum to a given number.
Here's my sieve:
    #include<algorithm>
    #include<vector>
    #define MAX 10000
    typedef long long int ll;

    ll modpow(ll a, ll n, ll temp) {
        ll res=1, y=a;
        while (n>0) {
            if (n&1)
                res=(res*y)%temp;
            y=(y*y)%temp;
            n/=2;
        }
        return res%temp;
    }

    int isprimeat[MAX+20];
    std::vector<int> primeat;

    //Finding all prime numbers till 10000
    void seive()
    {
        ll i,j;
        isprimeat[0]=1;
        isprimeat[1]=1;
        for (i=2; i<=MAX; i++) {
            if (isprimeat[i]==0) {
                for (j=i*i; j<=MAX; j+=i) {
                    isprimeat[j]=1;
                }
            }
        }
        for (i=2; i<=MAX; i++) {
            if (isprimeat[i]==0) {
                primeat.push_back(i);
            }
        }
        isprimeat[4]=isprimeat[27]=isprimeat[3125]=0;
        primeat.push_back(4);
        primeat.push_back(27);
        primeat.push_back(3125);
    }

    int main()
    {
        seive();
        std::sort(primeat.begin(), primeat.end());
        return 0;
    }
One method could be to store all primatics less than or equal to N in a sorted list - call this list L - and recursively search for the shortest sequence. The easiest approach is "greedy": pick the largest spans / numbers as early as possible.
for N = 14 you'd have L = {2,3,4,5,7,8,9,11,13}, so you'd want to make an algorithm / process that tries these sequences:
13 is too small
13 + 13 -> 13 + 2 will be too large
11 is too small
11 + 11 -> 11 + 4 will be too large
11 + 3 is a match.
You can continue the process by making the search function recurse each time it needs another primatic in the sum, which you would aim to have occur a minimum number of times. To do so you can pick the largest -> smallest primatic in each position (the 1st, 2nd etc primatic in the sum), and include another number in the sum only if the primatics in the sum so far are small enough that an additional primatic won't go over N.
I'd have to make a working example to find a small enough N that doesn't result in just 2 numbers in the sum. Note that any natural number can be expressed as the sum of at most 4 squares of natural numbers, and L is a denser set than the set of squares, so I'd think it rare that you'd get a result of 3 or more for any N you'd want to compute by hand.
Dynamic Programming approach
I have to clarify that 'greedy' is not the same as 'dynamic programming'; greedy can give sub-optimal results. This does have a DP solution though. Again, I won't write the final process in code but will explain it as a point of reference for making a working DP solution.
To do this we need to build up solutions from the bottom up. What you need is a structure that can store known solutions for all numbers up to some N, this list can be incrementally added to for larger N in an optimal way.
Consider that for any N, if it's primatic then the number of terms for N is just 1. This applies for N=2-5,7-9,11,13,16,17,19. The number of terms for all other N must be at least two, which means either it's a sum of two primatics or a sum of a primatic and some other N.
The first few examples that aren't trivial:
6 - can be either 2+4 or 3+3, all the terms here are themselves primatic so the minimum number of terms for 6 is 2.
10 - can be either 2+8, 3+7, 4+6 or 5+5. However 6 is not primatic, and taking that solution out leaves a minimum of 2 terms.
12 - can be either 2+10, 3+9, 4+8, 5+7 or 6+6. Of these 6+6 and 2+10 contain non-primatics while the others do not, so again 2 terms is the minimum.
14 - ditto, there exist two-primatic solutions: 3+11, 5+9, 7+7.
The structure for storing all of these solutions needs to be able to iterate across solutions of equal rank / number of terms. You already have a list of primatics, this is also the list of solutions that need only one term.
Sol[term_length] = list(numbers). You will also need a function / cache to look up some N's shortest term length, e.g. S(N) = term_length iff N is in Sol[term_length].
Sol[1] = {2,3,4,5 ...} and Sol[2] = {6,10,12,14 ...} and so on for Sol[3] and onwards.
Any solution can be found using one term from Sol[1] that is primatic. Any solution requiring two primatics will be found in Sol[2]. Any solution requiring 3 will be in Sol[3] etc.
What you need to recognize here is that a number S(N) = 3 can be expressed Sol[1][a] + Sol[1][b] + Sol[1][c] for some a,b,c primatics, but it can also be expressed as Sol[1][a] + Sol[2][d], since all Sol[2] must be expressible as Sol[1][x] + Sol[1][y].
This algorithm will in effect search Sol[1] for a given N, then look in Sol[1] + Sol[K] with increasing K, but to do this you will need S and Sol structures roughly in the form shown here (or able to be accessed / queried in a similar manner).
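As a compact sketch of this bottom-up idea (my own illustration, not the answerer's code at the link below; the primatic list is deliberately truncated to keep it short), dp[x] here plays the role of S(x):

    #include <algorithm>
    #include <climits>
    #include <iostream>
    #include <vector>

    int main() {
        const int MAXN = 1000;
        // Assume this holds all primatic numbers <= MAXN (the sieve from the
        // question would produce the full list); a partial list keeps the sketch short.
        const std::vector<int> primatics = {2, 3, 4, 5, 7, 11, 13, 17, 19, 23, 27, 29, 31};
        std::vector<int> dp(MAXN + 1, INT_MAX);   // dp[x] = min #primatics summing to x
        dp[0] = 0;
        for (int x = 1; x <= MAXN; ++x)
            for (int p : primatics)
                if (p <= x && dp[x - p] != INT_MAX)
                    dp[x] = std::min(dp[x], dp[x - p] + 1);
        std::cout << dp[14] << "\n";   // 2, e.g. 11 + 3
        return 0;
    }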
Working Example
Using the above as a guideline I've put this together quickly, it even shows which multi-term sum it uses.
https://ideone.com/7mYXde
I can explain the code in-depth if you want but the real DP section is around lines 40-64. The recursion depth (also number of additional terms in the sum) is k, a simple dual-iterator while loop checks if a sum is possible using the kth known solutions and primatics, if it is then we're done and if not then check k+1 solutions, if any. Sol and S work as described.
The only confusing part might be the use of reverse iterators, it's just to make != end() checking consistent for the while condition (end is not a valid iterator position but begin is, so != begin would be written differently).
Edit - FYI, the first number that takes at least 3 terms is 959 - had to run my algorithm to 1000 numbers to find it. It's summed from 6 + 953 (primatic), no matter how you split 6 it's still 3 terms.
I have the following code, which is an answer to this question: https://leetcode.com/problems/add-digits/
    class Solution {
    public:
        int addDigits(int num) {
            if (!(num / 10))
                return num;
            long d = 1;
            int retVal = 0;
            while (num / d) {
                d *= 10;
            }
            for (; d >= 1; d /= 10) {
                retVal += num / d;
                num %= d;
            }
            if (retVal > 9)
                retVal = addDigits(retVal);
            return retVal;
        }
    };
As a follow-up to this, though, I'm trying to determine what the Big-O growth is. My first attempt at calculating it came out to O(n^n) (I assumed so because the growth at each depth is directly dependent on n every time), which is just depressing. Am I wrong? I hope I'm wrong.
In this case it's linear, O(n), because you call the addDigits method recursively only once in the method body, without any loop around it.
More details:
Determining complexity for recursive functions (Big O notation)
Update:
It's linear from the point of view that the recursive function is called only once. However, in this case that's not exactly the whole story, because the number of executions barely depends on the input parameter.
Let n be the number of digits in base 10 of num.
I'd say that
T(1)=O(1)
T(n)=n+T(n') with n' <=n
Which gives us
O(n*n)
But can we do better?
Note that the maximum number representable with 2 digits is 99, which reduces in this way: 99 -> 18 -> 9.
Note that we can always collapse 10 digits into 2: 9999999999 -> 90. For n > 10 we can decompose the number into n/10 segments of up to 10 digits each and reduce those segments to numbers of 2 digits each to be summed. The sum of n/10 numbers of 2 digits will always have no more than (n/10)*2 digits. Therefore
T(n)=n+T(n/5) for n>=10
Other base cases with n<10 should be easier. This gives
T(n)=O(1) for n<10
T(n)=n+T(n/5) for n>=10
Solving the recurrence equation (a geometric series: T(n) = n + n/5 + n/25 + ... <= (5/4)n) gives
O(n) for n >= 10
Looks like it's O(1) for values < 10, and O(n) for any other values.
I'm not well versed enough in Big-O notation to give an answer on how these should be combined.
Most probably the first part is negligible, so the overall time complexity becomes O(n).
Given n (n <= 1000000) positive integers, each smaller than 1000000, the task is to calculate the sum of the bitwise xor (^ in C/C++) values of all distinct pairs of the given numbers.
Time limit is 1 second.
For example, if 3 integers are given as 7, 3 and 5, answer should be 7^3 + 7^5 + 3^5 = 12.
My approach is:
    #include <bits/stdc++.h>
    using namespace std;

    int num[1000001];

    int main()
    {
        int n, i, sum, j;
        scanf("%d", &n);
        sum=0;
        for(i=0;i<n;i++)
            scanf("%d", &num[i]);
        for(i=0;i<n-1;i++)
        {
            for(j=i+1;j<n;j++)
            {
                sum+=(num[i]^num[j]);
            }
        }
        printf("%d\n", sum);
        return 0;
    }
But my code failed to run in 1 second. How can I write my code in a faster way, which can run in 1 second ?
Edit: Actually this is an Online Judge problem and I am getting Cpu Limit Exceeded with my above code.
You need to compute around 1e12 xors in order to brute force this. Modern processors can do around 1e10 such operations per second. So brute force cannot work; therefore they are looking for you to figure out a better algorithm.
So you need to find a way to determine the answer without computing all those xors.
Hint: can you think of a way to do it if all the input numbers were either zero or one (one bit)? And then extend it to numbers of two bits, three bits, and so on?
When optimising your code you can go 3 different routes:
Optimising the algorithm.
Optimising the calls to language and library functions.
Optimising for the particular architecture.
There may very well be a quicker mathematical way of xoring every pair and summing the results, but I don't know of one. In any case, on contemporary processors you'll be shaving off microseconds at best, because you are doing basic operations (xor and sum).
Optimising for the architecture also makes little sense. It normally becomes important in repetitive branching, you have nothing like that here.
The biggest problem in your algorithm is reading from the standard input. Despite the fact that "scanf" takes only 5 characters in your source code, in machine language this is the bulk of your program. Unfortunately, if the data will actually change each time you run your code, there is no way around the requirement of reading from stdin, and there will be no difference whether you use scanf, std::cin >>, or even attempt to implement your own method to read characters from input and convert them into ints.
All this assumes that you don't expect a human being to enter thousands of numbers in less than one second. I guess you can be running your code via: myprogram < data.
This function grows quadratically (thanks #rici). At around 25,000 positive integers with each being 999,999 (worst case) the for loop calculation alone can finish in approximately a second. Trying to make this work with input as you have specified and for 1 million positive integers just doesn't seem possible.
With the hint in Alan Stokes's answer, you may have a linear complexity instead of quadratic with the following:
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    std::size_t xor_sum(const std::vector<std::uint32_t>& v)
    {
        std::size_t res = 0;
        for (std::size_t b = 0; b != 32; ++b) {
            // Count how many numbers have bit b set; the rest have it clear.
            const std::size_t count_1 =
                std::count_if(v.begin(), v.end(),
                              [b](std::uint32_t n) { return (n >> b) & 0x01; });
            const std::size_t count_0 = v.size() - count_1;
            // Only pairs with differing bits contribute, each adding 1 << b.
            res += count_0 * count_1 << b;
        }
        return res;
    }
Live Demo.
Explanation:
x^y = Sum_b((x&b)^(y&b)) where b is a single-bit mask (from 1<<0 to 1<<31).
For a given bit, with count_0 and count_1 the respective counts of numbers with that bit set to 0 or 1, there are count_0 * (count_0 - 1) pairs producing 0^0, count_0 * count_1 pairs producing 0^1, and count_1 * (count_1 - 1) pairs producing 1^1 (and 0^0 and 1^1 are both 0, so only the 0^1 pairs contribute).
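Assuming the xor_sum function above is in scope, a quick check against the example in the question (7, 3, 5 -> 12):

    #include <cstdint>
    #include <iostream>
    #include <vector>

    int main() {
        const std::vector<std::uint32_t> v{7, 3, 5};
        std::cout << xor_sum(v) << "\n";   // prints 12
        return 0;
    }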
I have an array of 20 numbers (64 bit int) something like 10, 25, 36,43...., 118, 121 (sorted numbers).
Now, I have to give millions of numbers as input (say 17, 30).
What I have to give as output is:
for Input 17:
17 is < 25 and > 10. So, output will be index 0.
for Input 30:
30 is < 36 and > 25. So, output will be index 1.
Now, I can do it using linear search or binary search. Is there any method to do it in a faster way? The input numbers are random (Gaussian).
If you know the distribution, you can direct your search in a smarter way.
Here is the rough idea of this variant of binary search:
Assuming that your data is expected to be distributed uniformly on 0 to 100.
If you observe the value 0, you start at the beginning. If your value is 37, you start at 37% of the array you have. This is the key difference to binary search: you don't always start at 50%, but you try to start in the expected "optimal" position.
This also works for Gaussian distributed data, if you know the parameters (If you don't know them, you can still estimate them easily from the observed data). You would compute the Gaussian CDF, and this yields the place to start your search.
Now for the next step, you need to refine your search. At the position you looked at, there was a different value. You can use this to re-estimate the position to continue searching.
Now even if you don't know the distribution this can work very well. So you start with a binary search, and looked at objects at 50% and 25% already. Instead of going to 37.5% next, you can do a better guess, if your query values was e.g. very close to the 50% entry. Unless your data set is very "clumpy" (and your queries are not correlated to the data) then this should still outperform "naive" binary search that always splits in the middle.
http://en.wikipedia.org/wiki/Interpolation_search
The expected average runtime apparently is O(log log n), according to Wikipedia.
Update: since someone complained that with just 20 numbers things are different: yes, they are. With 20 numbers linear search may be best, because of CPU caching. Linear scanning through a small amount of memory - which fits into the CPU cache - can be really fast, in particular with an unrolled loop. But that case is quite pathetic and uninteresting IMHO.
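Here is a rough interpolation-search sketch (my own illustration, not taken from the linked article): it probes at the position expected under a uniform assumption instead of always probing the middle, and returns the bin index as defined in the question.

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Index of the largest element <= x (the "bin" from the question),
    // or -1 if x is smaller than every element.
    int interpolation_search(const std::vector<std::int64_t>& a, std::int64_t x) {
        int lo = 0, hi = static_cast<int>(a.size()) - 1;
        if (x < a[lo]) return -1;
        if (x >= a[hi]) return hi;
        while (lo + 1 < hi) {
            // Estimate where x should sit between a[lo] and a[hi].
            const double frac = double(x - a[lo]) / double(a[hi] - a[lo]);
            int mid = lo + static_cast<int>(frac * (hi - lo));
            mid = std::max(lo + 1, std::min(mid, hi - 1));   // stay strictly inside
            if (a[mid] <= x) lo = mid; else hi = mid;
        }
        return lo;
    }

    int main() {
        const std::vector<std::int64_t> a{10, 25, 36, 43, 118, 121};
        std::cout << interpolation_search(a, 17) << "\n";   // 0 (between 10 and 25)
        std::cout << interpolation_search(a, 30) << "\n";   // 1 (between 25 and 36)
        return 0;
    }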
I believe the best option for you is to use upper_bound - it will find the first value in the array bigger than the one you are searching for.
Still depending on the problem you try to solve maybe lower_bound or binary_search may be the thing you need.
All of these algorithms are with logarithmic complexity.
Nothing will be better than binary search, since your array is sorted.
Linear search is O(n) while binary search is O(log n).
Edit:
Interpolation search makes an extra assumption (the elements have to be uniformly distributed) and does more comparisons per iteration.
You can try both and empirically measure which is better for your case
In fact, this problem is quite interesting because it can be recast in an information-theoretic framework.
Given 20 numbers, you will end up with 21 bins (including < first one and > last one).
For each incoming number, you are to map to one of these 21 bins. This mapping is done by comparison. Each comparison gives you 1 bit of information (< or >= -- two states).
So suppose the incoming number requires 5 comparisons in order to figure out which bin it belongs to, then it is equivalent to using 5 bits to represent that number.
Our goal is to minimize the number of comparisons! We have 1 million numbers each belonging to 21 ordered code words. How do we do that?
This is exactly an entropy compression problem.
Let a[1],.. a[20], be your 20 numbers.
Let p(n) = pr { incoming number is < n }.
Build the decision tree as follows.
Step 1.
let i = argmin |p(a[i]) - 0.5|
define p0(n) = p(n) / (sum(p(j), j=0...a[i-1])), and p0(n)=0 for n >= a[i].
define p1(n) = p(n) / (sum(p(j), j=a[i]...a[20])), and p1(n)=0 for n < a[i].
Step 2.
let i0 = argmin |p0(a[i0]) - 0.5|
let i1 = argmin |p1(a[i1]) - 0.5|
and so on...
and by the time we're done, we end up with:
i, i0, i1, i00, i01, i10, i11, etc.
each one of these i gives us the comparison position.
so now our algorithm is as follows:
    let u = input number.
    if (u < a[i]) {
        if (u < a[i0]) {
            if (u < a[i00]) {
            } else {
            }
        } else {
            if (u < a[i01]) {
            } else {
            }
        }
    } else {
        similarly...
    }
so the i's define a tree, and the if statements are walking the tree. we can just as well put it into a loop, but it's easier to illustrate with a bunch of if.
so for example, if you knew that your data were uniformly distributed between 0 and 2^63, and your 20 number were
0,1,2,3,...19
then
i = 20 (notice that there is no i1)
i0 = 10
i00 = 5
i01 = 15
i000 = 3
i001 = 7
i010 = 13
i011 = 17
i0000 = 2
i0001 = 4
i0010 = 6
i0011 = 9
i00110 = 8
i0100 = 12
i01000 = 11
i0110 = 16
i0111 = 19
i01110 = 18
ok so basically, the comparison would be as follows:
    if (u < a[20]) {
        if (u < a[10]) {
            if (u < a[5]) {
            } else {
                ...
            }
        } else {
            ...
        }
    } else {
        return 21
    }
so note here, that I am not doing binary search! I am first checking the end point. why?
there is 100*((2^63)-20)/(2^63) percent chance that it will be greater than a[20]. this is basically like 99.999999999999999783159565502899% chance!
so this algorithm as it is has an expected number of comparison of 1 for a dataset with the properties specified above! (this is better than log log :p)
notice what I have done here is I am basically using fewer compares to find numbers that are more probable and more compares to find numbers that are less probable. for example, the number 18 requires 6 comparisons (1 more than needed with binary search); however, the numbers 20 to 2^63 require only 1 comparison. this same principle is used for lossless (entropy) data compression -- use fewer bits to encode code words that appear often.
building the tree is a one time process and you can use the tree 1 million times later.
the question is... when does this decision tree become binary search? homework exercise! :p the answer is simple. it's similar to when you can't compress a file any more.
ok, so I didn't pull this out of my behind... the basis is here:
http://en.wikipedia.org/wiki/Arithmetic_coding
You could perform binary search using std::lower_bound and std::upper_bound. These give you back iterators, so you can use std::distance to get an index.
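A small sketch of that suggestion (the array values are taken from the question):

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <iterator>
    #include <vector>

    int main() {
        const std::vector<std::int64_t> a{10, 25, 36, 43, 118, 121};
        const std::int64_t x = 30;
        // First element strictly greater than x; the bin index is one before it.
        const auto it = std::upper_bound(a.begin(), a.end(), x);
        std::cout << std::distance(a.begin(), it) - 1 << "\n";   // prints 1
        return 0;
    }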