Trying to multiply the kiddy way - C++

I'm supposed to multiply two 3-digit numbers the way we used to do in childhood.
I need to multiply each digit of one number with each digit of the other number, calculate the carry, add the individual products and store the result.
I was able to store the 3 products obtained (for input 234 and 456):
1404
1170
0936
..in a 2D array.
Now when I try to arrange them in the following manner:
001404
011700
093600
to make the addition that produces the result easier. I do this with:
for(j=5;j>1;j--)
{
xx[0][j]=xx[0][j-2];
}
for(j=4;j>0;j--)
{
xx[1][j]=xx[1][j-1];
}
xx is the 2D array I've stored the 3 products in.
Everything seems to be going fine until I do this:
xx[0][0]=0;
xx[0][1]=0;
xx[1][0]=0;
Here's when things go awry. The values get all mashed up. On printing, I get 001400 041700 093604.
What am I doing wrong?

Assuming the first index of xx is the partial sum, that the second index is the digit in that sum, and that the partial sums are stored with the highest digit at the lowest index,
for (int i = 0; i < NUM_DIGITS; i++) // NUM_DIGITS = number of digits in the multiplicands (3 here)
{
    int shift = NUM_DIGITS - 1 - i;  // row 0 shifts right by 2, row 1 by 1, row 2 not at all
    for (int j = 5; j >= 0; j--)     // walk right-to-left so nothing is overwritten; 5 assumes 6 result columns
    {
        int index = j - shift;       // each stored product has NUM_DIGITS + 1 digits (positions 0..NUM_DIGITS)
        xx[i][j] = (index >= 0 && index <= NUM_DIGITS) ? xx[i][index] : 0;
    }
}
There are definitely more efficient/logical ways of doing this, of course, such as avoiding storing the digits individually, but within the constraints of the problem, this should give you the right answer.
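For what it's worth, here is a minimal sketch of that alternative (the array layout and variable names are my own, not the OP's): accumulate each digit-by-digit product directly into the correct column of a single result array, then propagate the carries once at the end.

#include <iostream>

int main()
{
    int a[3] = {2, 3, 4};   // 234, most significant digit first
    int b[3] = {4, 5, 6};   // 456
    int result[6] = {0};    // six result columns, most significant first

    // multiply every digit of a by every digit of b and add the product
    // into the correct column (column 5 is the units column)
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            result[i + j + 1] += a[i] * b[j];

    // propagate carries from the least significant column upwards
    for (int col = 5; col > 0; --col)
    {
        result[col - 1] += result[col] / 10;
        result[col] %= 10;
    }

    for (int col = 0; col < 6; ++col)
        std::cout << result[col];
    std::cout << std::endl;  // prints 106704, i.e. 234 * 456
}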


How to trace error with counter in do while loop in C++?

I am trying to have i walk through an array of numbers, take the smaller of a pair of values, store it in a variable, and then compare it with another variable that likewise comes from two other numbers (like 2, -3).
There is something wrong in the way I implement the do-while loop. I need the counter 'i' to be updated twice per iteration so that I end up with 2 new variables from 4 compared numbers. When I hard-code it with n-1, n-2 it works, but with the loop it gets stuck at one value.
int i=0;
int closestDistance=0;
int distance=0;
int nextDistance=0;
do
{
distance = std::min(values[n],values[n-i]); //returns the largest
distance=abs(distance);
i++;
nextDistance=std::min(values[n],values[n-i]);
nextDistance=abs(closestDistance); //make it positive then comp
if(distance<nextDistance)
closestDistance=distance;//+temp;
else
closestDistance=nextDistance;
i++;
}
while(i<n);
return closestDistance;
Maybe this:
int i = 0;
int m = 0;
do{
int lMin = std::min(values[i],values[i + 1]);
i += 2;
int rMin = std::min(values[i], values[i + 1]);
m = std::min(lMin,rMin);
i += 2;
}while(i < n);
return m;
I didn't fully understand what you meant, but this compares the entries of values four at a time to find the minimum. Is that all you needed?
Note that if n is the size of values, this would go out of bounds. n would have to be the size minus 4, leading to odd exceptional cases.
The issue with your code may be in the call to abs. Are all the values positive? Are you trying to find the smallest absolute value?
Also, note that using i += 2 twice ensures that you do not repeat any values. This means that you will go over 4 unique values. Your code goes through 3 in each iteration of the loop.
I hope this clarifies things.
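For reference, here is a bounds-checked sketch of the same scan (the function name is mine, and it assumes n is the number of elements in values); it keeps a running minimum instead of overwriting m each time around the loop.

#include <algorithm>
#include <climits>

int minOfPairs(const int* values, int n)
{
    int m = INT_MAX;          // running minimum over everything seen so far
    int i = 0;
    while (i + 1 < n)         // only step in when a full pair exists at i, i+1
    {
        m = std::min(m, std::min(values[i], values[i + 1]));
        i += 2;
    }
    if (i < n)                // odd element count: one value left over
        m = std::min(m, values[i]);
    return m;
}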
What are you trying to do in the following lines?
nextDistance=std::min(values[n],values[n-i]);
nextDistance=abs(closestDistance); //make it positive , then computed

n-th or Arbitrary Combination of a Large Set

Say I have a set of numbers from [0, ....., 499]. Combinations are currently being generated sequentially using the C++ std::next_permutation. For reference, the size of each tuple I am pulling out is 3, so I am returning sequential results such as [0,1,2], [0,1,3], [0,1,4], ... [497,498,499].
Now, I want to parallelize the code that this is sitting in, so a sequential generation of these combinations will no longer work. Are there any existing algorithms for computing the ith combination of 3 from 500 numbers?
I want to make sure that each thread, regardless of the iterations of the loop it gets, can compute a standalone combination based on the i it is iterating with. So if I want the combination for i=38 in thread 1, I can compute [1,2,5] while simultaneously computing i=0 in thread 2 as [0,1,2].
EDIT Below statement is irrelevant, I mixed myself up
I've looked at algorithms that utilize factorials to narrow down each individual element from left to right, but I can't use these as 500! sure won't fit into memory. Any suggestions?
Here is my shot:
int k = 527;              // the k-th combination is calculated (0-based)
int N = 500;              // number of elements you have
int a = 0, b = 1, c = 2;  // a, b, c are the numbers you get out
while (k >= (N - a - 1) * (N - a - 2) / 2) {
    k -= (N - a - 1) * (N - a - 2) / 2;
    a++;
}
b = a + 1;
while (k >= N - 1 - b) {
    k -= N - 1 - b;
    b++;
}
c = b + 1 + k;
std::cout << "[" << a << "," << b << "," << c << "]" << std::endl; // the result
I got this by thinking about how many combinations there are until the next number is increased. However, it only works for three elements. I can't guarantee that it is correct. It would be cool if you compared it to your results and gave some feedback.
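A quick way to check it (this helper is my own, not part of the answer): for a small N, enumerate all combinations with three nested loops in lexicographic order and verify that the k-th one matches what the arithmetic above produces.

#include <cassert>

void checkUnranking(int N)
{
    int k = 0;
    for (int x = 0; x < N; ++x)
        for (int y = x + 1; y < N; ++y)
            for (int z = y + 1; z < N; ++z)
            {
                // unrank k with the same arithmetic as in the answer
                int r = k, a = 0;
                while (r >= (N - a - 1) * (N - a - 2) / 2) {
                    r -= (N - a - 1) * (N - a - 2) / 2;
                    a++;
                }
                int b = a + 1;
                while (r >= N - 1 - b) {
                    r -= N - 1 - b;
                    b++;
                }
                int c = b + 1 + r;
                assert(a == x && b == y && c == z);
                ++k;
            }
}

Calling checkUnranking(20), for example, exercises all C(20,3) = 1140 ranks.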
If you are looking for a way to obtain the lexicographic index or rank of a unique combination instead of a permutation, then your problem falls under the binomial coefficient. The binomial coefficient handles problems of choosing unique combinations in groups of K with a total of N items.
I have written a class in C# to handle common functions for working with the binomial coefficient. It performs the following tasks:
Outputs all the K-indexes in a nice format for any N choose K to a file. The K-indexes can be substituted with more descriptive strings or letters.
Converts the K-indexes to the proper lexicographic index or rank of an entry in the sorted binomial coefficient table. This technique is much faster than older published techniques that rely on iteration. It does this by using a mathematical property inherent in Pascal's Triangle and is very efficient compared to iterating over the set.
Converts the index in a sorted binomial coefficient table to the corresponding K-indexes. I believe it is also faster than older iterative solutions.
Uses Mark Dominus method to calculate the binomial coefficient, which is much less likely to overflow and works with larger numbers.
The class is written in .NET C# and provides a way to manage the objects related to the problem (if any) by using a generic list. The constructor of this class takes a bool value called InitTable that when true will create a generic list to hold the objects to be managed. If this value is false, then it will not create the table. The table does not need to be created in order to use the 4 above methods. Accessor methods are provided to access the table.
There is an associated test class which shows how to use the class and its methods. It has been extensively tested with 2 cases and there are no known bugs.
To read about this class and download the code, see Tablizing The Binomial Coeffieicent.
The following tested code will iterate through each unique combinations:
public void Test10Choose5()
{
String S;
int Loop;
int N = 500; // Total number of elements in the set.
int K = 3; // Total number of elements in each group.
// Create the bin coeff object required to get all
// the combos for this N choose K combination.
BinCoeff<int> BC = new BinCoeff<int>(N, K, false);
int NumCombos = BinCoeff<int>.GetBinCoeff(N, K);
// The KIndexes array specifies the indexes for a lexicographic element.
int[] KIndexes = new int[K];
StringBuilder SB = new StringBuilder();
// Loop thru all the combinations for this N choose K case.
for (int Combo = 0; Combo < NumCombos; Combo++)
{
// Get the k-indexes for this combination.
BC.GetKIndexes(Combo, KIndexes);
// Verify that the KIndexes returned can be used to retrieve the
// rank or lexicographic order of the KIndexes in the table.
int Val = BC.GetIndex(true, KIndexes);
if (Val != Combo)
{
S = "Val of " + Val.ToString() + " != Combo Value of " + Combo.ToString();
Console.WriteLine(S);
}
SB.Remove(0, SB.Length);
for (Loop = 0; Loop < K; Loop++)
{
SB.Append(KIndexes[Loop].ToString());
if (Loop < K - 1)
SB.Append(" ");
}
S = "KIndexes = " + SB.ToString();
Console.WriteLine(S);
}
}
You should be able to port this class over fairly easily to C++. You probably will not have to port over the generic part of the class to accomplish your goals. Your test case of 500 choose 3 yields 20,708,500 unique combinations, which will fit in a 4-byte int. If 500 choose 3 is simply an example case and you need to choose groups larger than 3, then you will have to use longs or perhaps a fixed-point integer type.
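For the C++ side, a sketch of an overflow-resistant binomial coefficient along those lines (this is my own version of the usual multiplicative formula; I can't vouch that it matches the exact method the class uses): divide at every step so the running value stays as small as possible, since each intermediate value is itself a binomial coefficient and the division is always exact.

#include <cstdint>

uint64_t binom(uint64_t n, uint64_t k)
{
    if (k > n) return 0;
    if (k > n - k) k = n - k;              // C(n, k) == C(n, n - k)
    uint64_t result = 1;
    for (uint64_t i = 1; i <= k; ++i)
    {
        result = result * (n - k + i) / i; // after this step, result == C(n - k + i, i)
    }
    return result;
}

binom(500, 3) returns 20,708,500, matching the count quoted above.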
You can describe a particular selection of 3 out of 500 objects as a triple (i, j, k), where i is a number from 0 to 499 (the index of the first number), j ranges from 0 to 498 (the index of the second, skipping over whichever number was first), and k ranges from 0 to 497 (index of the last, skipping both previously-selected numbers). Given that, it's actually pretty easy to enumerate all the possible selections: starting with (0,0,0), increment k until it reaches its maximum value, then increment j, reset k to 0, and continue; once j reaches its own maximum value, increment i, reset both j and k, and carry on.
If this description sounds familiar, it's because it's exactly the same way that incrementing a base-10 number works, except that the base is much funkier, and in fact the base varies from digit to digit. You can use this insight to implement a very compact version of the idea: for any integer n from 0 to 500*499*498 - 1, you can get:
#include <algorithm>
#include <iostream>

struct triple {
    int i, j, k;
};

triple AsTriple(int n) {
    triple result;
    result.k = n % 498;
    n = n / 498;
    result.j = n % 499;
    n = n / 499;
    result.i = n % 500; // unnecessary, any legal n will already be between 0 and 499
    return result;
}

void PrintSelections(triple t) {
    int i = t.i;
    // the second index skips over whichever value was picked first
    int j = t.j + (i <= t.j ? 1 : 0);
    // the third index skips over both earlier picks, applied in increasing order
    int lo = std::min(i, j), hi = std::max(i, j);
    int k = t.k;
    if (k >= lo) ++k;
    if (k >= hi) ++k;
    std::cout << "[" << i << "," << j << "," << k << "]" << std::endl;
}

void PrintRange(int start, int end) {
    for (int n = start; n < end; ++n) {
        PrintSelections(AsTriple(n));
    }
}
Now to shard, you can just take the numbers from 0 to 500*499*498 - 1, divide them into subranges in any way you'd like, and have each shard compute the selection for each value in its subrange.
This trick is very handy for any problem in which you need to enumerate subsets.

Weighted probability with long doubles

I am working with an array of roughly 2000 elements in C++.
Each element represents the probability of that element being selected randomly.
I then convert this array into a cumulative array, with the intention of using it to work out which element to choose when a dice is rolled.
Example array:
{1,2,3,4,5}
Example cumulative array:
{1,3,6,10,15}
I want the element with weight 3 to be selected when a 3, 4 or 5 is rolled.
The added complexity is that my array is made up of long doubles. Here's an example of a few consecutive elements:
0.96930161525189592646367317541056252139242133125662803649902343750
0.96941377254127855667142910078837303444743156433105468750000000000
0.96944321382974149711383993199831365927821025252342224121093750000
0.96946143938926617454089618153290075497352518141269683837890625000
0.96950069444055009509463721739663810694764833897352218627929687500
0.96951751803395748961766908990966840065084397792816162109375000000
This could be a terrible way of doing weighted probabilities with this data set, so I'm open to any suggestions of better ways of working this out.
You can use partial_sum:
#include <numeric> // for std::partial_sum

const unsigned int SIZE = 5;
int array[SIZE] = {1, 2, 3, 4, 5};
int partials[SIZE] = {0};
std::partial_sum(array, array + SIZE, partials);
// partials is now {1,3,6,10,15}
The value you want from the array is available from the partial sums:
12 == array[2] + array[3] + array[4];
12 == partials[4] - partials[1];
The total is obviously the last value in the partial sums:
15 == partials[4];
Consider storing the information as an integer numerator and denominator so that there is no loss of precision until the final step.
You can actually do this using stream selection without having to compute an array of partial sums. Here's code I have for this in Java:
public static int selectRandomWeighted(double[] wts, Random rnd) {
int selected = 0;
double total = wts[0];
for( int i = 1; i < wts.length; i++ ) {
total += wts[i];
if( rnd.nextDouble() <= (wts[i] / total)) {
selected = i;
}
}
return selected;
}
The above could potentially be further improved using Kahan summation if you want to preserve as many digits of accuracy in the sum as possible.
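In case it helps, a minimal sketch of what Kahan (compensated) summation looks like over these weights (the function name is mine):

#include <vector>

long double kahanSum(const std::vector<long double>& values)
{
    long double sum = 0.0L;
    long double compensation = 0.0L;   // running record of the lost low-order bits
    for (long double v : values)
    {
        long double y = v - compensation;
        long double t = sum + y;        // low-order digits of y are lost here...
        compensation = (t - sum) - y;   // ...and recovered into the compensation term
        sum = t;
    }
    return sum;
}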
However, if you want to draw from this array repeatedly, then pre-computing an array of partial sums and using binary search to find the right index will be faster.
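A sketch of that precompute-then-binary-search approach with long doubles (the names and the choice of std::mt19937_64 are mine): build the cumulative array once with std::partial_sum, then each draw is a single std::upper_bound.

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// build once
std::vector<long double> makeCumulative(const std::vector<long double>& weights)
{
    std::vector<long double> cumulative(weights.size());
    std::partial_sum(weights.begin(), weights.end(), cumulative.begin());
    return cumulative;
}

// then each draw is O(log n)
std::size_t pickWeighted(const std::vector<long double>& cumulative, std::mt19937_64& gen)
{
    std::uniform_real_distribution<long double> dist(0.0L, cumulative.back());
    long double roll = dist(gen);
    // first element whose cumulative weight exceeds the roll
    return std::upper_bound(cumulative.begin(), cumulative.end(), roll) - cumulative.begin();
}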
Ok I think I've solved this one.
I just did a binary split search, but instead of just having
if (arr[middle] == value)
I added in an OR
if (arr[middle] == value || (arr[middle] < value && arr[middle+1] > value))
This seems to handle it in the way I was hoping for.

Optimizing algorithm to find number of six digit numbers satisfying certain property

Problem: "An algorithm to find the number of six digit numbers where the sum of the first three digits is equal to the sum of the last three digits."
I came across this problem in an interview and want to know the best solution. This is what I have till now.
Approach 1: The Brute force solution is, of course, to check for each number (between 100,000 and 999,999) whether the sum of its first three and last three digits are equal. If yes, then increment certain counter which keeps count of all such numbers.
But this checks for all 900,000 numbers and so is inefficient.
Approach 2: Since we are asked "how many" such numbers and not "which numbers", we could do better. Divide the number into two parts: First three digits (these go from 100 to 999) and Last three digits (these go from 000 to 999). Thus, the sum of three digits in either part of a candidate number can range from 1 to 27.
* Maintain a std::map<int, int> for each part where key is the sum and value is number of numbers (3 digit) having that sum in the corresponding part.
* Now, for each number in the first part find out its sum and update the corresponding map.
* Similarly, we can get updated map for the second part.
* Now by multiplying the corresponding pairs (e.g. value in map 1 of key 4 and value in map 2 of key 4) and adding them up we get the answer.
In this approach, we end up checking only about 1,900 numbers (900 + 1,000) instead of 900,000.
My question is how could we further optimize? Is there a better solution?
For 0 <= s <= 18, there are exactly 10 - |s - 9| ways to obtain s as the sum of two digits.
So, for the first part
int first[28] = {0};
for(int s = 0; s <= 18; ++s) {
int c = 10 - (s < 9 ? (9 - s) : (s - 9));
for(int d = 1; d <= 9; ++d) {
first[s+d] += c;
}
}
That's 19*9 = 171 iterations. For the second half, do it similarly with the inner loop starting at 0 instead of 1; that's 19*10 = 190 iterations. Then sum first[i]*second[i] for 1 <= i <= 27.
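Putting both halves together (a complete sketch; the second array and the final loop are my additions following the recipe above), this prints 50412, the same total reported further down the thread:

#include <iostream>

int main()
{
    int first[28] = {0};   // sums of the first three digits (leading digit 1-9)
    int second[28] = {0};  // sums of the last three digits (leading digit 0-9)
    for (int s = 0; s <= 18; ++s)
    {
        int c = 10 - (s < 9 ? (9 - s) : (s - 9)); // ways to reach s with two digits
        for (int d = 1; d <= 9; ++d) first[s + d] += c;
        for (int d = 0; d <= 9; ++d) second[s + d] += c;
    }
    long long total = 0;
    for (int i = 1; i <= 27; ++i)
        total += static_cast<long long>(first[i]) * second[i];
    std::cout << total << std::endl; // 50412
}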
Generate all three-digit numbers; partition them into sets based on their sum of digits. (Actually, all you need to do is keep a vector that counts the size of the sets). For each set, the number of six-digit numbers that can be generated is the size of the set squared. Sum up the squares of the set sizes to get your answer.
int sumCounts[28] = {0}; // sums can go from 0 through 27
for (int i = 0; i < 1000; ++i) {
    sumCounts[sumOfDigits(i)]++;
}
int total = 0;
for (int i = 0; i < 28; ++i) {
    int count = sumCounts[i];
    total += count * count;
}
EDIT Variation to eliminate counting leading zeroes:
int sumCounts[28] = {0};
int sumCounts2[28] = {0};
for (int i = 0; i < 100; ++i) {
    int s = sumOfDigits(i);
    sumCounts[s]++;
    sumCounts2[s]++;
}
for (int i = 100; i < 1000; ++i) {
    sumCounts[sumOfDigits(i)]++;
}
int total = 0;
for (int i = 0; i < 28; ++i) {
    int count = sumCounts[i];
    total += (count - sumCounts2[i]) * count;
}
Python Implementation
def equal_digit_sums():
    dists = {}
    for i in range(1000):
        digits = [int(d) for d in str(i)]
        dsum = sum(digits)
        if dsum not in dists:
            dists[dsum] = [0, 0]
        dists[dsum][0 if len(digits) == 3 else 1] += 1
    def prod(dsum):
        t = dists[dsum]
        return (t[0] + t[1]) * t[0]
    return sum(prod(dsum) for dsum in dists)
print(equal_digit_sums())
Result: 50412
One idea: For each number from 0 to 27, count the number of three-digit numbers that have that digit sum. This should be doable efficiently with a DP-style approach.
Now you just sum the squares of the results, since for each answer, you can make a six-digit number with one of those on each side.
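A minimal sketch of that DP (the table layout is mine), with the same leading-zero adjustment for the first half that the edited answer above uses, so it also arrives at 50412:

#include <iostream>

int main()
{
    // ways[d][s] = number of d-digit strings (digits 0-9, leading zeros allowed)
    // whose digits sum to s
    long long ways[4][28] = {};
    ways[0][0] = 1;
    for (int d = 1; d <= 3; ++d)
        for (int s = 0; s <= 27; ++s)
            for (int digit = 0; digit <= 9 && digit <= s; ++digit)
                ways[d][s] += ways[d - 1][s - digit];

    long long total = 0;
    for (int s = 0; s <= 27; ++s)
    {
        // a valid first half must not start with 0: subtract the strings that
        // are really 2-digit strings padded with a leading zero
        long long firstHalf = ways[3][s] - ways[2][s];
        total += firstHalf * ways[3][s];
    }
    std::cout << total << std::endl; // 50412
}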
Assuming leading 0's aren't allowed, you want to calculate how many different ways there are to sum to n with 3 digits. To calculate that you can have a for loop inside a for loop. So:
firstHalf = 0
for i in xrange(max(1, n - 18), min(9, n) + 1):             # first digit, 1-9
    for j in xrange(max(0, n - i - 9), min(9, n - i) + 1):  # second digit, 0-9
        firstHalf += 1  # there is only one possible third digit, n - i - j
secondHalf = firstHalf + max(0, 10 - abs(n - 9))
If you are trying to sum to a number, then the last number is always uniquely determined. Thus in the case where the first number is 0 we are just calculating how many different values are possible for the second number. This will be n+1 if n is less than 10. If n is greater, up until 18 it will be 19-n. Over 18 there are no ways to form the sum.
If you loop over all n from 1 through 27 and add up firstHalf * secondHalf for each, you will have your total.

How to find best matching for all columns of a 2D array?

Let's say that I have a 2D array that looks like:
________________
|10|15|14|20|30|
|14|10|73|71|55|
|73|30|42|84|74|
|14|74|XX|15|10|
----------------
As I showed, the columns don't need to be the same size.
Now I need to find the best match for each column (the one that has the most identical items and the fewest differences). Of course, I could do that in n^2, but that's too slow for me. How can I do it?
I thought about a k-dimensional (k-d) tree and finding the closest neighbour for every column, but I don't know whether that is a good fit or whether it would work the way I want (probably not).
Result for example:
The first column best matches the third (only three differences: 10, 14, 42)
The second column -> the fifth (only two differences: 15 and 55)
and so on and so on... :)
If you know that all the numbers in the table are 2-digit numbers (i.e. 10 <= x < 100), for each column create an array of booleans where you will mark the existing numbers:
#include <algorithm> // std::fill

bool array[5][100];
std::fill(&array[0][0], &array[0][0] + 5 * 100, false); // init to false
for (int i = 0; i < 5; i++)
{
    for (int j = 0; j < 5; j++)
    {
        array[i][table[i][j]] = true;
    }
}
Should be easy from there.
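And a sketch of the "easy from there" part (the loop structure and names are mine): with the presence flags filled in, count the shared values between every pair of columns and keep the best partner for each.

#include <iostream>

void bestMatches(const bool present[5][100])
{
    for (int a = 0; a < 5; ++a)
    {
        int best = -1, bestCount = -1;
        for (int b = 0; b < 5; ++b)
        {
            if (b == a) continue;
            int common = 0;
            for (int v = 10; v < 100; ++v)              // 2-digit values only
                if (present[a][v] && present[b][v]) ++common;
            if (common > bestCount) { bestCount = common; best = b; }
        }
        std::cout << "column " << a << " best matches column " << best << "\n";
    }
}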