I have created a random binary generator, but its output is just written-out digits. How can I make them a list?
bin_gen(0).
bin_gen(Y) :- random(0,2,S),write(S),Y1 is Y - 1,bin_gen(Y1).
Y is the length. For example, instead of writing the output 1011, I want to end up with the list [1,0,1,1] once the recursion reaches bin_gen(0).
Given an integer, you can convert it to a list of digits by repeatedly dividing by 10, taking the remainder as the ones place digit, and using the rounded down result for the next iteration:
list_digits(Int, Digits) :-
    list_digits_aux(Int, [], Digits).

list_digits_aux(Int, Digits, [Int|Digits]) :- Int < 10.
list_digits_aux(Int, Digits, Acc) :-
    NextInt is Int div 10,
    NextInt > 0,
    D is Int rem 10,
    list_digits_aux(NextInt, [D|Digits], Acc).
In this code, we use an auxiliary predicate with an accumulator argument, so that we don't have to reverse the result at the end.
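For example, list_digits(1011, Ds) peels off the remainders 1, 1, 0 and stops at 1, yielding Ds = [1,0,1,1].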
However, if I understand correctly, you just want the list of random bits, in which case you can tweak your current predicate slightly to construct the list rather than printing out each digit:
bin_gen(0, []).
bin_gen(Y, [D|Digits]) :-
    random(0, 2, D),
    succ(Y1, Y),
    bin_gen(Y1, Digits).
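With this version, a query such as bin_gen(4, L) binds L to a list of four random bits, for example L = [1,0,1,1].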
Decimal to Binary:
Conversion steps:
Divide the number by 2.
Get the integer quotient for the next iteration.
Get the remainder for the binary digit.
Repeat the steps until the quotient is equal to 0.
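For example, for 13: 13/2 = 6 remainder 1; 6/2 = 3 remainder 0; 3/2 = 1 remainder 1; 1/2 = 0 remainder 1. Reading the remainders from last to first gives 1101.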
Prolog Code:
decimal_binary(Dec, Binary) :-
    ( Dec =:= 0 ->                      %If Decimal is 0, the binary list is just [0]
        Binary = [0]
    ;                                   %Else
        decimal_binary1(Dec, Binary1),  %Get Binary List (least significant bit first)
        reverse(Binary1, Binary), !     %Reverse the Binary List and commit to this solution
    ).
decimal_binary1(0, []).                            %Base Case: Stop when Quotient is 0.
decimal_binary1(Dec, [Remainder|List]) :-          %Generate Binary List till Base Case succeeds
    divmod(Dec, 2, Quotient, Remainder),           %Get Quotient and Remainder (helper below)
    decimal_binary1(Quotient, List).

divmod(Dividend, Divisor, Quotient, Remainder) :-  %Helper; omit if your Prolog already provides divmod/4
    Quotient is Dividend div Divisor,
    Remainder is Dividend mod Divisor.
Examples:
?- decimal_binary(13, Binary).
Binary = [1, 1, 0, 1]
?- decimal_binary(126, Binary).
Binary = [1, 1, 1, 1, 1, 1, 0]
?- decimal_binary(75, Binary).
Binary = [1, 0, 0, 1, 0, 1, 1]
?- decimal_binary(0, Binary).
Binary = [0]
I have a question: find a large number modulo 11. The number is stored in a string whose maximum length is 1000. I want to code it in C++. How should I go about it?
I tried doing it with long long int, but it cannot possibly hold a value that large.
A number written in the decimal positional system as a_n a_{n-1} ... a_0 is the number
a_n*10^n + a_{n-1}*10^{n-1} + ... + a_0
Note first that this number and the number
a_0 - a_1 + a_2 - ... + (-1)^n*a_n
which is the sum of its digits with alternating signs, have the same remainder after division by 11. You can check that by subtracting the two numbers and noting that the result is a multiple of 11.
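For example, 1234 has the alternating sum 4 - 3 + 2 - 1 = 2, and indeed 1234 = 112*11 + 2.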
Based on this, if you are given a string consisting of the decimal representation of a number, then you can compute the remainder modulo 11 like this:
int remainder11(const std::string& s) {
    int result{0};
    bool even{true};
    for (int i = s.length() - 1; i >= 0; --i) {           // walk from the least significant digit
        result += (even ? 1 : -1) * ((int)(s[i] - '0'));
        even = !even;
    }
    return ((result % 11) + 11) % 11;                     // result may be negative, so normalize into [0, 10]
}
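For example, remainder11("75") returns 9, since 75 = 6*11 + 9.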
Ok, here is the magic (math) trick.
First imagine you have a decimal number that consists only of 1s.
Say 111111, for example. It is obvious that 111111 % 11 is 0, since you can always write such a number as a sum of terms of the form 11*10^n. This generalizes to every integer that consists of an even number of ones (e.g. 11, 1111, 11111111). For a number with an odd number of ones, subtract one from it and you get 10 times a number consisting of an even number of ones (e.g. 111 = 1 + 11*10), so its remainder modulo 11 is 1.
A decimal number can always be written in the form
a_n*10^n + a_{n-1}*10^{n-1} + ... + a_1*10 + a_0
where a_0 is the least significant digit and a_n is the most significant digit. Note that 10^n can be written as (10^n - 1) + 1, and 10^n - 1 is a number consisting of n nines. If n is even, that is 9 times a number with an even number of ones, and its remainder modulo 11 is 0. If n is odd, it is 9 times a number with an odd number of ones, and its remainder modulo 11 is 9. And don't forget the +1 left over from (10^n - 1) + 1, so we still have to add the digit a_n itself to the result.
We are very close to the result now: we just have to add things up and take a final modulo 11. The pseudo-code looks like this:
Initialize sum to 0.
Initialize index to 0.
For every digit d from the least to most significant:
If the index is even, sum += d
Otherwise, sum += 10 * d
++index
sum %= 11
Return sum % 11
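Here is one way that pseudo-code could look in C++ (a sketch only; the name remainder11_weighted is my own, and it assumes the string contains nothing but decimal digits):
#include <string>

int remainder11_weighted(const std::string& s) {
    int sum = 0;
    bool evenIndex = true;                 // index 0 = least significant digit
    for (int i = (int) s.length() - 1; i >= 0; --i) {
        int d = s[i] - '0';
        sum += evenIndex ? d : 10 * d;     // weight 1 for even positions, 10 for odd ones
        sum %= 11;                         // keep the running sum small
        evenIndex = !evenIndex;
    }
    return sum % 11;
}
Since 10 is congruent to -1 modulo 11, this computes the same value as the alternating-sign version above, just without any negative intermediate sums.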
In answers to this other question, the following solution is provided, courtesy of OpenBSD, rewritten here for brevity:
uint32_t foo( uint32_t limit ) {
    uint32_t min = -limit % limit, r = 0;
    for (;;) {
        r = random_function();
        if ( r >= min ) break;
    }
    return r % limit;
}
How exactly does the line uint32_t min = -limit % limit work? What I'd like to know is, is there a mathematical proof that it does indeed calculate some lower limit for the random number and adequately removes the modulo bias?
In -limit % limit, consider that the value produced by -limit is 2^w - limit, where w is the width in bits of the unsigned type being used, because unsigned arithmetic is defined to wrap modulo 2^w. (This assumes the type of limit is not narrower than int; if it were, it would be promoted to int, signed arithmetic would be used, and the code could break.) Then recognize that 2^w - limit is congruent to 2^w modulo limit. So -limit % limit produces the remainder when 2^w is divided by limit. Let this be min.
In the set of integers {0, 1, 2, 3, ..., 2^w - 1}, a number with remainder r (0 ≤ r < limit) when divided by limit appears at least floor(2^w/limit) times. We can identify each of them: for 0 ≤ q < floor(2^w/limit), q*limit + r has remainder r and is in the set. If 0 ≤ r < min, then there is one more such number in the set, with q = floor(2^w/limit). Those account for all the numbers in the set {0, 1, 2, 3, ..., 2^w - 1}, because floor(2^w/limit)*limit + min = 2^w, so our counts are complete. So for min of the remainders there are floor(2^w/limit) + 1 numbers with that remainder in the set, and for the other limit - min remainders there are floor(2^w/limit) numbers with that remainder in the set.
Now suppose we draw a number uniformly at random from this set {0, 1, 2, 3, ..., 2^w - 1}. Clearly the numbers with remainders 0 ≤ r < min occur slightly more often, because there is one more number with each of those remainders in the set. By rejecting any draw below min, we exclude exactly one number with each of those remainders; effectively, we are drawing from the set {min, min+1, min+2, ..., 2^w - 1}. The result is a distribution that has exactly floor(2^w/limit) numbers with each possible remainder.
Since each remainder is represented an equal number of times in the effective distribution, each remainder has an equal chance of being selected by a uniform draw.
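As a small illustration (with a hypothetical 4-bit width, so 2^w = 16), take limit = 6: min = 16 mod 6 = 4, and among 0...15 the remainders 0 through 3 each occur three times while 4 and 5 occur only twice. Rejecting draws of 0, 1, 2, 3 leaves the set 4...15, in which every remainder 0 through 5 occurs exactly twice, so r % limit is uniform.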
I guessed that 5 % 2 is 1 and -5 % 2 is -1.
But in Python, both give 1.
I don't think this is a math question.
>>> -5 % 2
1 ( I think this should be -1 )
>>> 5 % 2
1
>>> -7 % 6
5 ( I think this should be -1 )
>>> 7 % 6
1
Why? Because the modulo operator is defined that way in Python.
The documentation states:
The modulo operator always yields a result with the same sign as its
second operand (or zero); [...]
And:
The function math.fmod() returns a result whose sign matches the
sign of the first argument instead, [...] Which approach is more
appropriate depends on the application.
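Concretely, Python's % is tied to floor division: a % b == a - b*(a // b). For -7 % 6, a // b = floor(-7/6) = -2, so -7 % 6 = -7 - 6*(-2) = 5, whereas math.fmod(-7, 6), which truncates toward zero, gives -1.0.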
You can look at the % operation in at least a couple of different ways. One important point of view is that m % n finds the element of Z[n] which is congruent to m, where Z[n] is an algebraic representation of the integers restricted to 0, 1, 2, ..., n-1, called the ring of integers modulo n. Note that all integers, positive, negative, and 0, are congruent to some element of 0, 1, 2, ..., n-1 in Z[n].
This ring (that is, this set plus certain operations on it) has many well-known and useful properties. For that reason, it's often advantageous to try to cast a problem in a form that leads to Z[n], where it may be easier to work. This, ultimately, is the reason the Python % was given its definition -- in the end, it has to do with operations in the ring of integers modulo n.
This article about modular arithmetic (in particular, the part about integers modulo n) could be a good starting point if you'd like to know more about this topic.
How can I calculate the sum of the reciprocals of the set-bit counts over a range of numbers [A, B]
(here A, B > 0 and A, B < 10^9)?
My approach:
Using a simple for loop from A to B, I counted the set bits of each number with __builtin_popcount (a built-in function for counting set bits in C/C++), then took the reciprocal and added it to the sum. This is an O(n) approach, but it takes too long for the larger constraints. How can I optimise it further? Is an O(log(n)) algorithm possible?
Let F(N, k) = |{m | m is an integer lying in [0, N] and m's binary representation has exactly k bits set}|. The answer you want is SUM{ (F(B,k) - F(A-1,k))/k | 1<=k<=MSB(B)}, where MSB = most significant bit.
You can compute F(N,k) recursively. Handle the boundaries of your recursion correctly. The actual recursion is
F(N, k) = F(N^(1<<MSB(N)), k-1) + F((1<<MSB(N))-1, k)
In words, you consider those numbers which have the same MSB as N and those which have MSB less than that of N and recurse.
The runtime is O(log(B)*log(B)).
EDIT : Illustrating the recursion :
N = 1101, in binary, k=2.
The set of numbers <= N, with MSB being the same as N are {1000, 1001, 1010, 1011, 1100, 1101}. Notice that they are in fact the same as this set 1000 + {000, 001, 010, 011, 100, 101}. In other words, they are all the numbers <= N^(1<<MSB(N)) = 1101 ^ 1000 = 101. Since you already count the MSB bit, the number of bits you need from the set {000, 001, 010, 011, 100, 101} is k-1. That explains the F(N^(1<<MSB(N)), k-1) term.
The set of numbers <=N, with MSB being less than N are {000, 001, 010, 011, 100, 101, 110, 111}. In other words, all the numbers <= (1<<MSB(N)) - 1 = 1000 - 1 = 111. So far, you haven't counted any set bits. So, you still need k bits from the numbers in the set {000, 001, 010, 011, 100, 101, 110, 111}. That is where the F((1<<MSB(N))-1, k) term comes from.
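For illustration, here is a C++ sketch of this idea (the names F, choose and reciprocalPopcountSum are mine, and __builtin_clzll is the GCC/Clang builtin). Instead of recursing on F((1<<MSB(N))-1, k), it uses the fact that the count of numbers below 2^MSB(N) with exactly k set bits is the binomial coefficient C(MSB(N), k), so there is a single chain of recursive calls:
// C(m, k): count of numbers in [0, 2^m - 1] with exactly k bits set.
long long choose(int m, int k) {
    if (k < 0 || k > m) return 0;
    long long r = 1;
    for (int i = 1; i <= k; ++i)
        r = r * (m - k + i) / i;    // exact integer division at every step
    return r;
}

// F(N, k): how many integers in [0, N] have exactly k bits set.
long long F(long long N, int k) {
    if (k == 0) return 1;           // only 0 has no bits set
    if (N <= 0) return 0;           // nothing else in [0, 0]
    int msb = 63 - __builtin_clzll((unsigned long long) N);
    // numbers with a smaller MSB contribute C(msb, k);
    // numbers sharing N's MSB need k-1 more bits among N's low bits
    return choose(msb, k) + F(N ^ (1LL << msb), k - 1);
}

// Sum of 1/popcount(m) over [A, B]; 30 bits suffice since B < 10^9 < 2^30.
double reciprocalPopcountSum(long long A, long long B) {
    double sum = 0.0;
    for (int k = 1; k <= 30; ++k)
        sum += (double) (F(B, k) - F(A - 1, k)) / k;
    return sum;
}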
Looking for a better algorithmic approach to my problem. Any insight into this is greatly appreciated.
I have an array of numbers, say
short arr[] = [16, 24, 24, 29];
// the binary representation of this would be [10000, 11000, 11000, 11101]
I need to add the bit in position 0 of every number, the bit in position 1 of every number, and so on, and store the results in an array, so my output array should look like this:
short addedBitPositions = [4, 3, 1, 0, 1];
// 4 1s in the leftmost position, 3 1s in the leftmost-1 position ... etc.
The solution that I can think of is this:
addedBitPosition[0] = (2 pow 4) & arr[0] + (2 pow 4) & arr[1] +
(2 pow 4) & arr[2] + (2 pow 4) & arr[3];
addedBitPosition[1] = (2 pow 3) & arr[0] + (2 pow 3) & arr[1] +
(2 pow 3) & arr[2] + (2 pow 3) & arr[3];
... so on
If the length of arr[] grows to M and the number of bits in each arr[i] grows to N, then the time complexity of this solution is O(M*N).
Can I do better? Thanks!
You may try masking each number with a mask of the form 0b...0100...0100...01 that has a 1 in every k-th bit position. Then add the masked numbers together; you will receive the sums of the bits in positions 0, k, 2k, ... Repeat this with the mask shifted left by 1, 2, ..., (k-1) to receive the sums for the other bit positions. Of course, some more bit-hackery is needed to extract the sums. Also, k must be chosen such that M < 2**k (M being the length of the array), so that a k-bit field cannot overflow; see the sketch at the end of this answer.
I'm not sure it is really worth it, but it might be for very long arrays and N > log₂ M.
EDIT
Actually, N > log₂ M is not a strict requirement. You may do this for "small enough" chunks of the whole array, then add together extracted values (this is O(M)). In bit-hacking the actual big-O is often overshadowed by the constant factor, so you'd need to experiment to pick the best k and the size of array chunk (assuming this technique gives any improvement over the straightforward summation you described, of course).
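For illustration only, here is my own sketch of the masking idea for 16-bit values with k = 4 (so the mask is 0x1111, each 4-bit field can count at most 15 numbers, and the array is processed in chunks of 15; the name bitPositionCounts and these parameter choices are assumptions, not part of the answer above):
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// counts[p] = how many input values have bit p set
// (p = 0 is the least significant bit, i.e. the reverse of the order used in the question).
std::vector<int> bitPositionCounts(const std::vector<uint16_t>& arr) {
    std::vector<int> counts(16, 0);
    const uint16_t mask = 0x1111;                             // a 1 in every 4th bit position
    for (std::size_t start = 0; start < arr.size(); start += 15) {
        std::size_t end = std::min(arr.size(), start + 15);   // 15 = 2^4 - 1, so no 4-bit field overflows
        for (int s = 0; s < 4; ++s) {                         // the 4 shifted copies of the mask
            uint16_t acc = 0;
            for (std::size_t i = start; i < end; ++i)
                acc += (arr[i] >> s) & mask;                  // field f accumulates bit position 4*f + s
            for (int f = 0; f < 4; ++f)                       // unpack the four 4-bit sums
                counts[4 * f + s] += (acc >> (4 * f)) & 0xF;
        }
    }
    return counts;
}
For the example array [16, 24, 24, 29] this produces counts starting [1, 0, 1, 3, 4, 0, ...], which is the question's [4, 3, 1, 0, 1] read from the least significant bit upward.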