Why is the sum of bitwise AND and bitwise XOR equal to bitwise OR?

Is there a reason why this happens?
#include <stdio.h>

int main(void) {
    int i, j; // Takes i as 0 with short
    printf("Enter two integers: ");
    scanf("%d %d", &i, &j);
    printf("\n%d & %d = %d\n", i, j, (i & j));
    printf("\n%d ^ %d = %d\n", i, j, (i ^ j));
    printf("\n%d | %d = %d\n", i, j, (i | j));
    if ((i | j) == (i & j) + (i ^ j))
        printf("\nYES\n");
    else
        printf("\nNO\n");
    return 0;
}

First note that i & j and i ^ j are disjoint: if a bit is set in one of them, the corresponding bit is necessarily reset in the other. That's a consequence of the truth tables of AND and XOR. AND has only a single row with a 1 in it, and XOR has a 0 in that row, so they're never simultaneously both 1.
That means we can forget about the special complications of addition (there is no carry, which makes addition purely bitwise: equivalent to both OR and XOR), and analyze this expression as if we were dealing with just booleans.
One way to look at it is that i & j exactly compensates for the case that i ^ j does not cover. If you write out the truth tables (only 1 bit shown):

i  j  i&j  i^j  (i&j)|(i^j)
0  0   0    0        0
0  1   0    1        1
1  0   0    1        1
1  1   1    0        1
The last column has values identical to i | j.
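To check this beyond a single bit, here is a minimal brute-force sketch (a standalone program of my own, not from the question) that verifies both the disjointness and the identity over all 8-bit pairs:

#include <cassert>

int main() {
    for (int i = 0; i < 256; i++) {
        for (int j = 0; j < 256; j++) {
            // the AND and XOR parts never share a set bit
            assert(((i & j) & (i ^ j)) == 0);
            // so adding them cannot carry, and the sum equals the OR
            assert((i & j) + (i ^ j) == (i | j));
        }
    }
    return 0;
}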

Using the logic-gate truth tables, we can easily see how this works:
+---+---+------------+-----------+------------+
| A | B | AND output | OR output | XOR output |
+---+---+------------+-----------+------------+
| 0 | 0 |     0      |     0     |     0      |
| 0 | 1 |     0      |     1     |     1      |
| 1 | 0 |     0      |     1     |     1      |
| 1 | 1 |     1      |     1     |     0      |
+---+---+------------+-----------+------------+
For instance, let i = 5, j = 6. In binary we get i = 00000101, j = 00000110.
(i | j) = (00000101 | 00000110) = 00000111
(i & j) = (00000101 & 00000110) = 00000100
(i ^ j) = (00000101 ^ 00000110) = 00000011
(i & j) + (i ^ j) = 00000100 + 00000011 = 00000111 = (i | j)
Therefore, (i | j) = (i & j) + (i ^ j).


Maximize the summation Operation

Given an array of n numbers and integers k and m, we have to select a subsequence of length k of the array. Given the function s = sum from i=1 to i=k of A(i)*(i mod m), we have to maximize s.
Constraints:
n < 10000
k < 1000
|A(i)| < 10000000
m < 10000000
Suppose the array is 4 9 8 2 6 7 4, k is 4, and m is 3. For this case the answer is 32 (9*1 + 8*2 + 2*0 + 7*1).
My code:
#include <bits/stdc++.h>
using namespace std;
#define ll long long int
#define g_long long long
#define maxx(a, b) a > b ? a : b

int main()
{
    ll n, k, m, i, j;
    cin >> n >> k >> m;
    ll arr[n + 1] = { 0 };
    for (i = 0; i < n; i++)
    {
        cin >> arr[i];
    }
    ll data[8][8] = { 0 };
    for (i = 1; i <= k; ++i)
    {
        for (j = 1; j <= 7; ++j)
        {
            ll ans = maxx((data[i - 1][j - 1] + (arr[j - 1] * (i % m))),
                          (data[i][j - 1]));
            data[i][j] = ans;
        }
    }
    cout << data[k][n];
}
My approach is to first generate a subsequence of length k, then keep updating the maximum value.
This code passes some of the test cases, but others give a wrong answer.
Can anyone tell me what I am doing wrong in my code, or suggest a better approach for this question?
We form a 2-D DP table based on the following observation:
we take the maximum of two values, dp[i-1][j-1] + (arr[j-1] * (i % m)) and dp[i][j-1],
where arr is the array, i.e. [4 9 8 2 6 7 4], and dp is the 2-dimensional DP table.
The DP table has one row per value of k (from 0 to k) and one column per element of the array.
DP ||  0 |  4 |  9 |  8 |  2 |  6 |  7 |  4
 0 ||  0 |  0 |  0 |  0 |  0 |  0 |  0 |  0
 1 ||  0 |  4 |  9 |  9 |  9 |  9 |  9 |  9
 2 ||  0 |  0 | 22 | 25 | 25 | 25 | 25 | 25
 3 ||  0 |  0 |  0 | 22 | 25 | 25 | 25 | 25
 4 ||  0 |  0 |  0 |  0 | 24 | 31 | 32 | 32
The following Python code passes all the test cases, as discussed in the comments:
n = 7
k = 4
m = 3
arr = [49999, 4999, 4999, 4999, 99999, 99999, 49999]
# Initialising 2-D DP with 0 of size (k+1)*(n+1)
# Extra rows for handling edge cases
dp = [[0 for i in range(n + 1)] for j in range(k + 1)]
for i in range(1, k + 1):
    for j in range(i, n + 1):
        ans = max(dp[i - 1][j - 1] + (arr[j - 1] * (i % m)), dp[i][j - 1])
        dp[i][j] = ans
# Maximum element at the bottom-right-most of 2-D DP
print(dp[k][n])
Thanks to @MBo for sharing the top-down approach:

import functools

@functools.lru_cache(maxsize=None)
def mx(i, k):
    # l (the array) and m are globals here
    if i < 0 or k == 0:
        return 0
    return max(mx(i - 1, k), mx(i - 1, k - 1) + l[i] * (k % m))
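For reference, here is a sketch of the same bottom-up DP ported to C++, with the table sized (k+1) x (n+1) instead of the question's hard-coded data[8][8]; it mirrors the Python solution above and inherits its assumptions:

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main()
{
    long long n, k, m;
    cin >> n >> k >> m;
    vector<long long> arr(n);
    for (auto &x : arr)
        cin >> x;
    // dp[i][j]: best score picking a length-i subsequence from the first j elements
    vector<vector<long long>> dp(k + 1, vector<long long>(n + 1, 0));
    for (long long i = 1; i <= k; ++i)
        for (long long j = i; j <= n; ++j)
            dp[i][j] = max(dp[i][j - 1],
                           dp[i - 1][j - 1] + arr[j - 1] * (i % m));
    cout << dp[k][n] << endl;
}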

A many-to-one mapping in the natural domain using discrete input variables?

I would like to find a mapping f: X --> N, with multiple discrete natural variables X of varying dimension, where f produces a unique number between 0 and the product of all dimensions (exclusive). For example, assume X = {a, b, c}, with dimensions |a| = 2, |b| = 3, |c| = 2. f should produce 0 to 11 (2*3*2 = 12 values).
a b c | f(X)
0 0 0 | 0
0 0 1 | 1
0 1 0 | 2
0 1 1 | 3
0 2 0 | 4
0 2 1 | 5
1 0 0 | 6
1 0 1 | 7
1 1 0 | 8
1 1 1 | 9
1 2 0 | 10
1 2 1 | 11
This is easy when all dimensions are equal. Assume binary for example:
f(a=1,b=0,c=1) = 1*2^2 + 0*2^1 + 1*2^0 = 5
Using this naively with varying dimensions, we would get overlapping values:
f(a=0,b=1,c=1) = 0*2^2 + 1*3^1 + 1*2^0 = 4
f(a=1,b=0,c=0) = 1*2^2 + 0*3^1 + 0*2^0 = 4
A computationally fast function is preferred as I intend to use/implement it in C++. Any help is appreciated!
OK, the most important part here is the math and algorithmics. You have variable dimensions of size (from lowest order to highest) d0, d1, ..., dn. A tuple (x0, x1, ..., xn) with xi < di will represent the following number: x0 + d0*x1 + d0*d1*x2 + ... + (d0*d1*...*d(n-1))*xn
In pseudo-code, I would write:
result = 0
loop for i = n to 0 step -1
    result = result * d[i] + x[i]
To implement it in C++, my advice would be to create a class whose constructor takes the number of dimensions and the dimensions themselves (or simply a vector<int> containing the dimensions), and a method that accepts an array or a vector of the same size containing the values. Optionally, you could check that no input value is greater than or equal to its dimension.
A possible C++ implementation could be:
class F {
vector<int> dims;
public:
F(vector<int> d) : dims(d) {}
int to_int(vector<int> x) {
if (x.size() != dims.size()) {
throw std::invalid_argument("Wrong size");
}
int result = 0;
for (int i = dims.size() - 1; i >= 0; i--) {
if (x[i] >= dims[i]) {
throw std::invalid_argument("Value >= dimension");
}
result = result * dims[i] + x[i];
}
return result;
}
};
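A quick usage sketch, assuming the class F above is in scope. Note that this convention treats x0 as the lowest-order digit, so the enumeration order differs from the question's table, but the mapping is still collision-free:

#include <iostream>

int main() {
    F f({2, 3, 2});  // |a| = 2, |b| = 3, |c| = 2
    // the two tuples that collided under the naive scheme now map to distinct codes
    std::cout << f.to_int({0, 1, 1}) << '\n';  // 0 + 2*1 + 6*1 = 8
    std::cout << f.to_int({1, 0, 0}) << '\n';  // 1 + 2*0 + 6*0 = 1
    return 0;
}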

What is meant by dict?

I came across a program like the one below:
int firstMissingPositive(vector<int> &A) {
    vector<bool> dict(A.size() + 1, false);
    for (int i = 0; i < A.size(); i++) {
        if (A[i] > 0 && A[i] < dict.size())
            dict[A[i]] = true;
    }
    if (A.size() == 1 && A[0] != 1)
        return 1;
    else if (A.size() == 1 && A[0] == 1)
        return 2;
    int i = 0;
    for (i = 1; i < dict.size(); i++) {
        if (dict[i] == false)
            return i;
    }
    return i;
}
In this program, I could not understand what is meant by the following line:
vector<bool> dict(A.size()+1, false);
What is dict, and what does this statement do?
It's simply a variable.
The definition of the variable calls a specific constructor of the vector to initialize it with a specific size, and initialize all elements to a specific value.
It's equivalent to
vector<bool> dict;
dict.resize(A.size()+1,false);
See e.g. this std::vector constructor reference for more information about available constructors.
It is a definition of a variable dict of type vector<bool>. And please Google it first.
You are declaring a container of bools (values which store only 0/1), with one element more than the int vector A (A.size() + 1), and all these elements set to false -> 0.
It calls this constructor:
vector(size_type n, const value_type& val,
       const allocator_type& alloc = allocator_type());
Example:
This is vector A:
0 1 2 3 4 <- Indexes
+---+---+---+---+---+
| 0 | 1 | 2 | 3 | 4 | (int)
+---+---+---+---+---+
Its size is 5, so dict would be declared with size 6 (A.size() + 1), initialized to 0's.
  0   1   2   3   4   5   <- Indexes
+---+---+---+---+---+---+
| 0 | 0 | 0 | 0 | 0 | 0 | (bool)
+---+---+---+---+---+---+
In this case it's used to flag indexes into the first vector.
For example, this pattern is often used for the Sieve of Eratosthenes: with each iteration you set 1 at the indexes that hold primes. For numbers 0-4 it would be:
0 1 2 3 4
+---+---+---+---+---+
| 0 | 0 | 1 | 1 | 0 |
+---+---+---+---+---+
Then you know at which indexes in vector A the primes are:
for (int i = 0; i < A.size(); i++)
{
    if (dict[i] == true)
    {
        std::cout << "Prime number: " << A[i] << std::endl;
    }
}
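As a minimal standalone sketch of that sieve pattern (my own example values, not part of the question's code):

#include <iostream>
#include <vector>

int main() {
    const int n = 30;
    // same constructor pattern as dict: n+1 elements, all initialized
    std::vector<bool> is_prime(n + 1, true);
    is_prime[0] = is_prime[1] = false;
    for (int p = 2; p * p <= n; ++p)
        if (is_prime[p])
            for (int q = p * p; q <= n; q += p)
                is_prime[q] = false;  // mark multiples as composite
    for (int i = 2; i <= n; ++i)
        if (is_prime[i])
            std::cout << i << ' ';
    std::cout << std::endl;
}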

Int into Array C++ help please

I am trying to separate an integer into an array. I have been using modulo 10 and then dividing by 10, but I believe that will only work for numbers of 6 digits or less. I may be wrong, but it is not working for me. This is what I have:
for (int i = 0; i <= 8; i++) {
    intAr[i] = intVal % 10;
    intVal /= 10;
}
It is not working for me, and any help would be lovely.
The problem, I guess, is that the number in the array is reversed. So try this:
for (int i = 8; i >= 0; i--)
{
    intAr[i] = intVal % 10;
    intVal /= 10;
}
This will work and store the number correctly in the array.
If you are expecting the digits to be stored in your array in reading order (most significant digit first), you'll need to reverse the way you store the values:
for (int i = 0; i < 9; i++)
{
    intAr[9 - i - 1] = intVal % 10;
    intVal /= 10;
}
This will store your number (103000648) like this
|-0-|-1-|-2-|-3-|-4-|-5-|-6-|-7-|-8-|
| 1 | 0 | 3 | 0 | 0 | 0 | 6 | 4 | 8 |
instead of
|-0-|-1-|-2-|-3-|-4-|-5-|-6-|-7-|-8-|
| 8 | 4 | 6 | 0 | 0 | 0 | 3 | 0 | 1 |
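If you would rather not hard-code 9 digits, here is a small sketch of a hypothetical helper, to_digits, that collects however many digits there are; it assumes a non-negative value:

#include <algorithm>
#include <vector>

std::vector<int> to_digits(long long intVal) {
    std::vector<int> digits;
    do {
        digits.push_back(static_cast<int>(intVal % 10));  // least significant first
        intVal /= 10;
    } while (intVal > 0);
    std::reverse(digits.begin(), digits.end());           // most significant first
    return digits;
}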

setting a vector to a matrix algorithm help in C++

I have an array X that has M*N elements, and I'm trying to create a matrix A of size M x N with this same data. I'm using GSL for the matrix, and X is declared as an array. I'm having trouble, and I keep getting overlap in the matrix.
Here is an example of what I am trying to do:
Vector X[4*2]
1,2,3,4,5,6,7,8
Matrix A 4X2
1, 2
3, 4
5, 6
7, 8
// here's one of my many failed attempts as an example
// creation of array X here
X[n*m] = someCbasedformulafromtheweb(n, m);
// gsl matrix allocation for matrix A N x M
gsl_matrix *A = gsl_matrix_alloc(n, m);
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        // setting the x[i*j] entry to gsl_matrix A at positions i, j
        gsl_matrix_set(A, i, j, x[i*j]);
    }
}
I don't have gsl to play with, but wouldn't this work?
for (int i = 0; i < 4; ++i)
    for (int j = 0; j < 2; ++j)
        X[2*i + j] = gsl_matrix_get(A, i, j);
Your problem is at this line:
gsl_matrix_set (A,i,j, x[i*j]);
Here is a table of what x[i*j] evaluates to:
i | j | x[i*j]
0 | 0 | x[0]
0 | 1 | x[0]
1 | 0 | x[0]
1 | 1 | x[1]
2 | 0 | x[0]
2 | 1 | x[2]
3 | 0 | x[0]
3 | 1 | x[3]
Instead you need to use:
gsl_matrix_set (A,i,j, x[2*i+j]);
i | j | x[2*i+j]
0 | 0 | x[0]
0 | 1 | x[1]
1 | 0 | x[2]
1 | 1 | x[3]
2 | 0 | x[4]
2 | 1 | x[5]
3 | 0 | x[6]
3 | 1 | x[7]
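Putting it together for the original question, a minimal sketch of the row-major fill, assuming the source array holds doubles (gsl_matrix stores doubles):

#include <gsl/gsl_matrix.h>

// copy a row-major n x m array x into an n x m gsl_matrix A
void fill_matrix(gsl_matrix *A, const double x[], size_t n, size_t m)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < m; j++)
            gsl_matrix_set(A, i, j, x[i*m + j]);
}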