Maximize the summation operation - C++

Given an array of n numbers and integers k and m, we have to select a subsequence of length k of the array and maximize the function s = sum from i=1 to i=k of A(i)*(i mod m), where A(i) is the i-th element of the chosen subsequence.
Constraints
n<10000
k<1000
|A(i)| < 10000000
m<10000000
Suppose the array is 4 9 8 2 6 7 4, k is 4 and m is 3. For this case the answer is 32, obtained by the subsequence (9, 8, 2, 7): 9*(1 mod 3) + 8*(2 mod 3) + 2*(3 mod 3) + 7*(4 mod 3) = 9 + 16 + 0 + 7 = 32.
My code:
#include <bits/stdc++.h>
using namespace std;
#define ll long long int
#define g_long long long
#define maxx(a, b) a > b ? a : b

int main()
{
    ll n, k, m, i, j;
    cin >> n >> k >> m;
    ll arr[n + 1] = { 0 };
    for (i = 0; i < n; i++)
    {
        cin >> arr[i];
    }
    ll data[8][8] = { 0 };
    for (i = 1; i <= k; ++i)
    {
        for (j = 1; j <= 7; ++j)
        {
            ll ans = maxx((data[i - 1][j - 1] + (arr[j - 1] * (i % m))),
                          (data[i][j - 1]));
            data[i][j] = ans;
        }
    }
    cout << data[k][n];
}
My approach is to first generate the subsequences of length k and then keep updating the maximum value.
This code passes some of the test cases, but others get a wrong answer.
Can anyone tell me what I am doing wrong in my code, or suggest a better approach for this question?

The 2-D DP table is formed from the following observation:
we have to take the maximum of two values, dp[i-1][j-1] + arr[j-1]*(i % m) and dp[i][j-1],
where arr is the array, i.e. [4, 9, 8, 2, 6, 7, 4], and dp is the 2-dimensional DP table.
The rows of the DP table correspond to the values of k (from 0 to k) and the columns to the elements of the array.
DP || 0 | 4 |  9 |  8 |  2 |  6 |  7 |  4
 0 || 0 | 0 |  0 |  0 |  0 |  0 |  0 |  0
 1 || 0 | 4 |  9 |  9 |  9 |  9 |  9 |  9
 2 || 0 | 0 | 22 | 25 | 25 | 25 | 25 | 25
 3 || 0 | 0 |  0 | 22 | 25 | 25 | 25 | 25
 4 || 0 | 0 |  0 |  0 | 24 | 31 | 32 | 32
The following Python code passes all the test cases, as discussed in the comments:
n = 7
k = 4
m = 3
arr = [49999, 4999, 4999, 4999, 99999, 99999, 49999]
# Initialise the 2-D DP table with 0s, of size (k+1) x (n+1);
# the extra row and column handle the edge cases
dp = [[0 for i in range(n + 1)] for j in range(k + 1)]
for i in range(1, k + 1):
    for j in range(i, n + 1):
        dp[i][j] = max(dp[i - 1][j - 1] + arr[j - 1] * (i % m), dp[i][j - 1])
# The maximum is in the bottom-right cell of the 2-D DP table
print(dp[k][n])
Thanks to @MBo for sharing the top-down approach:
import functools

@functools.lru_cache(maxsize=None)
def mx(i, k):
    # l (the input array) and m are globals here
    if i < 0 or k == 0:
        return 0
    else:
        return max(mx(i - 1, k), mx(i - 1, k - 1) + l[i] * (k % m))
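Here mx(i, k) is the best value obtainable from the prefix l[0..i] when k elements are still to be picked, the rightmost of which receives weight k % m. For the sample array, l = [4, 9, 8, 2, 6, 7, 4] with m = 3, calling mx(len(l) - 1, 4) should return 32, matching the expected answer.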

Related

Why is the sum of bitwise AND and bitwise XOR equal to bitwise OR?

Is there a reason why this happens?
#include <stdio.h>

int main(void) {
    int i, j;
    printf("Enter two integers: ");
    scanf("%d %d", &i, &j);
    printf("\n%d & %d = %d\n", i, j, (i & j));
    printf("\n%d ^ %d = %d\n", i, j, (i ^ j));
    printf("\n%d | %d = %d\n", i, j, (i | j));
    if ((i | j) == (i & j) + (i ^ j))
        printf("\nYES\n");
    else
        printf("\nNO\n");
    return 0;
}
First note that i & j and i ^ j are disjoint: if a bit is set in one of them, the corresponding bit is necessarily reset in the other. That's a consequence of the truth tables of AND and XOR. AND has only a single row with a 1 in it, and XOR has a 0 in that row, so they're never simultaneously both 1.
That means we can forget about the special complications of addition (there is no carry, which makes addition purely bitwise: equivalent to both OR and XOR), and analyze this expression as if we were dealing with just booleans.
One way to look at it is that i & j exactly compensates for the case that i ^ j does not cover. If you write out the truth tables (only 1 bit shown):
i j | i&j i^j | (i&j)|(i^j)
0 0 |  0   0  |     0
0 1 |  0   1  |     1
1 0 |  0   1  |     1
1 1 |  1   0  |     1
The last column has values identical to i | j.
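As a quick sanity check, here is a small self-contained C++ sketch (mine, not part of the original answer) that verifies the identity exhaustively for all pairs of 8-bit values:

#include <cassert>
#include <iostream>

int main() {
    // (i | j) == (i & j) + (i ^ j) must hold for every pair,
    // because i & j and i ^ j never share a set bit (no carries).
    for (int i = 0; i < 256; ++i)
        for (int j = 0; j < 256; ++j)
            assert((i | j) == ((i & j) + (i ^ j)));
    std::cout << "Identity holds for all 8-bit pairs\n";
    return 0;
}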
Using the logic gate truth tables, we can easily see how it works:
+---+---+------------+-----------+------------+
| A | B | AND output | OR output | XOR output |
+---+---+------------+-----------+------------+
| 0 | 0 |     0      |     0     |     0      |
| 0 | 1 |     0      |     1     |     1      |
| 1 | 0 |     0      |     1     |     1      |
| 1 | 1 |     1      |     1     |     0      |
+---+---+------------+-----------+------------+
For instance, let i = 5, j = 6. In binary, i = 00000101 and j = 00000110.
(i | j) = (00000101 | 00000110) = 00000111 = 7
(i & j) = (00000101 & 00000110) = 00000100 = 4
(i ^ j) = (00000101 ^ 00000110) = 00000011 = 3
(i & j) + (i ^ j) = 00000100 + 00000011 = 00000111 = (i | j)
Therefore, (i | j) = (i & j) + (i ^ j).

Generate stepping numbers upto a given number N

A number is called a stepping number if all adjacent digits in the number have an absolute difference of 1.
Examples of stepping numbers: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 21, 23, ...
I have to generate the stepping numbers up to a given number N, in increasing order.
I used the simple method of going over all the numbers up to N and checking whether each one is a stepping number. My teacher told me this is brute force and will take too much time, so now I have to optimize my approach.
Any suggestions?
Stepping numbers can be generated using a Breadth First Search (BFS)-like approach.
Example: find all the stepping numbers from 0 to N, say N = 21.
-> 0 is a stepping number and it is in the range, so display it.
-> 1 is a stepping number; find the neighbours of 1, i.e. 10 and 12,
   and push them into the queue.
   How do we get 10 and 12? If the current number U is 1, its last digit is also 1, so:
   V = 1*10 + 0 = 10 (appending lastDigit - 1)
   V = 1*10 + 2 = 12 (appending lastDigit + 1)
   Doing the same for 10 and 12 gives 101, 121 and 123, but these numbers
   are out of range. Any number grown from 10 or 12 will be greater
   than 21, so there is no need to explore their neighbours.
-> 2 is a stepping number; find the neighbours of 2, i.e. 21 and 23
   (23 is out of range).
-> Continue generating stepping numbers until N.
The remaining single-digit stepping numbers are 3, 4, 5, 6, 7, 8, 9.
C++ code to generate stepping numbers in a given range:
#include <bits/stdc++.h>
using namespace std;

// Prints all stepping numbers in the range [n, m]
void bfs(int n, int m)
{
    // The queue will contain all the stepping numbers
    queue<int> q;
    for (int i = 0; i <= 9; i++)
        q.push(i);

    while (!q.empty())
    {
        // Get the front element and pop it from the queue
        int stepNum = q.front();
        q.pop();

        // If the stepping number is in the range [n, m], display it
        if (stepNum <= m && stepNum >= n)
            cout << stepNum << " ";

        // If the stepping number is 0 or greater than m,
        // there is no need to explore its neighbours
        if (stepNum == 0 || stepNum > m)
            continue;

        // Get the last digit of the current stepping number
        int lastDigit = stepNum % 10;

        // The digit to be appended is either
        // lastDigit + 1 or lastDigit - 1
        int stepNumA = stepNum * 10 + (lastDigit - 1);
        int stepNumB = stepNum * 10 + (lastDigit + 1);

        // If lastDigit is 0, the only digit that can
        // follow it in a stepping number is 1
        if (lastDigit == 0)
            q.push(stepNumB);
        // If lastDigit is 9, the only digit that can
        // follow it in a stepping number is 8
        else if (lastDigit == 9)
            q.push(stepNumA);
        else
        {
            q.push(stepNumA);
            q.push(stepNumB);
        }
    }
}

// Driver program to test the above function
int main()
{
    int n = 0, m = 99;
    // Display the stepping numbers in the range [n, m]
    bfs(n, m);
    return 0;
}
Visit this link.
The mentioned link has both the BFS and the DFS approach.
It will provide you with an explanation and code in different languages for the above problem.
We can also use simple rules to move from one stepping number to the next and generate them in order, which avoids storing "parents".
Cf. the corresponding OEIS sequence.
#include <iostream>

int next_stepping(int n) {
    int left = n / 10;
    if (left == 0)
        return n + 1;                  // single digit: 6 => 7
    int last = n % 10;
    int leftlast = left % 10;
    if (leftlast - last == 1 && last < 8)
        return n + 2;                  // e.g. 32 => 34
    int nxt = next_stepping(left);     // advance the prefix
    int nxtlast = nxt % 10;
    if (nxtlast == 0)
        return nxt * 10 + 1;           // to get e.g. 101
    return nxt * 10 + nxtlast - 1;     // to get e.g. 121
}

int main()
{
    int t = 0;
    for (int i = 1; i < 126; i++, t = next_stepping(t)) {
        std::cout << t << "\t";
        if (i % 10 == 0)
            std::cout << "\n";
    }
}
0 1 2 3 4 5 6 7 8 9
10 12 21 23 32 34 43 45 54 56
65 67 76 78 87 89 98 101 121 123
210 212 232 234 321 323 343 345 432 434
454 456 543 545 565 567 654 656 676 678
765 767 787 789 876 878 898 987 989 1010
1012 1210 1212 1232 1234 2101 2121 2123 2321 2323
2343 2345 3210 3212 3232 3234 3432 3434 3454 3456
4321 4323 4343 4345 4543 4545 4565 4567 5432 5434
5454 5456 5654 5656 5676 5678 6543 6545 6565 6567
6765 6767 6787 6789 7654 7656 7676 7678 7876 7878
7898 8765 8767 8787 8789 8987 8989 9876 9878 9898
10101 10121 10123 12101 12121
def steppingNumbers(n, m):
    # Counts the stepping numbers in the range [n, m] by depth-first
    # search from each single-digit seed 1..9 (0 is handled separately).
    def _solve(v):
        if v > m:
            return 0
        ans = 1 if n <= v <= m else 0
        last = v % 10
        if last > 0:
            ans += _solve(v * 10 + last - 1)
        if last < 9:
            ans += _solve(v * 10 + last + 1)
        return ans

    ans = 0 if n > 0 else 1  # count 0 itself when it lies in the range
    for i in range(1, 10):
        ans += _solve(i)
    return ans
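Note that this version counts the stepping numbers in [n, m] rather than printing them; for example, steppingNumbers(0, 21) returns 13, corresponding to 0-9, 10, 12 and 21.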

Bit order in struct is not what I would have expected

I have a framework which uses 16-bit floats, and I wanted to separate its components to then use them for 32-bit floats. In my first approach I used bit shifts and similar operations, and while that worked, it was wildly chaotic to read.
I then wanted to use a struct with custom bit sizes instead, and a union to write to that struct.
The code to reproduce the issue:
#include <cstdio>
#include <cstdint>

union float16_and_int16
{
    struct
    {
        uint16_t Mantissa : 10;
        uint16_t Exponent : 5;
        uint16_t Sign : 1;
    } Components;
    uint16_t bitMask;
};

int main()
{
    uint16_t input = 0x153F;
    float16_and_int16 result;
    result.bitMask = input;
    printf("Mantissa: %#010x\n", result.Components.Mantissa);
    printf("Exponent: %#010x\n", result.Components.Exponent);
    printf("Sign:     %#010x\n", result.Components.Sign);
    return 0;
}
In the example I would expect my Mantissa to be 0x00000054, the Exponent to be 0x0000001F, and the Sign 0x00000001.
Instead I get Mantissa: 0x0000013f, Exponent: 0x00000005, Sign: 0x00000000.
That means that from my bit mask the Sign was taken first (one bit), the next 5 bits went to the Exponent, and then 10 bits to the Mantissa, so the order is the inverse of what I wanted. Why is that happening?
The worst part is that a different compiler could give the expected order. The standard has never specified the implementation details of bitfields, in particular their layout order. The rationale, as usual, is that it is an implementation detail and that programmers should not rely nor depend on it.
The downside is that it is not possible to use bitfields in cross-language programs, and that programmers cannot use bitfields for processing data with well-known bit layouts (for example in network protocol headers), because it is too complex to be sure how the implementation will process them.
For that reason I have always thought it is an unusable feature, and I only use bitmasks on unsigned types instead of bitfields. But that last part is no more than my own opinion...
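As an illustration of that bitmask style, here is a minimal self-contained sketch (mine, not the answerer's code) that extracts the three half-float fields with explicit shifts and masks, so the bit layout is spelled out in the code instead of being left to the compiler:

#include <cstdint>
#include <cstdio>

int main() {
    uint16_t input = 0x153F;
    unsigned mantissa = input & 0x03FFu;        // low 10 bits  [9:0]
    unsigned exponent = (input >> 10) & 0x1Fu;  // next 5 bits  [14:10]
    unsigned sign     = (input >> 15) & 0x1u;   // top bit      [15]
    std::printf("Mantissa: %#x, Exponent: %#x, Sign: %#x\n",
                mantissa, exponent, sign);
    return 0;
}

This prints Mantissa: 0x13f, Exponent: 0x5, Sign: 0, matching the bitfield output above, and it behaves the same with every compiler.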
I would say your input is incorrect, for this compiler anyway. This is what the float16_and_int16 order looks like.
sign exponent mantissa
[15] [14:10] [9:0]
or
SGN | E X P O N E N T| M A N T I S S A |
15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
if input = 0x153F then bitMask ==
SGN | E X P O N E N T| M A N T I S S A |
15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
 0 |  0 |  0 |  1 |  0 |  1 |  0 |  1 |  0 |  0 |  1 |  1 |  1 |  1 |  1 |  1 |
so
MANTISSA == 0100111111 (0x13F)
EXPONENT == 00101 (0x5)
SIGN == 0 (0x0)
If you want the mantissa to be 0x54, the exponent 0x1f and the sign 0x1, you need
SGN | E X P O N E N T| M A N T I S S A |
15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
 1 |  1 |  1 |  1 |  1 |  1 |  0 |  0 |  0 |  1 |  0 |  1 |  0 |  1 |  0 |  0 |
or
input = 0xFC54

A many-to-one mapping in the natural domain using discrete input variables?

I would like to find a mapping f: X --> N, where X consists of multiple discrete natural variables of varying dimension, and f produces a unique number from 0 up to (but not including) the product of all dimensions. For example, assume X = {a, b, c} with dimensions |a| = 2, |b| = 3, |c| = 2. Then f should produce the values 0 to 11 (2*3*2 = 12 values).
a b c | f(X)
0 0 0 | 0
0 0 1 | 1
0 1 0 | 2
0 1 1 | 3
0 2 0 | 4
0 2 1 | 5
1 0 0 | 6
1 0 1 | 7
1 1 0 | 8
1 1 1 | 9
1 2 0 | 10
1 2 1 | 11
This is easy when all dimensions are equal. Assume binary, for example:
f(a=1,b=0,c=1) = 1*2^2 + 0*2^1 + 1*2^0 = 5
Using this naively with varying dimensions we would get overlapping values:
f(a=0,b=1,c=1) = 0*2^2 + 1*3^1 + 1*2^0 = 4
f(a=1,b=0,c=0) = 1*2^2 + 0*3^1 + 0*2^0 = 4
A computationally fast function is preferred as I intend to use/implement it in C++. Any help is appreciated!
OK, the most important part here is the math and the algorithmics. You have variable dimensions of sizes (from lowest order to highest) d0, d1, ..., dn. A tuple (x0, x1, ..., xn) with xi < di represents the following number: x0 + d0*x1 + d0*d1*x2 + ... + (d0*d1*...*d(n-1))*xn. This is a mixed-radix positional number system.
In pseudo-code, I would write:
result = 0
loop for i = n to 0 step -1
    result = result * d[i] + x[i]
To implement it in C++, my advice would be to create a class whose constructor takes the dimensions themselves (or simply a vector<int> containing them), plus a method that accepts an array or a vector of the same size containing the values. Optionally, you could check that no input value is greater than or equal to its dimension.
A possible C++ implementation could be:
#include <vector>
#include <stdexcept>
using std::vector;

class F {
    vector<int> dims;
public:
    F(vector<int> d) : dims(d) {}

    int to_int(vector<int> x) {
        if (x.size() != dims.size()) {
            throw std::invalid_argument("Wrong size");
        }
        int result = 0;
        // Horner's scheme over the mixed radices
        for (int i = dims.size() - 1; i >= 0; i--) {
            if (x[i] >= dims[i]) {
                throw std::invalid_argument("Value >= dimension");
            }
            result = result * dims[i] + x[i];
        }
        return result;
    }
};
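Assuming the class F above, the example table from the question can be reproduced by listing the dimensions least significant first, i.e. in the order (c, b, a):

F f({2, 3, 2});                // |c| = 2, |b| = 3, |a| = 2
int v = f.to_int({1, 2, 1});   // c = 1, b = 2, a = 1  ->  v == 11, as in the table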

Direct formula for summing XOR

I have to XOR the numbers from 1 to N. Does there exist a direct formula for it?
For example, if N = 6 then 1^2^3^4^5^6 = 7. I want to do it without using any loop, so I need an O(1) formula (if one exists).
Your formula is N & (N % 2 ? 0 : ~0) | ( ((N & 2)>>1) ^ (N & 1) ):
#include <iostream>

int main()
{
    int S = 0;
    for (int N = 0; N < 50; ++N) {
        S = (S ^ N);
        int check = (N & (N % 2 ? 0 : ~0)) | (((N & 2) >> 1) ^ (N & 1));
        std::cout << "N = " << N << ": " << S << ", " << check << std::endl;
        if (check != S) throw;  // hard failure if the formula ever disagrees
    }
    return 0;
}
Output:
N = 0: 0, 0 N = 1: 1, 1 N = 2: 3, 3
N = 3: 0, 0 N = 4: 4, 4 N = 5: 1, 1
N = 6: 7, 7 N = 7: 0, 0 N = 8: 8, 8
N = 9: 1, 1 N = 10: 11, 11 N = 11: 0, 0
N = 12: 12, 12 N = 13: 1, 1 N = 14: 15, 15
N = 15: 0, 0 N = 16: 16, 16 N = 17: 1, 1
N = 18: 19, 19 N = 19: 0, 0 N = 20: 20, 20
N = 21: 1, 1 N = 22: 23, 23 N = 23: 0, 0
N = 24: 24, 24 N = 25: 1, 1 N = 26: 27, 27
N = 27: 0, 0 N = 28: 28, 28 N = 29: 1, 1
N = 30: 31, 31 N = 31: 0, 0 N = 32: 32, 32
N = 33: 1, 1 N = 34: 35, 35 N = 35: 0, 0
N = 36: 36, 36 N = 37: 1, 1 N = 38: 39, 39
N = 39: 0, 0 N = 40: 40, 40 N = 41: 1, 1
N = 42: 43, 43 N = 43: 0, 0 N = 44: 44, 44
N = 45: 1, 1 N = 46: 47, 47 N = 47: 0, 0
N = 48: 48, 48 N = 49: 1, 1 N = 50: 51, 51
Explanation:
The low bit of the result is the XOR of the low bit of N and the next bit of N.
For each bit except the low bit, the following holds:
if N is odd, that bit is 0;
if N is even, that bit equals the corresponding bit of N.
Thus for odd N the result is always 0 or 1.
Edit:
GSerg has posted a formula without loops, but deleted it for some reason (it is undeleted now). The formula is perfectly valid (apart from a little mistake). Here is the C++ version:
if (n % 2 == 1) {
    result = (n % 4 == 1) ? 1 : 0;
} else {
    result = (n % 4 == 0) ? n : n + 1;
}
One can prove it by induction, checking all remainders of division by 4. Although I have no idea how you would come up with it without generating the output and spotting the regularity.
Please explain your approach a bit more.
Since each bit is independent in the XOR operation, you can calculate them separately.
Also, if you look at the k-th bit of the numbers 0..n, it forms a pattern. E.g., the numbers from 0 to 7 in binary form:
000
001
010
011
100
101
110
111
You see that for the k-th bit (k starting from 0), there are 2^k zeroes, then 2^k ones, then 2^k zeroes again, etc.
Therefore, for each bit you can calculate how many ones there are without actually going through all the numbers from 1 to n.
E.g., for k = 2, there are repeating blocks of 2^2 == 4 zeroes and 4 ones. Then:
int ones = (n / 8) * 4;   // ones contributed by the complete blocks
if (n % 8 >= 4) {         // consider the incomplete block at the end
    ones += n % 8 - 3;
}
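Putting the idea together, here is a minimal self-contained sketch (mine, not the answerer's) of the whole per-bit computation; it runs in O(number of bits) rather than O(n), and checks itself against the running XOR:

#include <cstdint>
#include <cstdio>

// XOR of all numbers 0..n (the same as 1..n): bit k of the result is set
// iff the count of ones in bit position k over 0..n is odd.
uint64_t xor_upto(uint64_t n) {
    uint64_t result = 0;
    for (int k = 0; k < 63; ++k) {
        uint64_t half  = 1ULL << k;               // 2^k zeroes, then 2^k ones
        uint64_t block = half << 1;               // pattern repeats every 2^(k+1)
        uint64_t ones  = (n + 1) / block * half;  // ones from complete blocks
        uint64_t rem   = (n + 1) % block;
        if (rem > half)                           // ones from the incomplete block
            ones += rem - half;
        if (ones & 1)
            result |= half;
    }
    return result;
}

int main() {
    uint64_t s = 0;
    for (uint64_t n = 1; n <= 100; ++n) {
        s ^= n;
        if (s != xor_upto(n)) {
            std::printf("mismatch at n = %llu\n", (unsigned long long)n);
            return 1;
        }
    }
    std::printf("per-bit formula matches the running XOR up to 100\n");
    return 0;
}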
For odd N, the result is either 1 or 0 (cyclic: 0 for N=3, 1 for N=5, 0 for N=7, etc.).
For even N, the result is either N or N+1 (cyclic: N+1 for N=2, N for N=4, N+1 for N=6, N for N=8, etc.).
Pseudocode:
if (N mod 2) = 0
    if (N mod 4) = 0 then r = N else r = N+1
else
    if (N mod 4) = 1 then r = 1 else r = 0
Let XOR(N) be the function that XORs all the values from 1 to N. Writing each result in binary with the low bit separated from the higher bits, and the decimal value of the higher bits alongside:
XOR(1)  = 000 1 = 0 1   (the 0 is the decimal value of binary 000)
XOR(2)  = 001 1 = 1 1
XOR(3)  = 000 0 = 0 0
XOR(4)  = 010 0 = 2 0
XOR(5)  = 000 1 = 0 1
XOR(6)  = 011 1 = 3 1
XOR(7)  = 000 0 = 0 0
XOR(8)  = 100 0 = 4 0
XOR(9)  = 000 1 = 0 1
XOR(10) = 101 1 = 5 1
XOR(11) = 000 0 = 0 0
XOR(12) = 110 0 = 6 0
I hope you can see the pattern. It continues similarly for larger numbers.
Try this:
The LSB gets toggled each time N is odd, so we can say that
rez & 1 == (N & 1) ^ ((N >> 1) & 1)
The same pattern can be observed for the rest of the bits:
each time bits B and B+1 of N (counting from the LSB) differ, bit B of the result should be set.
So the final result would be (including N): rez = N ^ (N >> 1)
EDIT: sorry, that was wrong. The correct answer:
for odd N:  rez = (N ^ (N >> 1)) & 1
for even N: rez = (N & ~1) | ((N ^ (N >> 1)) & 1)
Great answer by Alexey Malistov! A variation of his formula: n & 1 ? ((n & 2) >> 1) ^ 1 : n | ((n & 2) >> 1), or equivalently n & 1 ? !(n & 2) : n | ((n & 2) >> 1).
This method avoids conditionals:
F(N) = (N & ((N & 1) - 1)) | ((N & 1) ^ ((N & 3) >> 1))
or, with b0 and b1 the two lowest bits of N:
F(N) = (N & (b0 - 1)) | (b0 ^ b1)
If you look at the XOR of the first few numbers you get:
N | F(N)
------+------
0001 | 0001
0010 | 0011
0011 | 0000
0100 | 0100
0101 | 0001
0110 | 0111
0111 | 0000
1000 | 1000
1001 | 0001
Hopefully you notice the pattern:
if N mod 4 = 1 then F(N) = 1
if N mod 4 = 3 then F(N) = 0
if N mod 4 = 0 then F(N) = N
if N mod 4 = 2 then F(N) = N but with the lowest bit set to 1, i.e. N|1
The tricky part is getting this in one statement without conditionals; I'll explain the logic I used to do it.
Take the two least significant bits of N, call them b0 and b1; they are obtained with:
b0 = N & 1
b1 = (N & 3) >> 1
Notice that if b0 == 1 we have to zero all of the bits, while if b0 == 0 all of the bits except the lowest stay the same. We can get this behaviour with:
N & (b0 - 1): this works because of 2's complement: -1 has all bits set to 1, and 1 - 1 = 0, so when b0 = 1 the AND zeroes everything, and when b0 = 0 it leaves N unchanged. So that is the first part of the function:
F(N) = (N & (b0 - 1)) ...
This already works for N mod 4 == 3 and N mod 4 == 0. For the other two cases, look solely at b0, b1 and the lowest bit of the result, F(N)_0:
b0 | b1 | F(N)_0
---+----+-------
 1 |  1 |   0
 0 |  0 |   0
 1 |  0 |   1
 0 |  1 |   1
Hopefully this truth table looks familiar: it is b0 XOR b1 (b0 ^ b1). So now that we know how to get the last bit, let's put it into our function:
F(N) = (N & (b0 - 1)) | (b0 ^ b1)
And there you go: a function without conditionals. This is also useful if you want to compute the XOR of the numbers from a to b (with 1 <= a <= b): it is simply
F(a - 1) XOR F(b).
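A minimal sketch of that range trick (my code, using the closed form from the answers above):

#include <cstdio>

unsigned F(unsigned n) {            // XOR of 1..n
    switch (n % 4) {
        case 0:  return n;
        case 1:  return 1;
        case 2:  return n + 1;
        default: return 0;
    }
}

unsigned xor_range(unsigned a, unsigned b) {  // XOR of a..b, 1 <= a <= b
    return F(a - 1) ^ F(b);
}

int main() {
    std::printf("%u\n", xor_range(3, 6));  // 3 ^ 4 ^ 5 ^ 6 = 4
    return 0;
}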
With minimal change to the original logic:
int res = 0;
for (int i = 1; i <= N; i++) {
    res ^= i;
}
we can have:
int res = 0;
for (int i = N - (N % 4); i <= N; i++) {
    res ^= i;
}
It still has a loop, but the loop executes in constant time: it iterates between 1 and 4 times.
How about this?
!(n & 1) * n + (n % 4 && n % 4 < 3)
For even n, the first term gives n and the second adds 1 exactly when n mod 4 == 2; for odd n, the first term is 0 and the second is 1 exactly when n mod 4 == 1.
This works fine without any issues for any n:
unsigned int xorn(unsigned int n)
{
    if (n % 4 == 0)
        return n;
    else if (n % 4 == 1)
        return 1;
    else if (n % 4 == 2)
        return n + 1;
    else
        return 0;
}
Take a look at this. This will solve your problem.
https://stackoverflow.com/a/10670524/4973570
To calculate the XOR sum from 1 to N:
int ans, mod = N % 4;
if      (mod == 0) ans = N;
else if (mod == 1) ans = 1;
else if (mod == 2) ans = N + 1;
else               ans = 0;
If someone still needs it, here is a simple Python solution:
def XorSum(L):
    # XOR of 1..L-1 (the upper bound is exclusive)
    res = 0
    if (L - 1) % 4 == 0:
        res = L - 1
    elif (L - 1) % 4 == 1:
        res = 1
    elif (L - 1) % 4 == 2:
        res = (L - 1) ^ 1
    else:  # 3
        res = 0
    return res
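For example, XorSum(7) returns 1 ^ 2 ^ 3 ^ 4 ^ 5 ^ 6 = 7; note the exclusive upper bound, so XorSum(L) is the XOR of 1 through L-1.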