What is an elegant algorithm for mixing the elements of two arrays (of potentially differing sizes) two by two, so that items are drawn from each array in alternating pairs, with the leftovers added to the end?
E.g.
Array 1: 0, 2, 4, 6
Array 2: 1, 3, 5, 7
Mixed array: 0, 2, 1, 3, 4, 6, 5, 7
Don't worry about null checking or any other edge cases, I'll handle those.
Here is my solution but it does not work properly:
for (i = 0; i < N; i++) {
    arr[2 * i + 0] = A[i];
    arr[2 * i + 1] = A[i+1];
    arr[2 * i + 0] = B[i];
    arr[2 * i + 1] = B[i+1];
}
It is very fiddly to calculate the array indices explicitly, especially if your arrays can be of different and possibly odd lengths. It is easier if you keep three separate indices, one for each array:
int pairwise(int c[], const int a[], size_t alen, const int b[], size_t blen)
{
    size_t i = 0;   // index into a
    size_t j = 0;   // index into b
    size_t k = 0;   // index into c

    while (i < alen || j < blen) {
        if (i < alen) c[k++] = a[i++];
        if (i < alen) c[k++] = a[i++];
        if (j < blen) c[k++] = b[j++];
        if (j < blen) c[k++] = b[j++];
    }
    return k;
}
The returned value k will be equal to alen + blen, which is the implicit dimension of the result array c. Because the availability of a next item is checked for each array operation, this code works for arrays of different lengths and when the arrays have an odd number of elements.
You can use the code like this:
#include <stdio.h>

#define countof(x) (sizeof(x) / sizeof(*x))

int main()
{
    int a[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    int b[] = {-1, -2, -3, -4, -5, -6};
    int c[countof(a) + countof(b)];
    int i, n;

    n = pairwise(c, a, countof(a), b, countof(b));
    for (i = 0; i < n; i++) {
        if (i) printf(", ");
        printf("%d", c[i]);
    }
    puts("");
    return 0;
}
(The example is in C, not C++, but your code doesn't use any of C++'s containers such as vector, so I've used plain old `int` arrays with explicit dimensions, which are the same in C and C++.)
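If you would rather use a C++ container, the same four-check loop carries over directly to std::vector. Here is a minimal sketch (the helper name pairwise_vec is just for illustration):

#include <iostream>
#include <vector>

// Same logic as pairwise() above, but building a std::vector instead of
// writing into a caller-supplied array.
std::vector<int> pairwise_vec(const std::vector<int>& a, const std::vector<int>& b)
{
    std::vector<int> c;
    c.reserve(a.size() + b.size());
    size_t i = 0, j = 0;
    while (i < a.size() || j < b.size()) {
        if (i < a.size()) c.push_back(a[i++]);
        if (i < a.size()) c.push_back(a[i++]);
        if (j < b.size()) c.push_back(b[j++]);
        if (j < b.size()) c.push_back(b[j++]);
    }
    return c;
}

int main()
{
    std::vector<int> a = {0, 2, 4, 6};
    std::vector<int> b = {1, 3, 5, 7};
    std::vector<int> c = pairwise_vec(a, b);
    for (size_t i = 0; i < c.size(); i++)
        std::cout << (i ? ", " : "") << c[i];
    std::cout << "\n";   // prints: 0, 2, 1, 3, 4, 6, 5, 7
}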
Some notes on the loop you have:
You use the same positions in the result array arr to assign two values (one from A and one from B), so the second assignment overwrites the first.
The calculation of the index is more complex than it needs to be; consider using two indexes, given the two different ways you are stepping over the arrays.
I would propose a loop with two indexes (i and j) that explicitly fills four elements of the result per iteration (two positions for each input array). In each iteration you advance the indexes appropriately (by 4 for the output array and by 2 for the input arrays):
#include <iostream>

int main()
{
    using namespace std;
    constexpr int N = 4;
    int A[N] = {2, 4, 6, 8};
    int B[N] = {1, 3, 5, 7};
    int arr[N * 2];

    for (auto i = 0, j = 0; i < N * 2; i += 4, j += 2) {
        arr[i + 0] = A[j];
        arr[i + 1] = A[j + 1];
        arr[i + 2] = B[j];
        arr[i + 3] = B[j + 1];
    }

    for (auto i = 0; i < N * 2; ++i) {
        cout << arr[i] << ",";
    }
    cout << endl;
}
Note: you mention you take care of corner cases, so the code here requires the input arrays to be of the same length and that the length is even.
Try this:
for (i = 0; i < N; i += 2) {
    arr[2 * i + 0] = A[i];
    arr[2 * i + 1] = A[i+1];
    arr[2 * i + 2] = B[i];
    arr[2 * i + 3] = B[i+1];
}
I didn't consider any corner cases, just fixed your concept; for example, check whether any array index goes out of bounds.
I am trying to implement the FFT algorithm in C. I wrote code based on the function "four1" from the book "Numerical Recipes in C". I know that using external libraries such as FFTW would be more efficient, but I just wanted to try this as a first approach. However, I am getting an error at runtime.
After trying to debug for a while, I have decided to copy the exact same function provided in the book, but I still have the same problem. The problem seems to be in the following commands:
tempr = wr * data[j] - wi * data[j + 1];
tempi = wr * data[j + 1] + wi * data[j];
and
data[j + 1] = data[i + 1] - tempi;
Here j sometimes becomes as large as the last index of the array, so indexing data[j + 1] goes out of bounds.
As I said, I didn't change anything in the code, so I am very surprised that it is not working for me; it is a well-known reference for numerical methods in C, and I doubt there are errors in it. Also, I have found some questions about the same code example, but none of them seemed to have the same issue (see C: Numerical Recipies (FFT), for example). What am I doing wrong?
Here is the code:
#include <iostream>
#include <stdio.h>
#include <math.h>   // for sin()

using namespace std;

#define SWAP(a,b) tempr=(a);(a)=(b);(b)=tempr

void four1(double* data, unsigned long nn, int isign)
{
    unsigned long n, mmax, m, j, istep, i;
    double wtemp, wr, wpr, wpi, wi, theta;
    double tempr, tempi;

    n = nn << 1;
    j = 1;
    for (i = 1; i < n; i += 2) {
        if (j > i) {
            SWAP(data[j], data[i]);
            SWAP(data[j + 1], data[i + 1]);
        }
        m = n >> 1;
        while (m >= 2 && j > m) {
            j -= m;
            m >>= 1;
        }
        j += m;
    }
    mmax = 2;
    while (n > mmax) {
        istep = mmax << 1;
        theta = isign * (6.28318530717959 / mmax);
        wtemp = sin(0.5 * theta);
        wpr = -2.0 * wtemp * wtemp;
        wpi = sin(theta);
        wr = 1.0;
        wi = 0.0;
        for (m = 1; m < mmax; m += 2) {
            for (i = m; i <= n; i += istep) {
                j = i + mmax;
                tempr = wr * data[j] - wi * data[j + 1];
                tempi = wr * data[j + 1] + wi * data[j];
                data[j] = data[i] - tempr;
                data[j + 1] = data[i + 1] - tempi;
                data[i] += tempr;
                data[i + 1] += tempi;
            }
            wr = (wtemp = wr) * wpr - wi * wpi + wr;
            wi = wi * wpr + wtemp * wpi + wi;
        }
        mmax = istep;
    }
}
#undef SWAP

int main()
{
    // Testing with random data
    double data[] = {1, 1, 2, 0, 1, 3, 4, 0};
    four1(data, 4, 1);
    for (int i = 0; i < 7; i++) {
        cout << data[i] << " ";
    }
}
The first 2 editions of Numerical Recipes in C use the unusual (for C) convention that arrays are 1-based. (This was probably because the Fortran (1-based) version came first and the translation to C was done without regard to conventions.)
You should read section 1.2, Some C Conventions for Scientific Computing, specifically the paragraphs on Vectors and One-Dimensional Arrays. As well as trying to justify their 1-based decision, this section does explain how to adapt pointers appropriately to match their code.
In your case, this should work -
int main()
{
    // Testing with random data
    double data[] = {1, 1, 2, 0, 1, 3, 4, 0};
    double *data1based = data - 1;
    four1(data1based, 4, 1);
    for (int i = 0; i < 7; i++) {
        cout << data[i] << " ";
    }
}
However, as @Some programmer dude mentions in the comments, the workaround advocated by the book is undefined behaviour, since data1based points outside the bounds of the data array.
Whilst this may well work in practice, an alternative, non-UB workaround would be to change your interpretation of the data to match their conventions -
int main()
{
    // Testing with random data
    double data[] = { -1 /*dummy value*/, 1, 1, 2, 0, 1, 3, 4, 0};
    four1(data, 4, 1);
    for (int i = 1; i < 8; i++) {
        cout << data[i] << " ";
    }
}
I'd be very wary of this becoming contagious though and infecting your code too widely.
The third edition tacitly recognised this 'mistake' and, as part of supporting C++ and standard library collections, switched to the C and C++ convention of zero-based arrays.
Given an array A of N integers (read from input), I need to make the difference between neighboring elements less than or equal to D, using the minimal number of moves. At the end, print out the sum of the amounts that have been added or subtracted.
For every 0 < i < N, |S[i] - S[i - 1]| <= D (where S is the array after the moves).
You can increase or decrease any array element.
Example 1: If we have an array like this:
N = 7, D = 3, [2, 10, 2, 6, 4, 3, 3]. In this array we have to make the difference between neighboring elements less than or equal to 3. We don't modify the first element; we move to the second element and change it from 10 down to 5 (since A[0] + 3 = 5); we don't change the third element; we change the fourth element from 6 down to 5 (because A[2] + 3 = 5); and we don't change the rest of the elements because the differences between them are already at most D. In the end we have to print out 6 (s = 0; 10 -> 5, s = 5; 6 -> 5, s = 6).
Example 2: If we have an array like this:
N = 7, D = 0, [1, 4, 1, 2, 4, 2, 2]. Since D in this case is 0, we have to make all of the numbers the same. The optimal way (and the one with the fewest steps) is to start from the last elements: we leave A[6] and A[5] alone and move to A[4]. Since A[4] is 4 and A[5] is 2, we change the 4 down to 2. Now that A[4] is 2, we skip over A[3] (it is already 2) and go to A[2], which we change from 1 up to 2. Then we change A[1] from 4 down to 2 and, in the end, we change A[0] from 1 up to 2. In the end we have to print out 6 (s = 0; 4 -> 2, s = 2; 1 -> 2, s = 3; 4 -> 2, s = 5; 1 -> 2, s = 6).
Some other test cases:
N = 7, D = 1 [2, 10, 0, 2, 4, 3, 3] Solution: 10
N = 5, D = 1 [6, 5, 4, 3, 2] Solution: 0
I am unable to find an algorithm or an approach to this problem. I have tried several solutions and the closest I have come to solving it was 7/30 test cases.
My code:
#include <bits/stdc++.h>
#define ll long long
using namespace std;

int main() {
    int n, d, s1 = 0, s2 = 1;
    cin >> n >> d;
    int a[n], b[n];
    for(int i = 0; i < n; i++) {
        cin >> a[i];
        b[i] = a[i];
    }
    reverse(b, b + n);
    for(int i = 1; i < n; i++) {
        if(a[i] - a[i - 1] <= d) {
            continue;
        } else {
            if(a[i] > a[i - 1] + d) {
                while(a[i] > a[i - 1] + d) {
                    a[i]--;
                    s1++;
                }
            } else if(a[i - 1] - d > a[i]) {
                while(a[i - 1] - d > a[i]) {
                    a[i]++;
                    s1++;
                }
            }
        }
    }
    for(int i = 1; i < n; i++) {
        if(b[i] - b[i - 1] <= d) {
            continue;
        } else {
            if(b[i] > b[i - 1] + d) {
                while(b[i] > b[i - 1] + d) {
                    b[i]--;
                    s2++;
                }
            } else if(b[i - 1] - d > b[i]) {
                while(b[i - 1] - d > b[i]) {
                    b[i]++;
                    s2++;
                }
            }
        }
    }
    if(s1 >= s2)
        cout << s2;
    else
        cout << s1;
    return 0;
}
I have a C++ struct which holds a dynamically allocated array through a pointer. I have a function to reverse the array, but it doesn't seem to work (I think it's because the temporary variable points to the original value).
struct s {
    int *array;
    int length;
    ...
    s(int n) {
        this->array = new int[n];
        this->length = n;
    }
    ...
    void reverse() {
        for (int i = 0; i < this->length; i++) {
            int n = this->array[i];
            this->array[i] = this->array[this->length - i - 1];
            this->array[this->length - i - 1] = n;
        }
    }
    ...
};
I think what this is doing is this->array[this->length - i - 1] = this->array[i]
Hence the array remains the same and doesn't get reversed. I don't know how to dereference the array pointer, or how to just take the value of this->array[i] into n.
The reason your reverse doesn't work is that you're going through the length of the whole array. You need to only go through half of it. If you go through the second half, then you're un-reversing it.
As an example, if you try to reverse [1, 2, 3, 4] you get
after i = 0: [4, 2, 3, 1]
after i = 1: [4, 3, 2, 1]
--- reversed ---
after i = 2: [4, 2, 3, 1]
after i = 3: [1, 2, 3, 4]
--- back to original ---
Instead, just make your loop
for (int i = 0; i < this->length / 2; i++) {
...
}
On a side note, using 2 indexers will simplify your code considerably:
void reverse()
{
    int limit = length / 2;
    for ( int front = 0, back = length - 1; front < limit; front++, back-- )
    {
        int n = array[front];
        array[front] = array[back];
        array[back] = n;
    }
}
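For what it's worth, the standard library already implements this swap: std::reverse(array, array + length) performs the same front/back exchange. A small standalone sketch (using a plain dynamically allocated array rather than your struct, since parts of the struct are elided):

#include <algorithm>
#include <iostream>

int main()
{
    int length = 5;
    int* array = new int[length];
    for (int i = 0; i < length; i++)
        array[i] = i + 1;                 // 1, 2, 3, 4, 5

    // Equivalent to the hand-written front/back swap loop above.
    std::reverse(array, array + length);

    for (int i = 0; i < length; i++)
        std::cout << array[i] << ' ';     // prints: 5 4 3 2 1
    std::cout << '\n';

    delete[] array;
}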
I need to convert a one-dimensional array of size N into a two-dimensional array of size A*B > N, with overlapping rows. Let us take such a case:
int oneDimensionalArray[6] = {7, 8, 10, 11, 12, 15};
//then the second array would be
int twoDimensionalArray[2][4] = {{7, 8, 10, 11},
{10, 11, 12, 15}};
This is used in the so-called overlap-add method used in digital sound processing. I have tried this approach, which gives improper results:
for(unsigned long i = 0; i < amountOfWindows; i++)
{
    for(unsigned long j = hopSize; j < windowLength; j++)
    {
        // buffer without the overlapping
        if( (i * amountOfWindows + j) >= bufferLength)
            break;
        windowedBuffer[i][j] = unwindowedBuffer[i * amountOfWindows + j];
    }
}

for(unsigned long i = 1; i < amountOfWindows; i++ )
{
    for(unsigned long j = 0; j < hopSize; j++)
    {
        // Filling the overlapping region
        windowedBuffer[i][j] = windowedBuffer[i-1][windowLength - hopSize + i];
    }
}
I've also tried finding the relation using the modulo operation but I can't find the right one. This is the one that I've tried:
windowedBuffer[m][n % (windowLength - hopSize)] = unwindowedBuffer[n];
Since you already know hopSize (from your comment), what you want is simply:
for (size_t i = 0; i < amountOfWindows; ++i) {
    for (size_t j = 0; j < windowLength; ++j) {
        windowedBuffer[i][j] = unwindowedBuffer[i * hopSize + j];
    }
}
Where amountOfWindows, windowLength and hopSize are your parameters (respectively 2, 4 and 2 in your example).
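As a quick self-contained check, plugging in the numbers from the question (amountOfWindows = 2, windowLength = 4, hopSize = 2) reproduces exactly the two rows you listed:

#include <iostream>

int main()
{
    const int amountOfWindows = 2, windowLength = 4, hopSize = 2;
    int unwindowedBuffer[6] = {7, 8, 10, 11, 12, 15};
    int windowedBuffer[amountOfWindows][windowLength];

    // Each row starts hopSize samples after the previous one.
    for (int i = 0; i < amountOfWindows; ++i)
        for (int j = 0; j < windowLength; ++j)
            windowedBuffer[i][j] = unwindowedBuffer[i * hopSize + j];

    // Prints:
    // 7 8 10 11
    // 10 11 12 15
    for (int i = 0; i < amountOfWindows; ++i) {
        for (int j = 0; j < windowLength; ++j)
            std::cout << windowedBuffer[i][j] << ' ';
        std::cout << '\n';
    }
}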
Suppose the array is 1 2 3 4 5.
Here N = 5 and we have to select 3 elements, minimizing the sum of the selected elements, and we cannot select more than 2 consecutive elements, so P = 3 and k = 2. So the output here will be 1 + 2 + 4 = 7.
I came up with a recursive solution, but it has an exponential time complexity. Here is the code.
#include <iostream>
#include <climits>   // for INT_MAX
using namespace std;

void mincost_hoarding (int *arr, int max_size, int P, int k, int iter, int& min_val, int sum_sofar, int orig_k)
{
    if (P == 0)
    {
        if (sum_sofar < min_val)
            min_val = sum_sofar;
        return;
    }
    if (iter == max_size)
        return;
    if (k != 0)
    {
        // Either take arr[iter], using up one of the k allowed consecutive picks...
        mincost_hoarding (arr, max_size, P - 1, k - 1, iter + 1, min_val, sum_sofar + arr[iter], orig_k);
        // ...or skip it, which resets the consecutive budget.
        mincost_hoarding (arr, max_size, P, orig_k, iter + 1, min_val, sum_sofar, orig_k);
    }
    else
    {
        // No consecutive picks left: this element must be skipped.
        mincost_hoarding (arr, max_size, P, orig_k, iter + 1, min_val, sum_sofar, orig_k);
    }
}

int main()
{
    int a[] = {10, 5, 13, 8, 2, 11, 6, 4};
    int N = sizeof(a)/sizeof(a[0]);
    int P = 2;
    int k = 1;
    int min_val = INT_MAX;
    mincost_hoarding (a, N, P, k, 0, min_val, 0, k);
    cout << min_val;
}
Also, if P elements cannot be selected while respecting the constraint, then we return INT_MAX.
I was asked this question in an interview. After proposing this solution, the interviewer was expecting something faster, maybe a DP approach to the problem. Can someone propose a DP algorithm if one exists, or a faster algorithm?
I have tried various test cases and got correct answers. If you find test cases that give an incorrect response, please point that out too.
Below is a Java Dynamic Programming algorithm.
(the C++ version should look very similar)
It basically works as follows:
Have a 3D array indexed as [pos][length][consecutive length].
Here a length index equals the actual length - 1, so index [0] means length 1 (and similarly for consecutive length). This was done since there's no point in having length 0 anywhere.
At every position:
If at length 0 and consecutive length 0, just use the value at pos.
Otherwise, if consecutive length 0, look around for the minimum in all the previous positions (except pos - 1) with length - 1 and use that plus the value at pos.
For everything else, if pos > 0 && consecutive length > 0 && length > 0,
use [pos-1][length-1][consecutive length-1] plus the value at pos.
If pos or length is 0 (while consecutive length > 0), initialize the entry to an invalid value.
Initially it felt like one only needs 2 dimensions for this problem, however, as soon as I tried to figure it out, I realized I needed a 3rd.
Code:
int[] arr = {1, 2, 3, 4, 5};
int k = 2, P = 3;
int[][][] A = new int[arr.length][P][k];

for (int pos = 0; pos < arr.length; pos++)
    for (int len = 0; len < P; len++)
    {
        int min = 1000000;
        if (len > 0)
        {
            for (int pos2 = 0; pos2 < pos-1; pos2++)
                for (int con = 0; con < k; con++)
                    min = Math.min(min, A[pos2][len-1][con]);
            A[pos][len][0] = min + arr[pos];
        }
        else
            A[pos][0][0] = arr[pos];
        for (int con = 1; con < k; con++)
            if (pos > 0 && len > 0)
                A[pos][len][con] = A[pos-1][len-1][con-1] + arr[pos];
            else
                A[pos][len][con] = 1000000;
    }

// Determine the minimum sum
int min = 100000;
for (int pos = 0; pos < arr.length; pos++)
    for (int con = 0; con < k; con++)
        min = Math.min(A[pos][P-1][con], min);
System.out.println(min);
Here we get 7 as output, as expected.
Running time: O(N²k + NPk)
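For reference, here is a rough sketch of what the C++ version mentioned above could look like, as a near-literal port of the Java code (keeping the same 1000000 sentinel for unreachable states and using nested std::vector for the 3D table):

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> arr = {1, 2, 3, 4, 5};
    int k = 2, P = 3;
    int n = (int)arr.size();
    const int INVALID = 1000000;   // same sentinel as the Java version

    // A[pos][len][con]: minimal sum of len+1 selected elements, the last of
    // which is arr[pos], with con+1 consecutive selections ending at pos.
    vector<vector<vector<int>>> A(n, vector<vector<int>>(P, vector<int>(k, INVALID)));

    for (int pos = 0; pos < n; pos++)
        for (int len = 0; len < P; len++)
        {
            if (len > 0)
            {
                // A new run starts at pos, so the previously selected element
                // must be at position pos-2 or earlier.
                int best = INVALID;
                for (int pos2 = 0; pos2 < pos - 1; pos2++)
                    for (int con = 0; con < k; con++)
                        best = min(best, A[pos2][len - 1][con]);
                A[pos][len][0] = best + arr[pos];
            }
            else
                A[pos][0][0] = arr[pos];

            // Extend a run that already ends at pos-1.
            for (int con = 1; con < k; con++)
                if (pos > 0 && len > 0)
                    A[pos][len][con] = A[pos - 1][len - 1][con - 1] + arr[pos];
                else
                    A[pos][len][con] = INVALID;
        }

    int best = INVALID;
    for (int pos = 0; pos < n; pos++)
        for (int con = 0; con < k; con++)
            best = min(best, A[pos][P - 1][con]);

    cout << best << endl;   // prints 7 for this example
}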