find middle elements from an array - c++

In C++, how can I find the middle n elements of an array? For example, if n = 3 and the array is [0,1,5,7,7,8,10,14,20], the middle is [7,7,8].
P.S. In my context, n and the length of the array are both odd, so the middle is well defined.
Thanks!

This is quick, not tested, but the basic idea...

const int n = 5;
// Get the middle index (the sizeof trick only works on a real array, not a pointer)
int arrLength = sizeof(myArray) / sizeof(int);
int middleIndex = (arrLength - 1) / 2;
// Number of elements to take on each side of the middle
int side = (n - 1) / 2;
int count = 0;
int myNewArray[n];
for (int i = middleIndex - side; i <= middleIndex + side; i++) {
    myNewArray[count++] = myArray[i];
}

int values[] = {0,1,2,3,4,5,6,7,8};
const size_t total(sizeof(values) / sizeof(int));
const size_t needed(3);
std::vector<int> middle(needed);
std::copy(values + ((total - needed) / 2),
          values + ((total + needed) / 2), middle.begin());

I have not checked this with all possible boundary conditions. With the sample data I get middle = (3, 4, 5), as desired.

Well, if you have to pick n numbers, you know there will be size - n unpicked items. As you want to pick numbers in the middle, you want as many unpicked numbers on each side of the array, that is, (size - n) / 2 on each side.
I won't do your homework, but I hope this helps.

Well, the naive algorithm follows:
Find the middle, which exists because you specified that the length is odd.
Repeatedly pick off one element to the left and one element to the right. You can always do this because you specified that n is odd.
You can also make the following observation:
Note that after you've picked the middle, there are n - 1 elements remaining to pick off. This is an even number, and (n - 1)/2 of them must come from the left of the middle element and (n - 1)/2 from the right. The middle element has index (length - 1)/2. Therefore, the lower index of the first element selected is (length - 1)/2 - (n - 1)/2 and the upper index of the last element selected is (length - 1)/2 + (n - 1)/2. Consequently, the indices needed run from (length - n)/2 up to (length + n)/2 - 1.
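The index arithmetic works out to a one-liner. A minimal sketch (middleN is my own name; it assumes, as the question states, that the array length and n are both odd, so the bounds divide cleanly):

```cpp
#include <vector>
#include <cstddef>

// Returns the middle n elements. For odd length and odd n, the selected
// indices are (length - n)/2 through (length + n)/2 - 1, as derived above.
std::vector<int> middleN(const std::vector<int>& arr, std::size_t n) {
    std::size_t lo = (arr.size() - n) / 2;
    return std::vector<int>(arr.begin() + lo, arr.begin() + lo + n);
}
```

For [0,1,5,7,7,8,10,14,20] and n = 3 this copies indices 3 through 5, i.e. [7,7,8].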

Related

Minimum in bitonic array with plateaus

I'm trying to find the minimum in an array which, in general, has this kind of structure:
The array consists of non-negative integers in [0; 1e5-1]. It may contain any number of such steps, be sorted, or be just a constant. I want to find the minimum in O(log n), which is why I'm using binary search. This code handles all cases except those where there is a plateau:
size_t left = 0, right = arr.size() - 1;
while (left < right) {
    const size_t mid = left + (right - left) / 2;
    if ((mid == 0 || arr[mid] < arr[mid - 1]) && (mid + 1 == arr.size() || arr[mid] < arr[mid + 1])) {
        return mid;
    }
    if (arr[mid] > arr[mid + 1] || arr[mid] > arr[right]) {
        left = mid + 1;
    } else {
        right = mid;
    }
}
return left;
Example of bad input: [4, 3, 3, 2, 1, 2].
Unfortunately, I'm out of ideas for how to fix these cases. Maybe it's even impossible. Thank you in advance.
I am afraid it is not possible to do in O(log n) time in general.
Assume an array of n elements equal to 1 and a single element equal to 0.
Your problem now reduces to finding that 0 element.
By "visiting" (= indexing) any 1 element you gain no knowledge about the position of the 0, making the search order irrelevant.
Therefore you have to visit every element to find where the 0 is.
If you really want, I think the following algorithm should be roughly O(log n + #elements-on-plateaus):
1. Set left and right as for binary search.
2. Compute the middle.
3. Go left from the middle until:
- if you find a decrease, set right = pos, where pos is the decreased element, and go to 4;
- if you find an increase, set left = pos, where pos is the increased element, and go to 4;
- if you reach the left position, go right from the middle instead and do the analogous actions. [X] If you reach right too, you are on a plateau, and the range [left, right] holds the minimal elements of the array.
4. Repeat until you hit [X].

Odd sum subarrays approach not working for large size array?

I was asked a coding question in a competition recently.
Find the total number of sub-arrays that can be formed from a given array of length N having an odd sum of their elements; the sub-arrays are not necessarily contiguous.
Array length: 1 <= N <= 100000
For instance: [-4,-4,1] -> [-4, 1], [-4, 1], [-4,-4, 1], [1]
// So if the input is this array, our function should return the count 4.
My approach:
I separated the odd elements from the even elements and kept a count of both.
X: the total number of odd-length subsets of the odd-element set.
Formula used: pow(2, num_odd - 1)
// derived from the pattern:
// 1-element set: 1 odd-length subset : pow(2, 1 - 1)
// 2-element set: 2 odd-length subsets: pow(2, 2 - 1)
// 3-element set: 4 odd-length subsets: pow(2, 3 - 1) ... and so on
Y: the total number of subsets of the even-element set.
Formula used: pow(2, num_even) - 1 // excluding the empty set
So, pairing every subset counted in X with every subset counted in Y gives all subsets having an odd sum, since each set in X has an odd sum and each set in Y has an even sum, and combining them keeps the sum odd.
All the sets in X taken on their own also contribute to the answer, since the sum of an odd number of odd elements is itself odd.
So the final count becomes:
X * Y + X
C++ Code:
int solve(vector<int> &A){
    int num_odd = 0;
    for(int i = 0; i < A.size(); i++){
        if(A[i] % 2 != 0)
            num_odd++;
    }
    // edge case
    if(num_odd == 0)
        return 0;
    int num_even = A.size() - num_odd;
    unsigned long long X = (unsigned long long int)pow(2, num_odd - 1) % 1000000007;
    unsigned long long Y = (unsigned long long int)(pow(2, num_even) - 1) % 1000000007;
    unsigned long long cnt_ = (((X * Y) % 1000000007) + X) % 1000000007;
    return cnt_;
}
Now, this code generated the correct output for small inputs, but it gave the wrong answer for inputs of length 20 or more.
I want to know: is this approach to counting the odd-sum subarrays flawed when the array size is large?
Do you have a better way to solve this?
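One thing that will certainly break for large N: pow works in double precision, so past exponent 53 the value is rounded, and the cast to unsigned long long overflows entirely once the exponent passes 63, so the % 1000000007 is applied to a garbage value. A sketch of the same X * Y + X counting with exact integer modular exponentiation (modpow and solveExact are my own names):

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

const std::uint64_t MOD = 1000000007ULL;

// Exact modular exponentiation by squaring; no doubles involved.
std::uint64_t modpow(std::uint64_t base, std::uint64_t exp, std::uint64_t mod) {
    std::uint64_t result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

// Same counting formula as above, with exact arithmetic throughout.
std::uint64_t solveExact(const std::vector<int>& A) {
    std::size_t num_odd = 0;
    for (int v : A)
        if (v % 2 != 0) ++num_odd;   // works for negatives too: -3 % 2 == -1
    if (num_odd == 0) return 0;
    std::size_t num_even = A.size() - num_odd;
    std::uint64_t X = modpow(2, num_odd - 1, MOD);
    std::uint64_t Y = (modpow(2, num_even, MOD) + MOD - 1) % MOD; // exclude the empty set
    return (X * Y + X) % MOD;
}
```

On the example [-4, -4, 1] this gives 4, matching the expected count.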

Why finding median of 2 sorted arrays of different sizes takes O(log(min(n,m)))

Please consider this problem:
We have 2 sorted arrays of different sizes, A[n] and B[m].
I have implemented a classical algorithm that takes at most O(log(min(n, m))) time.
Here's the approach:
Partition the two arrays into two halves (not two parts per array, but two groups with the same total number of elements). The first half contains some leading elements from the first and second arrays, and the second half contains the rest of the elements from both arrays. Because the arrays can be of different sizes, this does not mean taking half of each array. We search for a partition such that every element in the first half is less than or equal to every element in the second half.
Please see the code below:
double median(std::vector<int> V1, std::vector<int> V2)
{
    if (V1.size() > V2.size())
    {
        V1.swap(V2);
    }
    int s1 = V1.size();
    int s2 = V2.size();
    int low = 0;
    int high = s1;
    while (low <= high)
    {
        int px = (low + high) / 2;
        int py = (s1 + s2 + 1) / 2 - px;
        int maxLeftX  = (px == 0)  ? INT_MIN : V1[px - 1]; // INT_MIN/INT_MAX from <climits>
        int minRightX = (px == s1) ? INT_MAX : V1[px];
        int maxLeftY  = (py == 0)  ? INT_MIN : V2[py - 1];
        int minRightY = (py == s2) ? INT_MAX : V2[py];
        if (maxLeftX <= minRightY && maxLeftY <= minRightX)
        {
            if ((s1 + s2) % 2 == 0)
            {
                return (double(std::max(maxLeftX, maxLeftY)) + double(std::min(minRightX, minRightY))) / 2;
            }
            else
            {
                return std::max(maxLeftX, maxLeftY);
            }
        }
        else if (maxLeftX > minRightY)
        {
            high = px - 1;
        }
        else
        {
            low = px + 1;
        }
    }
    throw; // unreachable for valid sorted input
}
Although the approach is pretty straightforward and it works, I still cannot convince myself of its correctness. Furthermore, I can't understand why it takes O(log(min(n, m))) steps.
If anyone can briefly explain the correctness and why it takes O(log(min(n, m))) steps, that would be awesome. Even a link with a meaningful explanation would help.
The time complexity is quite straightforward: you binary search through the array with fewer elements for a partition that lets you read off the median. You make exactly O(log(#elements)) steps, and since #elements here is exactly min(n, m), the complexity is O(log(min(n, m))).
There are exactly (n + m)/2 elements smaller than the median and the same number of elements greater. Let's think about them as two halves (let the median belong to one of them, your choice).
You can surely divide the smaller array into two subarrays such that one of them lies entirely in the first half and the other entirely in the second half. However, you have no idea how many elements are in either of them.
Let's choose some x, your guess of the number of elements from the smaller array in the first half. It must be in the range from 0 to the size of the smaller array. Then you know, since there are exactly (n + m)/2 elements smaller than the median, that you have to take (n + m)/2 - x elements from the bigger array. Then you have to check whether that partition actually works.
To check whether a partition is good, you have to check that all the elements in the smaller half are smaller than all the elements in the greater half, i.e. that maxLeftX <= minRightY and maxLeftY <= minRightX (then every element in the left half is smaller than every element in the right half).
If so, you've found the correct partition. You can now easily find your median: it's either max(maxLeftX, maxLeftY), min(minRightX, minRightY), or their sum divided by 2.
If not, you either took too many elements from the smaller array (the case maxLeftX > minRightY), so next time you have to guess a smaller value for x, or too few of them, in which case you have to guess a greater value for x.
To get the best complexity, always guess in the middle of the range of possible values that x may take.

Divide array into smaller consecutive parts such that NEO value is maximal

On this year's Bubble Cup (now finished) there was the problem NEO (which I couldn't solve), which asks:
Given an array with n integer elements, we divide it into several parts (possibly 1), where each part is a consecutive run of elements. The NEO value is then the sum of the values of the parts, where the value of a part is the sum of all elements in that part multiplied by its length.
Example: We have the array [ 2 3 -2 1 ]. If we divide it like [2 3] [-2 1], then NEO = (2 + 3) * 2 + (-2 + 1) * 2 = 10 - 2 = 8.
The number of elements in the array is smaller than 10^5 and the numbers are integers between -10^6 and 10^6.
I've tried a divide-and-conquer approach that keeps splitting the array into two parts whenever that increases the maximal NEO value, and otherwise returns the NEO of the whole array. Unfortunately that algorithm has worst-case O(N^2) complexity (my implementation is below), so I'm wondering whether there is a better solution.
EDIT: My algorithm (greedy) doesn't work: taking for example [1,2,-6,2,1], my algorithm returns the whole array, while the maximal NEO value comes from the parts [1,2], [-6], [2,1], which give a NEO value of (1+2)*2 + (-6) + (1+2)*2 = 6.
#include <iostream>

long long int maxInterval(long long int suma[], int first, int N)
{
    long long int max = -1000000000000000000LL;
    long long int curr;
    if (first == N) return 0;
    int k = first;
    for (int i = first; i < N; i++)
    {
        if (first > 0) curr = (suma[i] - suma[first - 1]) * (i - first + 1) + (suma[N - 1] - suma[i]) * (N - 1 - i); // Split into [first..i] and [i+1..N-1]; store the corresponding NEO value
        else curr = suma[i] * (i - first + 1) + (suma[N - 1] - suma[i]) * (N - 1 - i); // Same except that here first = 0, so suma[first - 1] doesn't exist
        if (curr > max) max = curr, k = i; // find the maximal NEO value for splitting into two parts
    }
    if (k == N - 1) return max; // If the max is achieved by taking the whole array, return the NEO value of the whole array
    else
    {
        return maxInterval(suma, first, k + 1) + maxInterval(suma, k + 1, N); // Split the two parts further if needed and return their sum
    }
}

int main() {
    int T;
    std::cin >> T;
    for (int j = 0; j < T; j++) // Iterate over all the test cases
    {
        int N;
        long long int NEO[100010];  // Values; could be long int, but just to be safe
        long long int suma[100010]; // suma[i] = sum of values from NEO[0] to NEO[i]
        long long int sum = 0;
        std::cin >> N;
        for (int i = 0; i < N; i++)
        {
            std::cin >> NEO[i];
            sum += NEO[i];
            suma[i] = sum;
        }
        std::cout << maxInterval(suma, 0, N) << std::endl;
    }
    return 0;
}
This is not a complete solution but should provide some helpful direction.
Combining two groups that each have a positive sum (or one of the sums is non-negative) would always yield a bigger NEO than leaving them separate:
m * a + n * b < (m + n) * (a + b) where a, b > 0 (or a > 0, b >= 0); m and n are subarray lengths
Combining a group with a negative sum with an entire group of non-negative numbers always yields a greater NEO than combining it with only part of the non-negative group. But excluding the group with the negative sum could yield an even greater NEO:
[1, 1, 1, 1] [-2] => m * a + 1 * (-b)
Now, imagine we gradually move the dividing line to the left, increasing the sum b is combined with. While the expression on the right is negative, the NEO for the left group keeps decreasing. But if the expression on the right gets positive, relying on our first assertion (see 1.), combining the two groups would always be greater than not.
Combining negative numbers alone in sequence will always yield a smaller NEO than leaving them separate:
-a - b - c ... = -1 * (a + b + c ...)
l * (-a - b - c ...) = -l * (a + b + c ...)
-l * (a + b + c ...) < -1 * (a + b + c ...) where l > 1; a, b, c ... > 0
O(n^2) time, O(n) space JavaScript code:
function f(A){
  A.unshift(0);
  let negatives = [];
  let prefixes = new Array(A.length).fill(0);
  let m = new Array(A.length).fill(0);
  for (let i=1; i<A.length; i++){
    if (A[i] < 0)
      negatives.push(i);
    prefixes[i] = A[i] + prefixes[i - 1];
    m[i] = i * (A[i] + prefixes[i - 1]);
    for (let j=negatives.length-1; j>=0; j--){
      let negative = prefixes[negatives[j]] - prefixes[negatives[j] - 1];
      let prefix = (i - negatives[j]) * (prefixes[i] - prefixes[negatives[j]]);
      m[i] = Math.max(m[i], prefix + negative + m[negatives[j] - 1]);
    }
  }
  return m[m.length - 1];
}

console.log(f([1, 2, -5, 2, 1, 3, -4, 1, 2]));
console.log(f([1, 2, -4, 1]));
console.log(f([2, 3, -2, 1]));
console.log(f([-2, -3, -2, -1]));
console.log(f([1, 2, -4, 1]));
console.log(f([2, 3, -2, 1]));
console.log(f([-2, -3, -2, -1]));
Update
This blog shows that we can transform the dp queries from
dp_i = sum_i*i + max(for j < i) of ((dp_j + sum_j*j) + (-j*sum_i) + (-i*sum_j))
to
dp_i = sum_i*i + max(for j < i) of (dp_j + sum_j*j, -j, -sum_j) ⋅ (1, sum_i, i)
which means that at each iteration we can look for an already-seen vector that generates the largest dot product with our current information. The math alluded to involves convex hulls and farthest-point queries, which are beyond my reach to implement at this point, but I will make a study of them.
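As a baseline for that recurrence, here is a minimal O(n^2) DP sketch (maxNeo is my own name; it uses the dp_j + (sum_i - sum_j) * (i - j) form from before the dot-product rewrite, where dp[i] is the best NEO value over the first i elements):

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// dp[i] = max over j < i of (dp[j] + (sum[i] - sum[j]) * (i - j)), dp[0] = 0.
// The chosen j is where the last part [j+1 .. i] begins.
long long maxNeo(const std::vector<long long>& a) {
    std::size_t n = a.size();
    std::vector<long long> sum(n + 1, 0), dp(n + 1, 0);
    for (std::size_t i = 1; i <= n; ++i)
        sum[i] = sum[i - 1] + a[i - 1];
    for (std::size_t i = 1; i <= n; ++i) {
        long long best = sum[i] * (long long)i;   // j = 0: whole prefix as one part
        for (std::size_t j = 1; j < i; ++j)
            best = std::max(best, dp[j] + (sum[i] - sum[j]) * (long long)(i - j));
        dp[i] = best;
    }
    return dp[n];
}
```

It reproduces the questioner's counterexample: maxNeo({1, 2, -6, 2, 1}) is 6, from the split [1,2] [-6] [2,1].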

Sieve of Eratosthenes on a segment

Sieve of Eratosthenes on the segment:
Sometimes you need to find all the primes that are in the range [L...R] and not in [1...N], where R is a large number.
Conditions:
You are allowed to create an array of integers with size (R−L+1).
Implementation:
std::vector<char> isPrime(r - l + 1, true); // the original VLA "bool isPrime[r - l + 1]" is not standard C++, and was never actually filled with true
for (long long i = 2; i * i <= r; ++i) {
    for (long long j = std::max(i * i, (l + (i - 1)) / i * i); j <= r; j += i) {
        isPrime[j - l] = false;
    }
}
for (long long i = std::max(l, 2LL); i <= r; ++i) {
    if (isPrime[i - l]) {
        // then i is prime
    }
}
What is the logic behind the lower limit of 'j' in the second for loop?
Thanks in advance!
Think about what we want to find. Ignore the i*i part. We only have
(L + (i - 1)) / i * i to consider. (I wrote the L capital since l and 1 look quite similar.)
What should it be? Obviously, the smallest number within L..R that is divisible by i. That's where we want to start sieving.
The last part of the formula, / i * i, finds the next lower number that is divisible by i, using the properties of integer division.
Example: 35 div 4 * 4 = 8 * 4 = 32; 32 is the highest number that is (equal to or) lower than 35 and divisible by 4.
The L is where we want to start, obviously, and the + (i - 1) makes sure that we don't find the highest number equal to or lower than L, but the smallest number equal to or bigger than L that is divisible by i.
Example: (459 + (4 - 1)) div 4 * 4 = 462 div 4 * 4 = 115 * 4 = 460.
460 >= 459, 4 | 460, and 460 is the smallest number with those properties.
(The max(i * i, ...) is only so that i is not sieved out itself if it is within L..R, I think, although I wonder why it's not 2 * i.)
For readability, I'd make this an inline function next_divisible(number, divisor) or the like. And I'd make it clear that integer division is used; if not, somebody clever might change it to regular division, with which it wouldn't work.
Also, I strongly recommend wrapping the array. It is not obvious from the outside that the property for a number X is stored at position X - L. Something like a class RangedArray that does that shift for you, allowing direct input of X instead of X - L, could easily take on that responsibility. If you don't do that, at least make it a vector; outside of an innermost class, you shouldn't use raw arrays in C++.
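The suggested helper could look like this minimal sketch (next_divisible is the name proposed above; it relies on truncating integer division):

```cpp
// Smallest multiple of `divisor` that is >= `number`.
// (number + divisor - 1) / divisor rounds the quotient up, then * divisor
// maps it back to a multiple. Assumes number >= 0 and divisor > 0.
long long next_divisible(long long number, long long divisor) {
    return (number + divisor - 1) / divisor * divisor;
}
```

With the examples above: next_divisible(459, 4) gives 460, and next_divisible(35, 4) gives 36.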
Also, I strongly recommend to wrap the array. It is not obvious to the outside that the property for a number X is stored at position X - L. Something like a class RangedArray that does that shift for you, allowing you a direct input of X instead of X - L, could easily take the responsibility. If you don't do that, at least make it a vector, outside of a innermost class, you shouldn't use raw arrays in C++.