If I have two separate sorted arrays containing an equal number of entries, and I need to find the number of pairs (one number from each array) having sum = 0 in linear time, how can I do that?
I can easily do it in O(n^2), but how do I do it in linear time?
Or should I merge the two arrays and then proceed?
Thanks!
You don't need the arrays to be sorted.
Stick the numbers from one of the arrays into a hash table. Then iterate over the other array. For each number n, see if -n is in the hash table.
(If either array can contain duplicates, you need to take some care around handling them.)
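For illustration, a minimal sketch of that hash-table approach (the function and variable names are mine; it assumes the first array holds distinct values, otherwise you would count occurrences instead, e.g. with an std::unordered_map):
#include <unordered_set>
#include <vector>
#include <cstddef>

// Counts pairs (a-value, b-value) with sum 0; O(n) expected time.
std::size_t count_zero_sum_pairs(const std::vector<int>& a, const std::vector<int>& b)
{
    std::unordered_set<int> seen(a.begin(), a.end()); // values of the first array
    std::size_t count = 0;
    for (int x : b)
        if (seen.count(-x) != 0)  // is there an a-value that cancels x?
            ++count;
    return count;
}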
P.S. You can exploit the fact that the arrays are sorted. Just iterate over them from the opposite ends once, looking for items that have the same value but the opposite signs. I leave figuring out the details as an exercise (hint: think of the merge step of merge sort).
Try this:
for (i = 0, j = n - 1; i < n && j >= 0; )
{
    if (arr1[i] + arr2[j] == 0)
    {
        count++;
        i++;
        j--;
    }
    else if (arr1[i] + arr2[j] > 0)
    {
        j--;
    }
    else
    {
        i++;
    }
}
The following may help:
std::size_t count_zero_pair(const std::vector<int>& v1, const std::vector<int>& v2)
{
    assert(std::is_sorted(v1.begin(), v1.end()));
    assert(std::is_sorted(v2.begin(), v2.end()));
    std::size_t res = 0;
    auto it1 = v1.begin();
    auto it2 = v2.rbegin();
    while (it1 != v1.end() && it2 != v2.rend()) {
        const int sum = *it1 + *it2;
        if (sum < 0) {
            ++it1;
        } else if (0 < sum) {
            ++it2;
        } else { // sum == 0
            // may be more complicated depending on
            // how you want to manage duplicated pairs
            ++it1;
            ++it2;
            ++res;
        }
    }
    return res;
}
If they are already sorted, you can traverse them simultaneously, one from left to right, the other from right to left:
Take two pointers and put one at the very left of one array and the other at the very right of the other array. Look at the two values they currently point to. If the absolute value of one is greater than that of the other, advance the pointer at the greater one. If the absolute values are equal, report both values and advance both pointers. Stop as soon as the pointer coming from the left reaches a positive value, or the pointer coming from the right reaches a negative value. After that, do the same with the pointers starting at the respective other ends of the arrays.
This is essentially the solution proposed by #Matthias with an added pointer to catch duplicates. If there is a string of duplicate values in arr2, searchStart will always point to the one with the highest index so that we can check the entire string against the next value in arr1. All values in arr1 are explicitly checked, so no extra duplicate handling is required.
int pairCount = 0;
for (int base = 0, searchStart = arr2Size - 1; base < arr1Size; base++) {
    int searchCurrent = searchStart;
    while (searchCurrent >= 0 && arr1[base] + arr2[searchCurrent] > 0) {
        searchCurrent--;
    }
    searchStart = searchCurrent;
    if (searchStart < 0) break;
    while (searchCurrent >= 0 && arr1[base] + arr2[searchCurrent] == 0) {
        std::cout << "arr1[" << base << "] + arr2[" << searchCurrent << "] = ";
        std::cout << "[" << arr1[base] << "," << arr2[searchCurrent] << "]\n";
        pairCount++;
        searchCurrent--;
    }
}
std::cout << "pairCount = " << pairCount << "\n";
Given the arrays:
arr1[] = {-5, -3, -3, -2, -1, 0, 2, 4, 4, 5, 8};
arr2[] = {-7, -5, -5, -4, -3, -2, 1, 3, 4, 5, 6, 7, 8};
we get:
arr1[0] + arr2[9] = [-5,5]
arr1[1] + arr2[7] = [-3,3]
arr1[2] + arr2[7] = [-3,3]
arr1[4] + arr2[6] = [-1,1]
arr1[6] + arr2[5] = [2,-2]
arr1[7] + arr2[3] = [4,-4]
arr1[8] + arr2[3] = [4,-4]
arr1[9] + arr2[2] = [5,-5]
arr1[9] + arr2[1] = [5,-5]
pairCount = 9
Now we come to the question of time complexity. searchStart is maintained so that each value in arr1 incurs at most one extra comparison with a value in arr2. Apart from that, for arrays with no duplicates this checks each value in arr2 exactly once, so the algorithm runs in O(n).
If duplicate values are present, however, it complicates things a bit. Consider the arrays:
arr1 = {-3, -3, -3}
arr2 = { 3, 3, 3}
Clearly, since all O(n²) pairs equal zero, we have to count all O(n²) pairs. This means that in the worst case, the algorithm is O(n²) and this is the best we can do. It is possibly more constructive to say that the complexity is O(n + p) where p is the number of matching pairs.
Note that if you only want to count the number of matches rather than printing them all, you can do this in linear time as well. Just change when searchStart is updated to when the last match is found and keep a counter that equals the number of matches found for the current searchStart. Then if the next arr1[base] matches arr2[searchStart], add the counter to the number of pairs.
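For illustration, here is a sketch of such a counting-only variant. It does not reuse the searchStart bookkeeping above; instead (a substitution of my own) it walks both arrays once with two pointers and multiplies the lengths of runs of equal values, which yields the same total (9 for the example arrays) in O(n):
#include <vector>
#include <cstddef>

std::size_t count_zero_pairs(const std::vector<int>& a, const std::vector<int>& b)
{
    std::size_t count = 0;
    std::size_t i = 0;          // walks a from its smallest value upwards
    std::size_t j = b.size();   // walks b from its largest value downwards (one past the index)
    while (i < a.size() && j > 0) {
        const long long sum = static_cast<long long>(a[i]) + b[j - 1];
        if (sum < 0) {
            ++i;
        } else if (sum > 0) {
            --j;
        } else {
            // measure the run of equal values on each side and multiply the lengths
            std::size_t runA = 1;
            while (i + runA < a.size() && a[i + runA] == a[i]) ++runA;
            std::size_t runB = 1;
            while (j - 1 >= runB && b[j - 1 - runB] == b[j - 1]) ++runB;
            count += runA * runB;
            i += runA;
            j -= runB;
        }
    }
    return count;
}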
Related
I have this problem: given a vector with n numbers, rearrange the numbers so that the even ones end up on odd positions and the odd numbers on even positions. E.g. if I have the vector 2 6 7 8 9 3 5 1, the output should be 2 7 6 9 8 3 5 1. The count starts from 1, so position 1 (which is actually index 0) should hold an even number, position 2 (which is actually index 1) an odd number, and so on. Now this is easy if there are as many odd as even numbers, say 4 even and 4 odd numbers in the vector, but what if the number of odd numbers differs from the number of even numbers, like in the above example? How do I solve that? I attached the code with one of the tries I did, but it doesn't work. Can I get some help please? I ask you to keep it simple, meaning only vectors and such, no weird methods or anything, because I'm a beginner and I only know the basics. Thanks in advance!
I have to mention that n_initial is globally declared and is the number of vector elements, and v_initial is the initial vector with the elements that need to be rearranged.
The task says to add the remaining numbers to the end of the vector. For example, if there are 3 odd and 5 even numbers, the 2 extra even numbers should go at the end of the vector.
void vector_pozitii_pare_impare(int v_initial[])
{
int v_pozitie[50],c1=0,c2=1;
for (i = 0; i < n_initial; i++)
{
if (v_initial[i] % 2 == 0)
{
bool isTrue = 1;
for (int k = i + 1; k < n_initial; k++)
{
if (v_initial[k] % 2 != 0)
isTrue = 0;
}
if (isTrue)
{
v_pozitie[c1] = v_initial[i];
c1++;
}
else
{
v_pozitie[c1] = v_initial[i];
c1 += 2;
}
}
else
{
bool isTrue = 1;
for (int j = i + 1; j < n_initial; j++)
{
if (v_initial[j] % 2 == 0)
{
isTrue = 0;
}
if (isTrue)
{
v_pozitie[c2] = v_initial[i];
c2++;
}
else
{
v_pozitie[c2] = v_initial[i];
c2 += 2;
}
}
}
}
}
This may not be a perfect solution, and it just popped out of my mind without being tested or verified, but it should give you an idea.
(Let A, B, C, D be odd numbers and 0, 1, 2 even numbers, respectively.)
Given:
A 0 B C D 1 2 (random ordered list of odd/even numbers)
Wanted:
A 0 B 1 C 2 D (input sequence altered to match the wanted odd/even criteria)
Next, we invent the steps required to get from given to wanted:
// look at 'A' -> match, next
// Result: A 0 B C D 1 2
// look at '0' -> match, next
// Result: A 0 B C D 1 2
// look at 'B' -> match, next
// Result: A 0 B C D 1 2
// look at 'C' -> mismatch, remember index and find first match starting from index+1
// Result: A 0 B C D ->1<- 2
// now swap the numbers found at the remembered index and the found one.
// Result: A 0 B 1 D C 2
// continue until the whole list has been consumed.
As I said, this algorithm may not be perfect, but my intention is to give you an example of how to solve these kinds of problems. It's not good to always think in code first, especially not with a problem like this. So you should first think about where you start and what you want to achieve, and then carefully think of how to get there step by step.
I feel I have to mention that I did not provide an example in real code, because once you have got the idea, the execution should be pretty much straightforward.
Oh, and just a small remark: Almost nothing about your code is C++.
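For readers who do want to see the swap idea spelled out, here is one possible sketch (my own naming, not from the answer above; it uses the question's convention that position 1, i.e. index 0, holds an even number, and it assumes non-negative values):
#include <iostream>
#include <utility>
#include <vector>

// Walk the vector; whenever the parity at a position is wrong, swap in the
// next element of the wanted parity. When none is left, the rest stays put,
// which leaves the surplus numbers at the end.
void rearrange_alternating(std::vector<int>& v)
{
    for (std::size_t i = 0; i < v.size(); ++i) {
        const int want = static_cast<int>(i % 2);   // 0 = even wanted, 1 = odd wanted
        if (v[i] % 2 == want)
            continue;                               // match, next
        std::size_t j = i + 1;                      // mismatch: search to the right
        while (j < v.size() && v[j] % 2 != want)
            ++j;
        if (j == v.size())
            break;                                  // no element of that parity left
        std::swap(v[i], v[j]);
    }
}

int main()
{
    std::vector<int> v{2, 6, 7, 8, 9, 3, 5, 1};
    rearrange_alternating(v);
    for (int x : v) std::cout << x << ' ';          // prints: 2 7 6 9 8 3 5 1
    std::cout << '\n';
}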
A simple solution that is not very efficient would be to split the vector into two vectors that contain the even and the uneven numbers, and then always take one from the evens, one from the unevens, and finally the remainder from the one that is not yet fully consumed.
Some C++ (that actually uses vectors, but you can use an array the same way; you only need to change the pointer arithmetic).
I did not test it, but the principle should be clear; it is not very efficient though.
EDIT: The answer below by #AAAAAAAAARGH outlines a better algorithmic idea that is in-place and more efficient.
void change_vector_even_uneven(std::vector<unsigned>& in_vec){
std::vector<unsigned> even;
std::vector<unsigned> uneven;
for (auto it = in_vec.begin(); it != in_vec.end(); it++){
if ((*it) % 2 == 0) even.push_back(*it);
else uneven.push_back(*it);
}
auto even_it = even.begin();
auto uneven_it = uneven.begin();
for (auto it = in_vec.begin(); it != in_vec.end(); it++){
if (even_it == even.end()){
(*it) = (*uneven_it);
uneven_it++;
continue;
}
if (uneven_it == uneven.end()){
(*it) = (*even_it);
even_it++;
continue;
}
if ((it - in_vec.begin()) % 2 == 0){
(*it) = (*even_it);
even_it++;
}
else{
(*it) = (*uneven_it);
uneven_it++;
}
}
}
The solution is simple. We sort the even and odd values into a data structure. In a loop, we iterate over all source values. If they are even (val % 2 == 0), we add them to the end of a std::deque for evens, and if odd, we add them to a std::deque for odds.
Later, we will extract the values from the front of each std::deque.
So, we have a first in first out principle.
The std::deque is optimized for such purposes.
Later, we run a loop with an alternating branch in it. We alternately extract data from the even queue and from the odd queue. If a queue is empty, we do not extract data.
We do not need an additional std::vector and can reuse the old one.
With that, we do not need to take care of having the same number of evens and odds. It will of course always work.
Please see below one of millions of possible solutions:
#include <iostream>
#include <vector>
#include <deque>
int main() {
std::vector testData{ 2, 6, 7, 8, 9, 3, 5, 1 };
// Show initial data
std::cout << "\nInitial data: ";
for (const int i : testData) std::cout << i << ' ';
std::cout << '\n';
// We will use a deques to store odd and even numbers
// With that we can efficiently push back and pop front
std::deque<int> evenNumbers{};
std::deque<int> oddNumbers{};
// Sort the original data into the specific container
for (const int number : testData)
if (number % 2 == 0)
evenNumbers.push_back(number);
else
oddNumbers.push_back(number);
// Take alternating the data from the even and the odd values
bool takeEven{ true };
for (size_t i{}; !evenNumbers.empty() || !oddNumbers.empty(); ) { // run until both queues are drained
if (takeEven) { // Take even numbers
if (not evenNumbers.empty()) { // As long as there are even values
testData[i] = evenNumbers.front(); // Get the value from the front
evenNumbers.pop_front(); // Remove first value
++i;
}
}
else { // Now we take odd numbers
if (not oddNumbers.empty()) { // As long as there are odd values
testData[i] = oddNumbers.front(); // Get the value from the front
oddNumbers.pop_front(); // Remove first value
++i;
}
}
// Next take the other container
takeEven = not takeEven;
}
// Show result
std::cout << "\nResult: ";
for (const int i : testData) std::cout << i << ' ';
std::cout << '\n';
return 0;
}
Here is yet another solution (using STL), in case you want a stable result (that is, the order of your values is preserved).
#include <algorithm>
#include <iterator>
#include <vector>
auto ints = std::vector<int>{ 2, 6, 7, 8, 9, 3, 5, 1 };
// split list to even/odd sections -> [2, 6, 8, 7, 9, 3, 5, 1]
const auto it = std::stable_partition(
ints.begin(), ints.end(), [](auto value) { return value % 2 == 0; });
auto results = std::vector<int>{};
results.reserve(ints.size());
// merge both parts with equal size
auto a = ints.begin(), b = it;
while (a != it && b != ints.end()) {
results.push_back(*a++);
results.push_back(*b++);
}
// copy remaining values to end of list
std::copy(a, it, std::back_inserter(results));
std::copy(b, ints.end(), std::back_inserter(results));
The result is [2, 7, 6, 9, 8, 3, 5, 1]. The complexity is O(n).
This answer, like some of the others, divides the data and then reassembles the result. The standard library std::partition_copy is used to separate the even and odd numbers into two containers. Then the interleave function assembles the result by alternately copying from two input ranges.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>
template <typename InIt1, typename InIt2, typename OutIt>
OutIt interleave(InIt1 first1, InIt1 last1, InIt2 first2, InIt2 last2, OutIt dest)
{
for (;;) {
if (first1 == last1) {
return std::copy(first2, last2, dest);
}
*dest++ = *first1++;
if (first2 == last2) {
return std::copy(first1, last1, dest);
}
*dest++ = *first2++;
}
}
void reorder_even_odd(std::vector<int> &data)
{
auto is_even = [](int value) { return (value & 1) == 0; };
// split
std::vector<int> even, odd;
std::partition_copy(begin(data), end(data), back_inserter(even), back_inserter(odd), is_even);
// merge
interleave(begin(even), end(even), begin(odd), end(odd), begin(data));
}
int main()
{
std::vector<int> data{ 2, 6, 7, 8, 9, 3, 5, 1 };
reorder_even_odd(data);
for (int value : data) {
std::cout << value << ' ';
}
std::cout << '\n';
}
Demo on Compiler Explorer
As suggested, I am using vectors and the STL.
No need to be a great mathematician to understand that v_pozitie will start with pairs of odd and even numbers and end with the integers not in the initial pairs.
I am then updating three iterators into v_pozitie (no need for temporary containers to calculate the result): even, odd and end (avoiding push_back), and would code it this way:
#include <iostream>
#include <vector>
#include <algorithm>
void vector_pozitii_pare_impare(std::vector<int>& v_initial, std::vector<int>& v_pozitie) {
int nodd (0), neven (0);
std::for_each (v_initial.begin (), v_initial.end (), [&nodd] (const int& n) {
nodd += n%2;
});
neven = v_initial.size () - nodd;
int npair (neven < nodd ?neven:nodd);
npair *=2;
std::vector<int>::iterator iend (v_pozitie.begin () + npair), ieven (v_pozitie.begin ()), iodd (v_pozitie.begin () + 1);
std::for_each (v_initial.begin (), v_initial.end (), [&iend, &ieven, &iodd, &npair] (const int& s) {
if (npair) {
switch (s%2) {
case 0 :
*ieven++ = s;
++ieven;
break;
case 1 :
*iodd++ = s;
++iodd;
break;
}
--npair;
}
else *iend++ = s;
});
}
int main (int argc, char* argv []) {
const int N = 8;
int tab [N] = {2, 6, 7, 8, 9, 3, 5, 1};
std::vector<int> v_initial (tab, (int*)&tab [N]);
std::cout << "\tv_initial == ";
std::for_each (v_initial.begin (), v_initial.end (), [] (const int& s) {std::cout << s << " ";});
std::cout << std::endl;
std::vector<int> v_pozitie (v_initial.size (), -1);
vector_pozitii_pare_impare (v_initial, v_pozitie);
std::cout << "\tv_pozitie == ";
std::for_each (v_pozitie.begin (), v_pozitie.end (), [] (const int& s) {std::cout << s << " ";});
std::cout << std::endl;
}
How do I divide the elements of an array into a minimum number of arrays such that consecutive elements in each of the formed arrays do not differ by more than 1?
Let's say that we have an array: [4, 6, 8, 9, 10, 11, 14, 16, 17].
The array elements are sorted.
I want to divide the elements of the array into a minimum number of arrays such that consecutive elements in each of the resulting arrays do not differ by more than 1.
In this case, the groupings would be: [4], [6], [8, 9, 10, 11], [14], [16, 17]. So there would be a total of 5 groups.
How can I write a program for the same? Or you can suggest algorithms as well.
I tried the naive approach:
Obtain the difference between consecutive elements of the array and, if the difference is less than (or equal to) 1, add those elements to a new vector. However, this method is very unoptimized and simply fails to produce any results for a large number of inputs.
Actual code implementation:
#include<cstdio>
#include<iostream>
#include<vector>
using namespace std;
int main() {
int num = 0, buff = 0, min_groups = 1; // min_groups should start from 1 to take into account the grouping of the starting array element(s)
cout << "Enter the number of elements in the array: " << endl;
cin >> num;
vector<int> ungrouped;
cout << "Please enter the elements of the array: " << endl;
for (int i = 0; i < num; i++)
{
cin >> buff;
ungrouped.push_back(buff);
}
for (int i = 1; i < ungrouped.size(); i++)
{
if ((ungrouped[i] - ungrouped[i - 1]) > 1)
{
min_groups++;
}
}
cout << "The elements of entered vector can be split into " << min_groups << " groups." << endl;
return 0;
}
Inspired by Faruk's answer, if the values are constrained to be distinct integers, there is a possibly sublinear method.
Indeed, if the difference between two values equals the difference between their indexes, they are guaranteed to belong to the same group and there is no need to look at the intermediate values.
You have to organize a recursive traversal of the array, in preorder. Before subdividing a subarray, you compare the difference of the indexes of the first and last element to the difference of their values, and only subdivide in case of a mismatch. As you work in preorder, this will allow you to emit the pieces of the groups in consecutive order, as well as detect the gaps. Some care has to be taken to merge the pieces of the groups.
The worst case will remain linear, because the recursive traversal can degenerate to a linear traversal (but not worse than that). The best case can be better. In particular, if the array holds a single group, it will be found in time O(1). If I am right, for every group of length between 2^n and 2^(n+1), you will spare at least 2^(n-1) tests. (In fact, it should be possible to estimate an output-sensitive complexity, equal to the array length minus a fraction of the lengths of all groups, or similar.)
Alternatively, you can work in a non-recursive way, by means of exponential search: from the beginning of a group, you start with a unit step and double the step every time, until you detect a gap (difference in values too large); then you restart with a unit step. Here again, for large groups you will skip a significant number of elements. Anyway, the best case can only be O(log N).
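To make the exponential-search variant a bit more concrete, here is a rough sketch (my own naming; it assumes sorted, distinct integers as stated above, and uses a binary search after the gallop to pin down the exact group boundary):
#include <algorithm>
#include <cstddef>
#include <vector>

// Counts the groups; sorted, distinct integers assumed.
std::size_t count_groups(const std::vector<int>& a)
{
    if (a.empty()) return 0;
    std::size_t groups = 1;
    std::size_t start = 0;                         // first index of the current group
    while (start + 1 < a.size()) {
        // Gallop: double the step while the span is still one group. For distinct
        // sorted integers, index i belongs to the group of 'start' exactly when
        // a[i] - a[start] == i - start.
        std::size_t step = 1, end = start;
        while (end + step < a.size() &&
               a[end + step] - a[start] == static_cast<int>(end + step - start)) {
            end += step;
            step *= 2;
        }
        // Binary search the last index of the group between 'end' and the overshoot.
        std::size_t lo = end, hi = std::min(end + step, a.size() - 1);
        while (lo < hi) {
            const std::size_t mid = (lo + hi + 1) / 2;
            if (a[mid] - a[start] == static_cast<int>(mid - start))
                lo = mid;
            else
                hi = mid - 1;
        }
        if (lo + 1 >= a.size()) break;             // the group runs to the end of the array
        ++groups;                                  // a gap follows, a new group starts
        start = lo + 1;
    }
    return groups;
}
For the example array {4, 6, 8, 9, 10, 11, 14, 16, 17} this returns 5, and for an array that is one long run of consecutive integers it only probes O(log n) positions.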
I would suggest encoding subsets into an offset array defined as follows:
Elements for set #i are defined for indices j such that offset[i] <= j < offset[i+1]
The number of subsets is offset.size() - 1
This only requires one memory allocation.
Here is a complete implementation:
#include <cassert>
#include <iostream>
#include <vector>
std::vector<std::size_t> split(const std::vector<int>& to_split, const int max_dist = 1)
{
const std::size_t to_split_size = to_split.size();
std::vector<std::size_t> offset(to_split_size + 1);
if (to_split_size == 0) return offset; // empty input: a single offset of 0, i.e. no sets
offset[0] = 0;
size_t offset_idx = 1;
for (std::size_t i = 1; i < to_split_size; i++)
{
const int dist = to_split[i] - to_split[i - 1];
assert(dist >= 0); // we assumed sorted input
if (dist > max_dist)
{
offset[offset_idx] = i;
++offset_idx;
}
}
offset[offset_idx] = to_split_size;
offset.resize(offset_idx + 1);
return offset;
}
void print_partition(const std::vector<int>& to_split, const std::vector<std::size_t>& offset)
{
const std::size_t offset_size = offset.size();
std::cout << "\nwe found " << offset_size-1 << " sets";
for (std::size_t i = 0; i + 1 < offset_size; i++)
{
std::cout << "\n";
for (std::size_t j = offset[i]; j < offset[i + 1]; j++)
{
std::cout << to_split[j] << " ";
}
}
}
int main()
{
std::vector<int> to_split{4, 6, 8, 9, 10, 11, 14, 16, 17};
std::vector<std::size_t> offset = split(to_split);
print_partition(to_split, offset);
}
which prints:
we found 5 sets
4
6
8 9 10 11
14
16 17
Iterate through the array. Whenever the difference between 2 consecutive element is greater than 1, add 1 to your answer variable.
int getPartitionNumber(const int arr[], int n) { // n is the size of the array
int result = 1;
for(int i=1; i<n; i++) {
if(arr[i]-arr[i-1] > 1) result++;
}
return result;
}
And because it is always nice to see more ideas and select the one that suits you best, here is the straightforward 6-line solution. Yes, it is also O(n). But I am not sure whether the overhead of the other methods makes them any faster.
Please see:
#include <iostream>
#include <string>
#include <algorithm>
#include <vector>
#include <iterator>
using Data = std::vector<int>;
using Partition = std::vector<Data>;
Data testData{ 4, 6, 8, 9, 10, 11, 14, 16, 17 };
int main(void)
{
// This is the resulting vector of vectors with the partitions
std::vector<std::vector<int>> partition{};
// Iterating over source values
for (Data::iterator i = testData.begin(); i != testData.end(); ++i) {
// Check,if we need to add a new partition
// Either, at the beginning or if diff > 1
// No underflow, because of boolean short-circuit evaluation
if ((i == testData.begin()) || ((*i) - (*(i-1)) > 1)) {
// Create a new partition
partition.emplace_back(Data());
}
// And, store the value in the current partition
partition.back().push_back(*i);
}
// Debug output: Copy all data to std::cout
std::for_each(partition.begin(), partition.end(), [](const Data& d) {std::copy(d.begin(), d.end(), std::ostream_iterator<int>(std::cout, " ")); std::cout << '\n'; });
return 0;
}
Maybe this could be a solution . . .
How do you say your approach is not optimized? If your approach is correct, it already takes O(n) time.
But you can use binary search here, which can do better in the average case. In the worst case, though, this binary search can take more than O(n) time.
Here's a tip:
As the array is sorted, you pick the furthest position whose value differs from the current start by at most 1.
Binary search can do this in a simple way.
int arr[] = {4, 6, 8, 9, 10, 11, 14, 16, 17};
int n = sizeof(arr) / sizeof(arr[0]); // n = size of the array
int st = 0, ed = n-1;
int partitions = 0;
while(st <= ed) {
int low = st, high = n-1;
int pos = low;
while(low <= high) {
int mid = (low + high)/2;
if((arr[mid] - arr[st]) <= 1) {
pos = mid;
low = mid + 1;
} else {
high = mid - 1;
}
}
partitions++;
st = pos + 1;
}
cout<< partitions <<endl;
In the average case, it is better than O(n). But in the worst case (where the answer is equal to n) it takes O(n log n) time.
I am currently struggling with a homework problem for my Algorithms Class. A summary of the instruction:
The user enters an integer 'n' to determine the number of test cases.
The user individually enters another integer 'num' to determine the # of elements in each test case.
The user enters the elements of the individual array.
The algorithm has to process the array and determine whether it can be partitioned into two subsequences, each of which is in strictly increasing order. If the result is positive, the program prints "Yes", otherwise it prints "No".
I have 24 hours to complete this assignment but am struggling with the primary problem: I cannot come up with an algorithm to properly split the input into the two subsequences.
update: I got to this solution. It passes 4/5 tests but fails the time constraint on the last test.
#include<iostream>
#include<string>
using namespace std;
bool run(){
int numbers;
int *arr;
cin >> numbers;
arr = new int[numbers];
for (int i = 0; i < numbers; i++)
cin >> arr[i];
long long int MAX = 0;
long long int MAX2 = 0;
string stra = "";
string strb = "";
string result = "";
string total = "";
long long int sum = 0;
for (int i = 0; i < numbers; i++){
if (arr[i] >= MAX && arr[i] != arr[i - 1]){
stra += to_string(arr[i]);
MAX = arr[i];
}
else
if (arr[i] >= MAX2 && MAX2 != MAX){
strb += to_string(arr[i]);
MAX2 = arr[i];
}
}
for (int i = 0; i < numbers; i++){
result = to_string(arr[i]);
total += result;
}
long long int len1 = stra.length();
long long int len2 = strb.length();
sum += len1 + len2;
delete[] arr;
if (sum != total.length())
return false;
else
return true;
}
int main()
{
int test;
cin >> test;
while (test > 0)
{
if (run())
cout << "Yes\n";
else
cout << "No\n";
test--;
}
system("pause");
}
Example input:
2
5
3 1 5 2 4
5
4 8 1 5 3
Example output:
Yes
No
Explanation: For the array 3 1 5 2 4, the two strictly increasing subsequences are: 3 5 and 1 2 4.
It seems that the existence of any non-increasing (equal or decreasing) subsequence of at least three elements means the array cannot be partitioned into two subsequences, each in strictly increasing order, since once we've placed the first element in one part and the second element in the other part, we have no place left for the third.
This seems to indicate that looking for the longest non-increasing subsequence is a sure solution. Since we only need one of length 3, we can record in O(n), for each element, whether it has a greater or equal element to its left. Then perform the same scan in reverse. If any element has both a greater-or-equal partner on the left and a smaller-or-equal partner on the right, the answer is "no."
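A rough sketch of that two-scan check (names and details are my own; the code further below illustrates the greedy list-building method instead):
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

bool can_split_into_two_increasing(const std::vector<int>& a)
{
    const std::size_t n = a.size();
    std::vector<char> hasGE(n, 0);      // hasGE[i]: some earlier element is >= a[i]
    int prefixMax = std::numeric_limits<int>::min();
    for (std::size_t i = 0; i < n; ++i) {
        hasGE[i] = (i > 0 && prefixMax >= a[i]);
        prefixMax = std::max(prefixMax, a[i]);
    }
    int suffixMin = std::numeric_limits<int>::max();  // smallest element to the right of i
    for (std::size_t i = n; i-- > 0; ) {
        if (hasGE[i] && suffixMin <= a[i])
            return false;               // a non-increasing subsequence of length 3 exists
        suffixMin = std::min(suffixMin, a[i]);
    }
    return true;
}
For the two sample inputs, 3 1 5 2 4 passes, while 4 8 1 5 3 fails on the element 5, which has 8 on its left and 3 on its right.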
We can visualise the O(n) time, O(1) space method by plotting along value and position:
A choosing list B here
A x would be wrong
x
value B z
^ B x
| x
| A
| x
|
| B
| x
- - - - - - - -> position
We notice that as soon as a second list is established (with the first decrease), any element higher than the absolute max so far must be assigned to the list that contains it, and any element lower than it can, in any case, only be placed in the second list if at all.
If we were to assign an element higher than the absolute max so far to the second list (that does not contain it), we could arbitrarily construct a false negative by making the next element lower than both the element we just inserted into the second list and the previous absolute max, but greater than the previous max of the second list (z in the diagram). If we had correctly inserted the element higher than the previous absolute max into that first list, we'd still have room to insert the new, arbitrary element into the second list.
(The JavaScript code below technically uses O(n) space in order to show the partition but notice that we only rely on the last element of each part.)
function f(A){
let partA = [A[0]];
let partB = [];
for (let i=1; i<A.length; i++){
if (A[i] > partA[partA.length-1])
partA.push(A[i]);
else if (partB.length && A[i] <= partB[partB.length-1])
return false;
else
partB.push(A[i]);
}
return [partA, partB];
}
let str = '';
let examples = [
[30, 10, 50, 25, 26],
[3, 1, 5, 2, 4],
[4, 8, 1, 5, 3],
[3, 1, 1, 2, 4],
[3, 4, 5, 1, 2],
[3, 4, 1],
[4, 1, 2, 7, 3]
];
for (e of examples)
str += JSON.stringify(e) + '\n' + JSON.stringify(f(e)) + '\n\n';
console.log(str);
I would go over the entire array once and keep track of two maximal values. If the current array value is smaller than both maxima, it is not possible; otherwise the proper maximum is increased.
The algorithm does not have to traverse the whole array if the split condition is violated earlier.
Here is my code
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>
bool isAddable(const int item, int &maxHigh, int &maxLow) {
  // maxHigh is the larger of the two list tails, maxLow the smaller one
  if (maxLow >= item) {
    // item cannot strictly extend either list
    return false;
  }
  else {
    if (maxHigh >= item) {
      // item only fits after the smaller tail
      maxLow = item;
    }
    else {
      // item fits after both tails; extend the larger one (the tighter fit)
      maxHigh = item;
    }
    return true;
  }
}
void setStartValue(int &max1, int &max2, const std::vector<int> &vec) {
  max1 = vec[0];                          // tail of the first list
  max2 = std::numeric_limits<int>::min(); // the second list is still empty
}
bool isDiviableIntoTwoIncreasingArrays(const std::vector<int> &vec) {
if (vec.size() < 3) {
return true;
}
int max1, max2;
setStartValue(max1, max2, vec);
for (std::size_t i = 1; i < vec.size(); ++i) {
if (max1 > max2) {
if (!isAddable(vec[i], max1, max2)) {
return false;
}
}
else {
if (!isAddable(vec[i], max2, max1)) {
return false;
}
}
}
return true;
}
int main() {
std::vector<int> userVec;
int tmp1;
while (std::cin >> tmp1) {
userVec.emplace_back(tmp1);
}
const std::vector<int> v1{3, 1, 5, 2, 4};
const std::vector<int> v2{4, 8, 1, 5, 3};
const std::vector<int> v3{3, 4, 1};
for (const std::vector<int> &vec : {userVec, v1, v2, v3}) {
if (isDiviableIntoTwoIncreasingArrays(vec)) {
std::cout << "Yes\n";
}
else {
std::cout << "No\n";
}
}
}
I think you could resort to using a brute-force solution. Notice here I use vectors (I think you should as well) to store the data, and I use recursion to exhaust all possible combinations. Keep the problem in mind, solve it, and then focus on trivial tasks like parsing the input and matching the way your coursework expects you to enter data. I have added inline comments to make this understandable.
bool canPart(vector<int>& nums, vector<int>& part1, vector<int>& part2, int i); // forward declaration
bool canPartition(vector<int>& nums) {
if(nums.empty()) return false;
vector<int> part1 = {}, part2 = {}; // two partitions
auto ans = canPart(nums, part1, part2, 0); // pass this to our recursive function
return ans;
}
bool canPart(vector<int>& nums, vector<int>& part1, vector<int>& part2, int i)
{
if(i >= nums.size()) // we are at the end of the array is this a solution?
{
if(!part1.empty() && !part2.empty()) // only if the partitions are not empty
{
//if you want you could print part1 and part2 here
//to see what the partition looks like
return true;
}
return false;
}
bool resp1empty = false, resp2empty = false, resp1 = false, resp2 = false;
if(part1.empty()) // first partition is empty? lets add something
{
part1.push_back(nums[i]);
resp1empty = canPart(nums, part1, part2, i + 1);
part1.pop_back(); // well we need to remove this element and try another one
}
else if(nums[i] > part1.back()) // first partition is not empty lets check if the sequence is increasing
{
part1.push_back(nums[i]);
resp1 = canPart(nums, part1, part2, i + 1);
part1.pop_back();
}
if(part2.empty()) // is partition two empty? lets add something
{
part2.push_back(nums[i]);
resp2empty = canPart(nums, part1, part2, i + 1);
part2.pop_back();
}
else if(nums[i] > part2.back()) // check if sequence is increasing
{
part2.push_back(nums[i]);
resp2 = canPart(nums, part1, part2, i + 1);
part2.pop_back();
}
//if any of the recursive paths returns a true we have an answer
return resp1empty || resp2empty || resp1 || resp2;
}
You can now try this out with a main function:
vector<int> v = {3,1,5,2,4};
cout << canPartition(v);
The key takeaway is: make a small test case, add a few more non-trivial test cases, solve the problem, and then look into parsing inputs for the other test cases.
I think this comes down to whether a number has the option of going into the first list, the second list, or neither.
So, we will keep adding numbers to list 1, and if we can't add an element there, we will make it the start of a new list.
Let's say we have both lists going. If we come across an element that we can't add to either of the lists, we return false.
There does arise a situation where we could add an element to either of the 2 lists. In this scenario, we adopt a greedy approach to decide which list to add it to.
We prepare an array of minimum values from the right. For example, for [30,10,50,25,26], we will have an array of minimums as [10,25,25,26,(empty here since last)].
Now, let's trace how we could divide them into 2 lists properly.
30 => List A.
10 => List B. (since you can't add it first list, so make a new one from here)
50 => List A.
Here, 50 applies to come after either 30 or 10. If we choose 10, then we won't be able to accommodate the next 25 in either of the 2 lists and our program would fail here itself, since our lists would look like [30] and [10,50]. However, we could continue further if we add 50 to 30 by checking for the minimum stored for it in our minimums array, which is 25.
25 => List B.
26 => List B.
So, our final lists are [30,50] and [10,25,26].
Time complexity: O(n), Space complexity: O(n) and you can print the 2 lists as well.
If we come across a sorted array which is strictly increasing, we return true for them anyway.
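Here is a sketch in the spirit of this greedy (my own naming). Instead of precomputing the minimums-from-the-right array, it uses the equivalent rule of appending an element to the list with the larger tail whenever both lists could take it, which likewise keeps the smaller tail free for later elements:
#include <iostream>
#include <limits>
#include <utility>
#include <vector>

// Returns true and fills listA/listB if 'a' splits into two strictly increasing lists.
bool split_into_two_increasing(const std::vector<int>& a,
                               std::vector<int>& listA, std::vector<int>& listB)
{
    const long long NONE = std::numeric_limits<long long>::min();  // tail of an empty list
    long long tailA = NONE, tailB = NONE;
    for (int x : a) {
        // keep tailA as the larger tail, so the tighter fit is always tried first
        if (tailB > tailA) { std::swap(tailA, tailB); listA.swap(listB); }
        if (x > tailA)      { listA.push_back(x); tailA = x; }
        else if (x > tailB) { listB.push_back(x); tailB = x; }
        else                return false;                          // x extends neither list
    }
    return true;
}

int main()
{
    std::vector<int> a{30, 10, 50, 25, 26}, A, B;
    if (split_into_two_increasing(a, A, B)) {
        std::cout << "Yes\n";
        for (int v : A) std::cout << v << ' ';     // 30 50
        std::cout << "| ";
        for (int v : B) std::cout << v << ' ';     // 10 25 26
        std::cout << '\n';
    } else {
        std::cout << "No\n";
    }
}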
I have an array of length n. I want to sort the array elements such that my new array elements are like
arr[0] = arr[n/2]
arr[1] = arr[n/4]
arr[2] = arr[3n/4]
arr[3] = arr[n/8]
arr[4] = arr[3n/8]
arr[5] = arr[5n/8]
and so on...
What I have tried, using vectors.
#include <iostream>
#include <algorithm>
#include <vector>
bool myfunc (int l, int r)
{
int m = (l+r)/2;
return m;
}
int main()
{
std::vector<int> myvector = {3,1,20,9,7,5,6,22,17,14,4};
std::sort (myvector.begin(), myvector.end(), myfunc);
for (std::vector<int>::iterator it=myvector.begin(); it!=myvector.end(); ++it)
std::cout << ' ' << *it;
std::cout << '\n';
return 0;
}
So, for an array for length 11, I expect
myvector[0] = arr[5]
myvector[1] = arr[2]
myvector[2] = arr[8]
myvector[3] = arr[0]
myvector[4] = arr[3]
myvector[5] = arr[6]
myvector[6] = arr[9]
myvector[7] = arr[1]
myvector[8] = arr[4]
myvector[9] = arr[7]
myvector[10] = arr[10]
My question is, what should be my function definition of myfunc, such that I get expected output
bool myfunc (int l, int r)
{
int m = (l+r)/2;
//Cant figure out this logic
}
I have tried debugger, but that definitely doesnt help in defining the function! Any clues would be appreciated.
It appears you want a binary search tree (BST) stored in array form, using the same internal represenation which is often used to store a heap.
The expected output is an array such that the one based indexes form a tree, where for any one-based index x, the left node of x is at index 2*x, and the right node of x is at index 2*x+1. Additionally, there are no gaps, meaning every member of the array is used, up to N. (It is a complete binary tree) Since c++ uses zero-based indexing, you need to be careful with this one-based index.
That way of representing a tree is very good for storing a heap data structure, but very bad for a binary search tree where you want to insert things, thus breaking the completeness, and forcing you into a very expensive rebalance.
You asked for a mapping from the sorted array index to this array format. We can build it using a recursive function. This recursive function will take exactly the same amount of work as it would have taken to build the binary tree, and in fact, it is nearly identical to how you would write that function, so this is not an optimal approach. We are doing as much work as the entire problem requires, just to come up with an intermediary step.
The special note here is that we do not want the median. We want to ensure that the left subtree forms a perfect binary tree, so that it fits in the array with no gaps. Therefore, it must have a power of 2, minus 1 nodes. The right subtree can be merely complete.
int log2(int n) {
if (n > 1)
return 1 + log2(n / 2);
return 0;
}
// current_position is the index in bst_indexes
void build_binary_tree_index_mapping(std::vector<int> &bst_indexes, int lower, int upper, int current_position=0) {
if (current_position >= bst_indexes.size())
return;
int power = log2(upper - lower);
int number = 1 << (power); // left subtree must be perfect
int root = lower + number - 1;
// fill current_position
// std::cout << current_position << " = " << root << std::endl;
bst_indexes[current_position] = root;
if (lower < root) {
// fill left subtree
int left_node_position = (current_position + 1) * 2 - 1;
build_binary_tree_index_mapping(bst_indexes, lower, root - 1, left_node_position);
}
if (root < upper) {
// fill right subtree
int right_node_position = (current_position + 1) * 2 + 1 - 1;
build_binary_tree_index_mapping(bst_indexes, root + 1, upper, right_node_position);
}
}
This gives me {7, 3, 9, 1, 5, 8, 10, 0, 2, 4, 6} as the index mapping. It differs from yours because you left spaces in the lower left of the tree, and I am ensuring that the array is completely filled, so I had to shift the bottom row over, then the BST property required reordering everything.
As a side note, in order to use this mapping, you first must sort the data, which is also about the same complexity as the whole problem.
Additionally, the sorted vector already gives you a superior way to do a binary search, using std::binary_search http://en.cppreference.com/w/cpp/algorithm/binary_search.
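For completeness, a minimal usage sketch of std::binary_search on the question's data once sorted:
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    // the question's values, sorted
    std::vector<int> sorted{1, 3, 4, 5, 6, 7, 9, 14, 17, 20, 22};
    std::cout << std::boolalpha
              << std::binary_search(sorted.begin(), sorted.end(), 9) << '\n'   // true
              << std::binary_search(sorted.begin(), sorted.end(), 8) << '\n';  // false
}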
I have an application in which integers are presented in no particular order. The integers presented can be repeat values. I have to maintain them in a sorted fashion. Each time a new entry is presented, it needs to be placed in the appropriate position so that the sorted order is maintained.
std::multiset seems to be one suggested solution with the best time, O(log n), for insertion.
Now, in addition to this sorted multiset, I have to maintain the cumulative sums in another container.
That is, if the sorted entries are:
1, 5, 7, 9 (in indices 0, 1, 2 and 3)
the cumulative sum container would be:
1, 6, 13, 22 (in indices 0, 1, 2 and 3)
I am having trouble figuring out how to use the std::multiset iterator that is returned after each insert(int) operation into the multiset in order to update the cumulative sum container. Note that the cumulative sum will only affect in those entries and indices that have to be moved because of the insert operation.
That is, if to the above list, insert(8) has to be performed, the updated containers would be:
Sorted entries:
1, 5, 7, 8, 9 (in indices 0, 1, 2, 3 and 4)
Cumulative sum:
1, 6, 13, 21, 30 (in indices 0, 1, 2, 3 and 4. Note that only entries in indices 3 and 4 are affected.)
At present, the only way I have been able to implement this is by using two arrays, one for the array of values and one for the cumulative sum. A working code that implements this is presented below:
#include <cstdio>
#include <iostream>
int *arr = new int[100];//Array to maintain sorted list
int *cum_arr = new int[100];//Array to maintain cumulative sum
void insert_into_position(int val, int &last_valid_index_after_insertion) {
//Inserts val into arr such that after insertion
//arr[] has entries in ascending order.
int postoadd = last_valid_index_after_insertion;
//index in array at which to insert val
//initially set to last_valid_index_after_insertion
//Search from end of array until you find the right
//position at which to insert val
for (int ind = last_valid_index_after_insertion - 1; ind >= 0; ind--) {
if (arr[ind] > val) {
postoadd--;
}
else {
break;
}
}
//Move everything from and including postoadd one position to the right.
//Update the cumulative sum array as you go
for (int ind = last_valid_index_after_insertion - 1; ind >= postoadd; ind--) {
arr[ind + 1] = arr[ind];
cum_arr[ind + 1] = cum_arr[ind] + val;
}
//Update entry in index postoadd
arr[postoadd] = val;
if (postoadd > 0)
cum_arr[postoadd] = cum_arr[postoadd - 1] + val;
else
cum_arr[0] = val;
last_valid_index_after_insertion++;
}
int main(void)
{
int length = 0;
insert_into_position(1, length);
insert_into_position(5, length);
insert_into_position(7, length);
insert_into_position(9, length);
printf("\nPrint sorted array\n");
for (int i = 0; i < length; i++)
printf("%d ", arr[i]);
printf("\nPrint Cumulative sum array\n");
for (int i = 0; i < length; i++)
printf("%d ", cum_arr[i]);
insert_into_position(8, length);
printf("\nPrint sorted array\n");
for (int i = 0; i < length; i++)
printf("%d ", arr[i]);
printf("\nPrint Cumulative sum array\n");
for (int i = 0; i < length; i++)
printf("%d ", cum_arr[i]);
getchar();
}
As can be seen from this code, to calculate the cumulative sum, the integer array index, postoadd can be used until the end of the array is reached.
Is there any combination of containers that can perform better/more efficiently than the two integer arrays?
The return type of a std::multiset.insert(int) operation is an iterator that points to the inserted entry. Can this iterator be used to update another container that stores the cumulative sum?
Use an std::multimap, which keeps the keys sorted, and allows for duplicate keys.
Example:
#include <iostream>
#include <map>
int main ()
{
std::multimap<int,int> mymultimap = { {1, 1}, {5, 6}, {7, 13}, {9, 22} };
std::multimap<int,int>::iterator it;
it = mymultimap.insert (std::pair<int,int>(8, 8));
// fix up the cumulative sums from the inserted position onwards
if (it != mymultimap.begin())
it->second = std::prev(it)->second + it->second;
++it;
while (it != mymultimap.end()) {
it->second = std::prev(it)->second + it->first;
++it;
}
// showing contents:
std::cout << "mymultimap contains:\n";
for (it=mymultimap.begin(); it!=mymultimap.end(); ++it)
std::cout << (*it).first << " => " << (*it).second << '\n';
return 0;
}
Output:
mymultimap contains:
1 => 1
5 => 6
7 => 13
8 => 21
9 => 30
PS: Another approach would be to use std::multiset where every element would be std::pair, where the first would be the number, and the second the cumulative sum.