I am solving Maximum Subarray Sum with One Deletion on LeetCode:
Given an array of integers, return the maximum sum for a non-empty subarray (contiguous elements) with at most one element deletion. For input arr = [1,-2,0,3], output should be 4.
I came up with a recursive solution as below:
class Solution {
public:
    int helper(vector<int>& n, vector<int>& cache, int startIndex) {
        if (startIndex >= n.size()) return INT_MIN;
        if (cache[startIndex] != -1) return cache[startIndex];
        int allInclusiveSum = 0, sumWithOneDel = 0, lowestVal = INT_MAX, maxVal = INT_MIN;
        for (int i = startIndex; i < n.size(); i++) {
            allInclusiveSum += n[i];
            maxVal = max(maxVal, allInclusiveSum);
            if (i != startIndex) {
                lowestVal = min(lowestVal, n[i]);
                sumWithOneDel = allInclusiveSum - lowestVal;
                maxVal = max(maxVal, sumWithOneDel);
            }
        }
        maxVal = max(maxVal, helper(n, cache, startIndex + 1));
        return cache[startIndex] = maxVal;
    }
    int maximumSum(vector<int>& arr) {
        int i = 0, first = arr[0];
        for (i = 1; i < arr.size(); i++)
            if (arr[i] != first) break;
        if (i == arr.size()) return first;
        vector<int> cache(arr.size(), -1);
        return helper(arr, cache, 0);
    }
};
Unfortunately, this TLEs. Since I recurse with startIndex+1, I don't think I am actually encountering overlapping sub-problems.
Is there a way I could memoize my solution? If not, why not?
With dynamic programming, we would just define a std::vector with N rows and two columns, then run through our arr in one pass, and use std::max to find max_sum:
#include <vector>
#include <algorithm>
class Solution {
public:
    static inline int maximumSum(const std::vector<int>& arr) {
        int length = static_cast<int>(arr.size());
        // dynamic_sums[i][0]: best sum of a subarray ending at i with no deletion
        // dynamic_sums[i][1]: best sum of a subarray ending at i with at most one deletion
        std::vector<std::vector<int>> dynamic_sums(length, std::vector<int>(2, 0));
        dynamic_sums[0][0] = arr[0];
        int max_sum = arr[0];
        for (int row = 1; row < length; row++) {
            dynamic_sums[row][0] = std::max(arr[row], dynamic_sums[row - 1][0] + arr[row]);
            dynamic_sums[row][1] = std::max(arr[row], std::max(dynamic_sums[row - 1][1] + arr[row], dynamic_sums[row - 1][0]));
            max_sum = std::max(max_sum, std::max(dynamic_sums[row][0], dynamic_sums[row][1]));
        }
        return max_sum;
    }
};
This is O(N) time and O(N) memory.
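Since each dynamic_sums row reads only the row directly above it, the table can be collapsed to two rolling scalars. Here's a minimal sketch of that O(1)-memory variant of the same recurrence (my addition, with a hypothetical maximumSumO1 name; assumes a non-empty arr):

#include <algorithm>
#include <vector>

int maximumSumO1(const std::vector<int>& arr) {
    int n = static_cast<int>(arr.size());
    int keep = arr[0]; // best subarray ending here with no deletion
    int del = 0;       // best subarray ending here with one deletion (mirrors dynamic_sums[0][1])
    int best = arr[0];
    for (int i = 1; i < n; ++i) {
        int new_del = std::max({arr[i], del + arr[i], keep}); // extend a deleted run, or delete arr[i] itself
        keep = std::max(arr[i], keep + arr[i]);               // plain Kadane step
        del = new_del;
        best = std::max({best, keep, del});
    }
    return best;
}

For the example arr = [1,-2,0,3] this returns 4, matching the table-based version.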
References
For additional details, see the Discussion Board: there are plenty of accepted solutions in a variety of languages, with explanations, efficient algorithms, and asymptotic time/space complexity analysis.
Related
I am learning DSA and, while practising my LeetCode questions, I came across this question: https://leetcode.com/problems/find-pivot-index/.
Whenever I use vector prefix(size), I am greeted with errors, but when I do not pass the size, the program runs fine.
Below is the code with the size:
class Solution {
public:
    int pivotIndex(vector<int>& nums) {
        // prefix[] stores the prefix sum of nums[]
        vector<int> prefix(nums.size());
        int sum2 = 0;
        int l = nums.size();
        // Prefix sum of nums in prefix:
        for (int i = 0; i < l; i++) {
            sum2 = sum2 + nums[i];
            prefix.push_back(sum2);
        }
        // Total stores the total sum of the vector given
        int total = prefix[l - 1];
        for (int i = 0; i < l; i++)
        {
            if ((prefix[i] - nums[i]) == (total - prefix[i]))
            {
                return i;
            }
        }
        return -1;
    }
};
I would really appreciate it if someone could explain this to me.
Thanks!
You create prefix with the same size as nums and then push_back the same number of elements, so prefix is twice the size of nums after the first loop: the first half holds default-initialized zeros, and the prefix sums land in the second half. The second loop only ever reads the first half (the zeros), so the algorithm is broken.
I suggest that you simplify your algorithm. Keep a running sum for the left and the right side. Add to the left and remove from the right as you loop.
Example:
#include <numeric>
#include <vector>
int pivotIndex(const std::vector<int>& nums) {
    int lsum = 0;
    int rsum = std::accumulate(nums.begin(), nums.end(), 0);
    for (std::size_t idx = 0; idx < nums.size(); ++idx) {
        rsum -= nums[idx]; // remove from the right
        if (lsum == rsum) return static_cast<int>(idx);
        lsum += nums[idx]; // add to the left
    }
    return -1;
}
If you use the vector constructor with an integer parameter, you get a vector with nums.size() elements initialized to the default value (0 for int). You should use indexing to set the elements:
...
for (int i = 0; i < l; ++i) {
    sum2 = sum2 + nums[i];
    prefix[i] = sum2;
}
...
If you want to use the push_back method, you should create a zero-size vector: use the constructor without parameters. You can use the reserve method to allocate memory before adding new elements to the vector, as sketched below.
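For illustration, a minimal sketch of that push_back variant (reusing the names from the question):

std::vector<int> prefix;       // empty vector: size() == 0
prefix.reserve(nums.size());   // allocates capacity; size() stays 0
int sum2 = 0;
for (int x : nums) {
    sum2 += x;
    prefix.push_back(sum2);    // grows to exactly nums.size() elements
}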
I am trying to solve this codeforces problem
http://codeforces.com/contest/281/problem/D
Given an array of integers, find the maximum XOR of the first and second maximum elements over all of its subsequences.
I am not able to figure out the optimal approach to solve this problem. Some techniques I tried involved sorting and stacks, but I could not figure out the right solution.
I googled and found the problem setter's code for the solution, but I could not understand it, as it is in C++ and I am new to the language.
Below is the problem setter's code in C++:
#include <algorithm>
#include <set>
#include <utility>
using namespace std;
// using namespace io; // from the setter's contest template; not needed for this excerpt

typedef pair<int, int> II; // (value, index) pairs; this typedef was missing from the posted excerpt
typedef set<int> Set;
typedef set<int, greater<int> > SetRev;

namespace solution {
    const int SIZE = 100000 + 11;
    int n;
    int A[SIZE];
    II S[SIZE];
    Set P;
    SetRev P_rev;
    int result;
}

namespace solution {
    class Solver {
    public:
        void solve() {
            normalise();
            result = get_maximum_xor();
        }
        int get_maximum_xor() {
            int res = 0;
            for (int i = 0; i < n; i++) {
                int current_value = S[i].first;
                Set::iterator it_after = P.upper_bound(S[i].second);
                Set::iterator it_before = P_rev.upper_bound(S[i].second);
                if (it_after != P.end()) {
                    int after_value = A[*it_after];
                    res = max(res, current_value ^ after_value);
                }
                if (it_before != P_rev.end()) {
                    int before_value = A[*it_before];
                    res = max(res, current_value ^ before_value);
                }
                P.insert(S[i].second);
                P_rev.insert(S[i].second);
            }
            return res;
        }
        void normalise() {
            for (int i = 0; i < n; i++) {
                S[i] = II(A[i], i);
            }
            sort(S, S + n, greater<II>());
        }
    };
}
Can someone please explain the solution and the approach used? I understand it in pieces but not as a whole.
Ok, so Solver::solve() starts by calling normalise:
void normalise() {
    for (int i = 0; i < n; i++) {
        S[i] = II(A[i], i);
    }
    sort(S, S + n, greater<II>());
}
What that's doing is taking an array A of integers - say {4, 2, 9} - and populating an array S in which A's values are paired with the indices at which they appear in A, sorted by value in descending order (note the greater<II> comparator) - for our example, {{9, 2}, {4, 0}, {2, 1}}.
Then the solver calls get_maximum_xor()...
for (int i = 0; i < n; i++) {
    int current_value = S[i].first;
    Set::iterator it_after = P.upper_bound(S[i].second);
    Set::iterator it_before = P_rev.upper_bound(S[i].second);
The "for i" loop is used to get successive sorted values from S (those values originally from A). While you haven't posted a complete program, so we can't know for sure nothing's prepopulating any values in P, I'll assume that. We do know P's is a std::map and upper_bound searches to find the first element in P greater than S[i].second (the index at which current_value appeared in A) and values above, then something similar for P_rev which is a std::map in which values are sorted in descending order, likely it will be kept populated with the same values as P but again we don't have the code.
Then...
if (it_after != P.end()) {
    int after_value = A[*it_after];
    res = max(res, current_value ^ after_value);
}
...is saying that if any index stored in P is greater than S[i].second, look up A at the index it_after found (since S is visited in descending order of value, P holds the indices of elements at least as large as the current one), and if current_value XORed with that value from A is more than any earlier result candidate (res), update res with the new larger value.
It does something similar with P_rev.
Finally...
P.insert(S[i].second);
P_rev.insert(S[i].second);
Adds the index of current_value in A to P and P_rev for future iterations.
So, while I haven't explained why or how the algorithm works (I haven't even read the problem statement), I think that should make it clear what the C++ is doing which is what you said you're struggling with - you're on your own for the rest ;-).
The task is to extract the k smallest elements and their indices from a double array, possibly including more elements that are tied with the k-th smallest one. E.g.:
input: {3.3,1.1,6.5,4.2,1.1,3.3}
output (k=3): {1,1.1} {4,1.1} {0,3.3} {5,3.3}
[This seems like a pretty common task, but I couldn't find a similar thread on SO that handles ties. Hopefully I didn't miss one and am not duplicating the question.]
I came up with the following solution, which works and seems to be fairly efficient complexity-wise. E.g., for 1M random doubles and k=10 it takes ~40ms with MSVC 2013. I wonder if there's a better/cleaner/more efficient (for large data and/or large k) way to perform this task (validation of the k value and similar things are out of scope here). Avoid allocating the queue with all elements? Make use of std::partial_sort or std::nth_element?
typedef std::pair<double, int> idx_pair;
typedef std::priority_queue<idx_pair, std::vector<idx_pair>, std::greater<idx_pair>> idx_queue;

std::vector<idx_pair> getKSmallest(std::vector<double> const& data, int k)
{
    idx_queue q;
    {
        std::vector<idx_pair> idxPairs(data.size());
        for (std::size_t i = 0; i < data.size(); i++)
            idxPairs[i] = idx_pair(data[i], i);
        q = idx_queue(std::begin(idxPairs), std::end(idxPairs));
    }
    std::vector<idx_pair> result;
    auto topPop = [&q, &result]()
    {
        result.push_back(q.top());
        q.pop();
    };
    for (int i = 0; i < k; i++)
        topPop();
    auto const largest = result.back().first;
    while (q.empty() == false)
    {
        if (q.top().first == largest)
            topPop();
        else
            break;
    }
    return result;
}
Working example is here.
Here's an alternative solution, suggested by @piotrekg2 - using nth_element with average O(N) complexity:
#include <cmath>
#include <limits>

bool equal(double value1, double value2)
{
    return value1 == value2 || std::abs(value2 - value1) <= std::numeric_limits<double>::epsilon();
}

std::vector<idx_pair> getNSmallest(std::vector<double> const& data, int n)
{
    std::vector<idx_pair> idxPairs(data.size());
    for (std::size_t i = 0; i < data.size(); i++)
        idxPairs[i] = idx_pair(data[i], i);
    // Put the n-th smallest pair at index n - 1: everything before it is no
    // larger, everything after it is no smaller.
    std::nth_element(std::begin(idxPairs), std::begin(idxPairs) + n - 1, std::end(idxPairs));
    std::vector<idx_pair> result(std::begin(idxPairs), std::begin(idxPairs) + n);
    auto const largest = result.back().first;
    for (auto it = std::begin(idxPairs) + n; it != std::end(idxPairs); ++it)
        if (equal(it->first, largest))
            result.push_back(*it);
    return result;
}
Indeed, the code looks a bit cleaner. However, I've run some tests, and empirically this solution is slightly slower than the original one with std::priority_queue.
Note: the answer below by Petar offers a similar solution using std::nth_element, which, in my experiments, performs slightly better than this one and also better than the std::priority_queue solution - perhaps because it eliminates the operations on pairs and works with primitive doubles instead.
As the asker pointed out, I suggest first copying the vector of doubles and using nth_element to find the kth element.
Then do a linear scan and collect the elements that are smaller than or equal to the kth element. The time complexity is linear.
However, care is needed when comparing doubles.
vector<idx_pair> getKSmallest(vector<double> const& data, int k)
{
    vector<double> data_copy = data;
    // Place the k-th smallest value at index k - 1.
    nth_element(data_copy.begin(), data_copy.begin() + k - 1, data_copy.end());
    double kth_element = data_copy[k - 1];
    vector<idx_pair> result;
    for (std::size_t i = 0; i < data.size(); i++)
        if (data[i] <= kth_element)
            result.push_back({data[i], (int)i}); // idx_pair is (value, index)
    return result;
}
update: It is also possible to find the kth_element by maintaining a max-heap of size at most k.
The heap needs only O(k) memory instead of the O(n) used by the nth_element method.
It takes O(n log k) time, but if k is small I think it should be comparable to the O(n) method.
I am not sure about it, but my reasoning is that the heap may stay in cache and you don't need to spend time copying the data.
vector<idx_pair> getKSmallest(vector<double> const& data, int k)
{
    priority_queue<double> pq; // max-heap of the k smallest values seen so far
    for (auto d : data) {
        if ((int)pq.size() >= k && pq.top() > d) {
            pq.push(d);
            pq.pop();
        }
        else if ((int)pq.size() < k)
            pq.push(d);
    }
    double kth_element = pq.top();
    vector<idx_pair> result;
    for (std::size_t i = 0; i < data.size(); i++)
        if (data[i] <= kth_element)
            result.push_back({data[i], (int)i}); // idx_pair is (value, index)
    return result;
}
I want to somehow sort an array, so that it looks like -
a[0]>=a[1]<=a[2]>=a[3]<=a[4]
I don't know where to start.
Any suggestion would be appreciated!
Sort the entire array (choose any sorting algorithm you wish). Then take each pair from the beginning and swap the elements within the pair:
2,4,1,5,6,3,7,9,8,10
Sorted to : 1,2,3,4,5,6,7,8,9,10
Pair and swap : (2,1),(4,3),(6,5),(8,7),(10,9)
result : 2,1,4,3,6,5,8,7,10,9
Here's the code, obviously you can alter the array length and numbers to meet your specifications.
#include <iostream>
#include <algorithm>
using namespace std;

void special_Sort(int *array, int size) {
    // doesn't return a value, changes the values inside the array
    int temp; // for swapping purposes
    sort(array, array + size); // sorts the array in ascending order
    for (int i = 0; i + 1 < size; i = i + 2) { // i + 1 < size also guards odd lengths
        temp = array[i];
        array[i] = array[i + 1];
        array[i + 1] = temp;
    }
    // array is now sorted into the required pattern
}

int main() {
    // array declaration, call the function, etc...
    int array[10] = {2, 4, 1, 5, 6, 3, 7, 9, 8, 10};
    int *pointer = &array[0];
    special_Sort(pointer, 10);
    // if you want to print the result
    // for(int i = 0; i < 10; i++)
    //     cout << array[i] << " ";
    return 0;
}
I'm assuming here that the relations are inclusive, in the sense that they continue to the end of the line - a[0]>=max(a[1],a[2],...), a[1]<=min(a[2],a[3],...), and so on. Otherwise the result isn't uniquely defined, as {5,4,3,2,1} can get sorted, for example, into {5,1,4,3,2} or {3,2,5,1,4}.
So, assuming this is the case, it's easily solved by sorting the entire array in descending order, then just interleave them -
a[0], a[n-1], a[1], a[n-2], ...
and so on. Just loop with two indices, one starting from the beginning and one from the end, or use something like this -
for (i = 0; i < n/2; i++) {
    result[i*2] = sorted[i];
    result[i*2+1] = sorted[n-1-i];
}
if (n % 2)
    result[n-1] = sorted[n/2];
If you only want the values to rise and fall alternately, you can achieve this in one pass by checking adjacent values and swapping the elements whenever they do not satisfy the constraints of your sort.
I don't have a compiler on me at the moment and you'd have to implement the swap, but something like this could work:
for (int i = 0; i + 1 < a.size(); i++) { // stop before the last element; assumes a is a std::vector<int>
    if (i % 2 == 0) { // even index: a[i] should be >= a[i+1]
        if (a[i] < a[i+1]) {
            swap(a[i], a[i+1]);
        }
    } else { // odd index: a[i] should be <= a[i+1]
        if (a[i] > a[i+1]) {
            swap(a[i], a[i+1]);
        }
    }
}
I don't disagree with the other answers posted here; pick whichever fits the relation you need between the even- and odd-indexed elements.
Steps taken:
1) generate some random array
2) sort array
3) switch elements as needed with alternate <=, >= comparisons
Here's the code that does that (disregard the random generator, it's just an easy way to fill an array):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <windows.h> // for Sleep(); this snippet is Windows-specific

#define sizeArr 50

int cmpfunc(const void *a, const void *b);
int randomGenerator(int min, int max);

int main(void)
{
    int array[sizeArr];
    int i, temp;
    for (i = 0; i < sizeArr; i++)
    {
        array[i] = randomGenerator(1, 1000);
        Sleep(2); // force clock tick for new srand() to be effective in rand() generator
    }
    // sort array
    qsort(array, sizeArr, sizeof(int), cmpfunc);
    // walk the sorted array, swapping neighbours to alternate >= and <=
    for (i = 0; i < sizeArr - 1; i++)
    {
        if (i % 2 == 0) // alternate between >= && <=
        {
            if (array[i+1] >= array[i])
            {
                temp = array[i+1];
                array[i+1] = array[i];
                array[i] = temp;
            }
        }
        else
        {
            if (array[i+1] <= array[i])
            {
                temp = array[i+1];
                array[i+1] = array[i];
                array[i] = temp;
            }
        }
    }
    getchar();
    return 0;
}

int cmpfunc(const void *a, const void *b)
{
    return (*(int*)a - *(int*)b);
}

int randomGenerator(int min, int max)
{
    int random = 0, trying = 1;
    srand(clock());
    while (trying)
    {
        random = (rand() / 32767.0) * (max + 1);
        (random >= min) ? (trying = 0) : (trying = 1);
    }
    return random;
}
What is the fastest method to check if all elements of an array (preferably an integer array) are equal? Till now I have been using the following code:
bool check(int array[], int n)
{
    bool flag = 0;
    for (int i = 0; i < n - 1; i++)
    {
        if (array[i] != array[i + 1])
            flag = 1;
    }
    return flag;
}
int check(const int a[], int n)
{
    // scan backwards, stopping at the first element that differs from a[0]
    while (--n > 0 && a[n] == a[0]);
    return n != 0;
}
Here is a solid solution which is valid C++11.
The advantage is that you do not need to manually play with indexes or iterators. It is a best practice to
prefer algorithm calls to handwritten loops [Herb Sutter - C++ Coding Standards]
I think this will be equally efficient as Paul R's solution.
#include <algorithm>

bool check(const int a[], int n)
{
    // returns true when NOT all elements are equal, matching the question's logic
    return !std::all_of(a, a + n, [a](int x) { return x == a[0]; });
}
Once you have found a mismatching element you can break out of the loop:
bool check(const int array[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        if (array[i] != array[i + 1])
            return true;
    }
    return false;
}
If this is performance-critical then it can be further optimised slightly as:
bool check(const int array[], int n)
{
    const int a0 = array[0];
    for (int i = 1; i < n; i++)
    {
        if (array[i] != a0)
            return true;
    }
    return false;
}
Recast the array to a larger data type. E.g., operate on 64-bit ints, or use SSE or AVX intrinsics for 128- or 256-bit operation. For example, the SSE2 comparison intrinsic is _mm_cmpeq_epi32; combine its per-chunk results into an accumulator (with _mm_or_si128 or _mm_and_si128, depending on how you orient the test), extract the verdict with repeated application of _mm_srli_si128 and _mm_cvtsi128_si32, and check the accumulator every few hundred iterations for early exit.
Make sure to operate on aligned memory, check the unaligned start and end as ints, and check the first packed element with itself.
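A minimal sketch of that idea (my own illustration, not the answerer's code; for brevity it uses unaligned loads plus a scalar tail rather than the aligned start/end handling recommended above, accumulates with _mm_and_si128 since _mm_cmpeq_epi32 yields all-ones lanes on equality, and keeps the question's true-means-mismatch convention):

#include <emmintrin.h> // SSE2

bool check_sse2(const int a[], int n)
{
    if (n < 2) return false;
    const __m128i ref = _mm_set1_epi32(a[0]); // broadcast a[0] to all four lanes
    __m128i acc = _mm_set1_epi32(-1);         // all-ones: "everything equal so far"
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a + i));
        // lanes become 0xFFFFFFFF where v == ref, 0 where they differ
        acc = _mm_and_si128(acc, _mm_cmpeq_epi32(v, ref));
        // (a real implementation would test acc here every few hundred
        // iterations for the early exit the answer mentions)
    }
    // fold the four lanes down to a single int, as suggested above
    __m128i folded = _mm_and_si128(acc, _mm_srli_si128(acc, 8));
    folded = _mm_and_si128(folded, _mm_srli_si128(folded, 4));
    bool all_equal = _mm_cvtsi128_si32(folded) == -1;
    for (; i < n; ++i) // scalar tail for the last n % 4 elements
        all_equal = all_equal && (a[i] == a[0]);
    return !all_equal;
}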
For programmer efficiency you may try the following, all in one line:

vector<int> v{1, 1, 1, 1};
bool same = all_of(v.cbegin(), v.cend(), [&r = v[0]](int value) -> bool { return value == r; });

I did not test-run this code, so let me know if there is a syntax error. (Note the init-capture [&r = v[0]] requires C++14.)
Find a library that's available on your platform that supports threading or parallel-for loops, and split the computation out such that different cores test different ranges of the array.
Some available libraries are listed here:
http://parallel-for.sourceforge.net/parallelfor.html
Or possibly, you can make use of the parallelism that many GPUs offer.
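For illustration, a sketch using plain std::thread rather than any particular library (my own addition; it assumes the same true-means-mismatch convention as the question, and each worker compares its own range against a[0]):

#include <algorithm>
#include <atomic>
#include <thread>
#include <vector>

bool check_parallel(const int a[], int n)
{
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<bool> mismatch{false};
    std::vector<std::thread> workers;
    int chunk = (n + static_cast<int>(cores) - 1) / static_cast<int>(cores);
    for (unsigned t = 0; t < cores; ++t) {
        int lo = static_cast<int>(t) * chunk;
        int hi = std::min(n, lo + chunk);
        if (lo >= hi) break;
        workers.emplace_back([=, &mismatch] {
            for (int i = lo; i < hi; ++i) {
                if (mismatch.load(std::memory_order_relaxed)) return; // another worker already found one
                if (a[i] != a[0]) { mismatch.store(true, std::memory_order_relaxed); return; }
            }
        });
    }
    for (auto& w : workers) w.join();
    return mismatch;
}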
bool check(int array[], int n)
{
    // The 1st element is checked against all the others, which decreases the
    // number of iterations by 1, and the function returns immediately on a
    // mismatch. The requirement is to check if all the elements are equal:
    // if the 1st element is equal to the others, then all elements are equal;
    // otherwise they are not.
    for (int i = 1; i < n; i++)
    {
        if (array[0] != array[i])
            return false;
    }
    return true;
}
Well, it's basically an O(n) operation, so you can't do much better than what you have, other than dispensing with the flag and just doing return false; on the first failure and return true; after the iteration.
In theory, I would propose this:
bool check_single(const int a[], int n)
{
    for (int i = 1; i < n; ++i) {
        if (a[0] != a[i]) { return false; }
    }
    return true;
}
Compared to other (already proposed) versions:
a[0] will be hoisted outside the loop by the compiler, meaning a single array access within the loop;
we scan forward from the start of the array, which is better (access-wise) than loading a[0] and then looping backwards from the end.
Obviously, it still checks N elements and thus is O(N).
A fast hash-mapping technique:

#include <unordered_map>

bool areSame(int a[], int n)
{
    std::unordered_map<int, int> m; // hash map storing the frequency of every element
    for (int i = 0; i < n; i++)
        m[a[i]]++;
    // a single distinct key means all elements are equal
    return m.size() == 1;
}
I think the following is more readable than the highest-rated answer, and I would wager it is more efficient too (but I haven't benchmarked it):

bool check(int a[], int n)
{
    if (n)
    {
        auto first = a[0];
        for (int i = 1; i < n; i++)
        {
            if (a[i] != first) return false;
        }
        return true;
    }
    return true; // change to false for the OP's logic; I prefer logical true here
}
// Checks whether two arrays a and b are elementwise identical over `size` elements.
bool check_identity(int a[], int b[], const int size)
{
    int i = 0;
    while ((i < size - 1) && (a[i] == b[i])) i++;
    return (a[i] == b[i]);
}