I am writing a function that takes a pointer to a dynamically allocated array, along with the array's length. I am trying to find the second smallest sum of its contiguous sub arrays.
I have written code that calculates the second smallest value in an array, and also a piece of code that calculates the sum of all the contiguous sub arrays. I was hoping that I would be able to "merge" these two pieces together to get my desired end result, but I am getting stuck. I would really appreciate any help.
Thank you.
#include <iostream>
using namespace std;
int secondSmallestSum(int *numbers,int length)
{
//Below shows the sum of all contiguous sub arrays.
for(i = 0; i<= length; ++i)
{
int sum = 0;
for(int j = i; j <= length; ++j)
{
sum+=*(numbers+j);
}
}
//Below calculates the second smallest element in an array
int smallest, secondsmallest;
if (*numbers < *(numbers+1))
{
smallest = *numbers;
secondsmallest = *(numbers+1) ;
}
else {
smallest = *(numbers+1) ;
secondsmallest = *(numbers) ;
}
for (i = 2; i < length; i++) {
if (*(numbers+i) < smallest)
{
secondsmallest = smallest;
smallest = *(numbers+i);
}
else if (*(numbers+i) < secondsmallest)
{
secondsmallest = *(numbers+i);
}
}
}
You can do something like this (of course you need to add range checking).
#include <iostream>
#include <vector>
#include <algorithm>
int main(int argc, char** argv) {
std::vector<int> v{3, 1, 4, 5, 6, 2};
std::nth_element(v.begin(), v.begin() + 1, v.end());
std::cout << "The second smallest element is " << v[1] << "\n";
}
Note: using nth_element will change the order of the elements in the vector.
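If the original order has to stay intact, one option (my suggestion, not part of the answer above) is to copy the two smallest elements out with std::partial_sort_copy instead of rearranging the vector:
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{3, 1, 4, 5, 6, 2};
    std::vector<int> two(2);
    // Copies the two smallest elements of v, in ascending order, into `two`;
    // v itself stays in its original order.
    std::partial_sort_copy(v.begin(), v.end(), two.begin(), two.end());
    std::cout << "The second smallest element is " << two[1] << "\n";
}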
Correct me if I understood you wrong:
looking at "find the second smallest sum of its contiguous sub arrays" and the code you posted, I'm assuming your logic is
calculate all sums of all possible contiguous sub arrays
find the second smallest value in the sums
Actually there is a well known algorithm, Kadane's algorithm, that serves a similar purpose (only Kadane's finds THE smallest, not second smallest). You may want to Google it to find more.
Back to your question, I believe the following code does what you want. The code is a variant of Kadane's algorithm.
#include <climits> // for INT_MAX
int findSecondMinOfContiguousSubarray(int arr[], int n)
{
// to store the minimum value that is ending
// up to the current index
int min_ending_here = INT_MAX;
int min = INT_MAX; // absolute min
int min_second = INT_MAX - 1; // second min <- this is what you want
// traverse the array elements
for (int i = 0; i<n/*it is <, not <=*/; i++)
{
// if min_ending_here > 0, then it could not possibly
// contribute to the minimum sum further
if (min_ending_here > 0)
min_ending_here = arr[i];
// else add the value arr[i] to min_ending_here
else
min_ending_here += arr[i];
// update min and min_second
if (min_second > min_ending_here) {
if (min > min_ending_here) {
min_second = min;
min = min_ending_here;
}
else {
min_second = min_ending_here;
}
}
}
return min_second;
}
BTW, I think your code (the piece under //Below shows the sum of all contiguous sub arrays.) cannot find all contiguous sub arrays.
For example, with arr = {1, 2, 3}, your code only considers {1,2,3}, {2,3} and {3} as contiguous sub arrays, while in fact {1}, {2} and {1,2} should also be considered.
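For what it's worth, here is a rough sketch of how the two pieces could be merged directly: enumerate every contiguous sub array with two corrected loops (note < instead of <=, and the sum built up incrementally from i to j) and keep track of the two smallest sums as you go. The function name, returning INT_MAX when no second smallest exists, and ignoring tied sums are my own choices, not something from your code:
#include <climits>

// Sketch: second smallest sum over all contiguous sub arrays, O(n^2).
// Treats sums equal to the smallest as the same value; drop the != check
// if duplicate sums should count separately.
int secondSmallestSubarraySum(const int *numbers, int length)
{
    int smallest = INT_MAX;
    int secondSmallest = INT_MAX;
    for (int i = 0; i < length; ++i)      // start index of the sub array
    {
        int sum = 0;
        for (int j = i; j < length; ++j)  // extend the sub array one element at a time
        {
            sum += numbers[j];            // sum of numbers[i..j]
            if (sum < smallest) {
                secondSmallest = smallest;
                smallest = sum;
            } else if (sum < secondSmallest && sum != smallest) {
                secondSmallest = sum;
            }
        }
    }
    return secondSmallest;                // INT_MAX means "no second smallest found"
}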
Brute force, O(n^2) complexity (in C++ style rather than C style):
#include <numeric>   // for std::accumulate
#include <vector>

template<typename Container, typename Func>
void forEachSubrange(Container &container, Func &&f)
{
for (auto subBegin = container.begin(); subBegin != container.end(); ++subBegin)
{
auto subEnd = subBegin;
do {
++subEnd;
f(subBegin, subEnd);
} while (subEnd != container.end());
}
}
int secondSmallestSubrangeSum(const std::vector<int> &a)
{
int firstSum = 0; // empty sub range has zero sum
int secondSum = 0;
forEachSubrange(a, [&firstSum, &secondSum](auto b, auto e) {
auto sum = std::accumulate(b, e, 0);
if (sum < firstSum) {
secondSum = firstSum;
firstSum = sum;
} else if (sum < secondSum) {
secondSum = sum;
}
});
return secondSum;
}
I'm sure it is possible to achieve O(n).
https://wandbox.org/permlink/9cplKBIpfZBPpZ27
or more talkative https://wandbox.org/permlink/X21TdH6xtbMLpV19
Related
Problem:
You are given an array A of n elements. You have to remove exactly n/2 elements from the array and add them to another array B (initially empty). Find the maximum and minimum values of the difference between these two arrays. The difference between the two arrays is sum(abs(A[i] - B[i])).
The code below only works if the size of the array (N) is even.
Can someone provide a solution which works when the size of the array is odd as well?
#include <bits/stdc++.h>
using namespace std;
//This code only works for even number of elements
int main(){
int n;
cin>>n;
vector<int> a(n);
for(int i=0;i<n;i++){
cin>>a[i];
}
sort(a.begin(), a.end());
long long mn = 0,mx = 0;
for(int i=0;i<n/2;i++){
mx+=a[i+n/2]-a[i];
mn+=a[2*i+1]-a[2*i];
}
cout<<abs(mn)<<" "<<abs(mx)<<" ";
return 0;
}
I like to split the work up so it is easy to see where efficiencies can be made in the algorithm. The following is very similar to your solution, but works fine for both even and odd length vectors. The average runtime is O(n log n) because of the sort, with O(n) space for the extra vectors.
// Given two arrays of equal length, returns their "Difference", O(n) runtime
int ArrayDiff(vector<int> A, vector<int> B)
{
if (A.size() != B.size() || A.size() == 0) return -1;
int sum = 0;
for (int i = 0; i < A.size(); i++)
{
sum += abs(A[i] - B[i]);
}
return sum;
}
// Given a vector arr, find the max and min "Difference"
void PrintMaxAndMin(vector<int> arr)
{
int n = arr.size();
if (n <= 0) return;
vector<int> Amax, Amin, Bmax, Bmin {};
// for each iteration of removing n/2 elements, we find the max and min of the arrays
sort(arr.begin(), arr.end());
for (int i = 0; i < n/2; i++)
{
Amax.push_back(arr[i]);
Bmax.push_back(arr[n-i-1]);
Amin.push_back(arr[n-i-1]);
Bmin.push_back(arr[n-i-2]);
}
cout << ArrayDiff(Amax, Bmax) << " " << ArrayDiff(Amin, Bmin) << endl;
}
// Run the above functions on a vector of odd and even sizes
int main(){
vector<int> arr_even = { 4,3,2,1 };
cout << "Even Length Vector: ";
PrintMaxAndMin(arr_even);
vector<int> arr_odd = { 5,4,3,2,1 };
cout << "Odd Length Vector: ";
PrintMaxAndMin(arr_odd);
return 0;
}
Here's the working example: live example. Hope this helped.
Program output:
Even Length Vector: 4 2
Odd Length Vector: 6 2
I am a C++ student, and I need to solve this problem: "Write a program that receives a number and an array of the size of the given number. The program must find all the duplicates of the given numbers, push them back into a vector of repeating elements, and print the vector". The requirements are that I'm only allowed to use the vector library, and every repeating element of the array must be pushed to the vector only once, e.g. if my array is "1, 2, 1, 2, 3, 4...", the vector must be "1, 2".
Here's what I've done so far. My code works, but I'm unable to make it add the same duplicate to the vector of repeating elements only once.
#include <iostream>
#include <vector>
int main() {
int n;
std::cin >> n;
int* arr = new int[n];
std::vector<int> repeatedElements;
for(int i = 0; i < n; ++i) {
std::cin >> arr[i];
}
for(int i = 0; i < n; ++i) {
bool foundInRepeated = false;
for(int j = 0; j < repeatedElements.size(); ++j) {
if(arr[i] == repeatedElements[j]) {
foundInRepeated = true;
break;
}
}
if(foundInRepeated) {
continue;
} else {
for(int i = 0; i < n; ++i) {
int count = 1;
for(int j = i + 1; j < n; ++j) {
if(arr[i] == arr[j]) {
++count;
}
}
if(count > 1) {
repeatedElements.push_back(arr[i]);
}
}
}
}
for(int i = 0; i < repeatedElements.size(); ++i) {
std::cout << repeatedElements[i] << " ";
}
std::cout << std::endl;
}
Consider what you're doing here:
if(foundInRepeated) {
continue;
} else {
for(int i = 0; i < n; ++i) { // why?
If the element at some index i (from the outer loop) is not found in repeatedElements, you're again iterating through the entire array and adding every element that is repeated. But you already have an i that you're interested in and that hasn't been added to repeatedElements. You only need to iterate through j in the else branch.
Removing the line marked why? (and its closing brace) will solve the problem. Here's a demo.
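A sketch of what the corrected program could look like (as an extra change of my own, I've also swapped the raw new[] for a std::vector):
#include <cstddef>
#include <iostream>
#include <vector>

// Sketch of the corrected program; the duplicate-count scan only looks at
// positions after the current i.
int main() {
    int n;
    std::cin >> n;
    std::vector<int> arr(n);
    std::vector<int> repeatedElements;
    for (int i = 0; i < n; ++i) {
        std::cin >> arr[i];
    }
    for (int i = 0; i < n; ++i) {
        bool foundInRepeated = false;
        for (std::size_t j = 0; j < repeatedElements.size(); ++j) {
            if (arr[i] == repeatedElements[j]) {
                foundInRepeated = true;
                break;
            }
        }
        if (foundInRepeated) {
            continue;
        }
        int count = 1;
        for (int j = i + 1; j < n; ++j) {  // scan only the rest of the array for arr[i]
            if (arr[i] == arr[j]) {
                ++count;
            }
        }
        if (count > 1) {
            repeatedElements.push_back(arr[i]);  // first time this duplicate is seen
        }
    }
    for (std::size_t i = 0; i < repeatedElements.size(); ++i) {
        std::cout << repeatedElements[i] << " ";
    }
    std::cout << std::endl;
}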
It's always good to follow a plan. Dividing the bigger problem into a sequence of smaller problems is a good start. While this often does not yield an optimal solution, at least it yields a solution which is more or less straightforward, and which can subsequently be optimized if need be.
How do we find out if a number in the sequence has duplicates?
We could brute force this:
is_duplicate i = arr[i+1..arr.size() - 1] contains arr[i]
and then write ourselves a helper function like
bool range_contains(std::vector<int>::const_iterator first,
std::vector<int>::const_iterator last, int value) {
// ...
}
and use it in a simple
for (auto iter = arr.cbegin(); iter != arr.cend(); ++iter) {
if (range_contains(iter+1, arr.cend(), *iter) && !range_contains(duplicates.cbegin(), duplicates.cend(), *iter)) {
duplicates.push_back(*iter);
}
}
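The body of range_contains is left out above; a possible implementation (just a thin wrapper around std::find, my assumption) could be:
#include <algorithm>
#include <vector>

bool range_contains(std::vector<int>::const_iterator first,
                    std::vector<int>::const_iterator last, int value) {
    // Linear scan over [first, last); true if value occurs in the range.
    return std::find(first, last, value) != last;
}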
But this would be - if I am not mistaken - some O(N^2) solution.
As we know, sorting is O(N log(N)) and if we sort our array first, we will
have all duplicates right next to each other. Then, we can iterate over the sorted array once (O(N)) and we are still cheaper than O(N^2). (O(N log(N)) + O(N) is still O(N log(N))).
1 2 1 2 3 4 => sort => 1 1 2 2 3 4
Eventually, while using what we have at our disposal, this could yield to a program like this:
#include <iostream>
#include <vector>
#include <iterator>
#include <algorithm>
using IntVec = std::vector<int>;
int main(int argc, const char *argv[]) {
IntVec arr; // aka: input array
IntVec duplicates;
size_t n = 0;
std::cin >> n;
// Read n integers from std::cin
std::generate_n(std::back_inserter(arr), n,
[](){
return *(std::istream_iterator<int>(std::cin));
});
// sort the array (in ascending order).
std::sort(arr.begin(), arr.end()); // O(N*logN)
auto current = arr.cbegin();
while(current != arr.cend()) {
// std::adjacent_find() finds the next location in arr, where 2 neighbors have the same value.
current = std::adjacent_find(current,arr.cend());
if( current != arr.cend()) {
duplicates.push_back(*current);
// skip all duplicates here
for( ; current != (arr.cend() - 1) && (*current == *(current+1)); current++) {
}
}
}
// print the duplicates to std::cout
std::copy(duplicates.cbegin(), duplicates.cend(),
std::ostream_iterator<int>(std::cout, " "));
return 0;
}
I'm doing a fairly easy HackerRank test which asks the user to write a function which returns the minimum number of swaps needed to sort an unordered vector in ascending order, e.g.
Start: 1, 2, 5, 4, 3
End: 1, 2, 3, 4, 5
Minimum number of swaps: 1
I've written a function which works on 13/14 test cases, but is too slow for the final case.
#include<iostream>
#include<vector>
using namespace std;
int minimumSwaps(vector<int> arr) {
int p = 0; // Represents the (index + 1) of arr, e.g. 1, 2, ..., arr.size()
int swaps = 0;
for (vector<int>::iterator i = arr.begin(); i != arr.end(); ++i) {
p++;
if (*i == p) // Element is in the correct place
continue;
else{ // Iterate through the rest of arr until the correct element is found
for (vector<int>::iterator j = arr.begin() + p - 1; j != arr.end(); ++j) {
if (*j == p) {
// Swap the elements
int temp = *j;
*j = *i;
*i = temp;
swaps++;
break;
}
}
}
}
return swaps;
}
int main()
{
vector<int> arr = { 1, 2, 5, 4, 3 };
cout << minimumSwaps(arr);
}
How would I speed this up further?
Are there any functions I could import which could speed things up for me?
Is there a way to do this without actually swapping any elements, simply working out the minimum number of swaps, which I imagine would speed up the process?
All permutations can be broken down into cyclic subsets. Find said subsets.
Rotating a subset of K elements by 1 takes K-1 swaps.
Walk array until you find an element out of place. Walk that cycle until it completes. Advance, skipping elements that you've put into a cycle already. Sum (size-1) for each cycle.
To skip, maintain an ordered or unordered set of unexamined items, and fast remove as you examine them.
I think that gives optimal swap count in O(n lg n) or so.
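A rough sketch of that idea, assuming the input is a permutation of 1..n (as in the HackerRank problem). With a plain visited array instead of a set, each position is touched a constant number of times, so this version runs in O(n). The function name is mine, to avoid clashing with other code in this thread:
#include <vector>

// Cycle-decomposition approach: each cycle of size K contributes K-1 swaps.
int minimumSwapsByCycles(const std::vector<int>& arr) {
    int n = static_cast<int>(arr.size());
    std::vector<bool> visited(n, false);
    int swaps = 0;
    for (int i = 0; i < n; ++i) {
        if (visited[i] || arr[i] == i + 1)
            continue;                 // already in place or already part of a counted cycle
        int cycleSize = 0;
        int j = i;
        while (!visited[j]) {         // walk the cycle starting at position i
            visited[j] = true;
            j = arr[j] - 1;           // the value arr[j] belongs at index arr[j] - 1
            ++cycleSize;
        }
        swaps += cycleSize - 1;
    }
    return swaps;
}
For example, {1, 2, 5, 4, 3} has one cycle of size 2 (positions 2 and 4), so the function returns 1.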
#include <bits/stdc++.h>
#include <vector>
#include <algorithm>
using namespace std;
int minimumSwaps(vector<int> arr)
{
int i,c,j,k,l;
j=c=0;
l=k=arr.size();
while (j<k)
{
i=0;
while (i<l)
{
if (arr[i]!=i+1)
{
swap(arr[i],arr[arr[i]-1]);
c++;
}
i++;
}
k=k/2;
j++;
}
return c;
}
int main()
{
int n,q;
cin >> n;
vector<int> arr;
for (int i = 0; i < n; i++)
{
cin>>q;
arr.push_back(q);
}
int res = minimumSwaps(arr);
cout << res << "\n";
return 0;
}
I'm trying to delete all elements of an array that match a particular case.
For example:
if(ar[i]==0)
delete all elements which are 0 in the array
print out the number of elements of the remaining array after deletion
What I tried:
if (ar[i]==0)
{
x++;
}
b=N-x;
cout<<b<<endl;
This works only if I want to delete a single element every time, and I can't figure out how to delete in my required case.
I'm assuming that I need to traverse the array, select all instances of the element found, and delete all of its occurrences.
Instead of incrementing the 'x' variable only once for one occurrence, is it possible to increment it a certain number of times for a certain number of occurrences?
Edit (someone requested that I paste all of my code):
int N;
cin>>N;
int ar[N];
int i=0;
while (i<N) {
cin>>ar[i];
i++;
}//array was created and we looped through the array, inputting each element.
int a=0;
int b=N;
cout<<b; //this is for the first case (no element is deleted)
int x=0;
i=0; //now we need to subtract every other element from the array from this selected element.
while (i<N) {
if (a>ar[i]) { //we selected the smallest element.
a=ar[i];
}
i=0;
while (i<N) {
ar[i]=ar[i]-a;
i++;
//this is applied to every single element.
}
if (ar[i]==0) //in this particular case, we need to delete the ith element. fix this step.
{
x++;
}
b=N-x;
cout<<b<<endl;
i++;
}
return 0; }
the entire question is found here:
Cut-the-sticks
You could use the std::remove function.
I was going to write out an example to go with the link, but the example from the link is pretty much verbatim what I was going to post, so here's the example from the link:
// remove algorithm example
#include <iostream> // std::cout
#include <algorithm> // std::remove
int main () {
int myints[] = {10,20,30,30,20,10,10,20}; // 10 20 30 30 20 10 10 20
// bounds of range:
int* pbegin = myints; // ^
int* pend = myints+sizeof(myints)/sizeof(int); // ^ ^
pend = std::remove (pbegin, pend, 20); // 10 30 30 10 10 ? ? ?
// ^ ^
std::cout << "range contains:";
for (int* p=pbegin; p!=pend; ++p)
std::cout << ' ' << *p;
std::cout << '\n';
return 0;
}
Strictly speaking, the posted example code could be optimized to not need the pointers (especially if you're using any standard container types like a std::vector), and there's also the std::remove_if function which allows for additional parameters to be passed for more complex predicate logic.
That said, you mentioned the Cut the sticks challenge, for which I don't believe you actually need any of the remove functions (beyond normal container/array removal). Instead, you could use something like the following code to 'cut' and 'remove' according to the conditions set in the challenge (i.e. cut X from each stick, then remove it if <= 0, and print how many cuts were made on each pass):
#include <iostream>
#include <vector>
int main () {
// this is just here to push some numbers on the vector (non-C++11)
int arr[] = {10,20,30,30,20,10,10,20}; // 8 entries
int arsz = sizeof(arr) / sizeof(int);
std::vector<int> vals;
for (int i = 0; i < arsz; ++i) { vals.push_back(arr[i]); }
std::vector<int>::iterator beg = vals.begin();
unsigned int cut_len = 2;
unsigned int cut = 0;
std::cout << cut_len << std::endl;
while (vals.size() > 0) {
cut = 0;
beg = vals.begin();
while (beg != vals.end()) {
*beg -= cut_len;
if (*beg <= 0) {
beg = vals.erase(beg); // erase returns an iterator to the next element
++cut;
} else {
++beg;
}
}
std::cout << cut << std::endl;
}
return 0;
}
Hope that can help.
If you have no space bound, try something like this. Let the array be A and the value to remove be number.
create a new array B
traverse all of A and append A[i] to B[j] (incrementing j) only when A[i] != number
assign B back to A
Now A contains no number elements and its valid size is j.
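A minimal sketch of that idea (I've used std::vector for both arrays; the names A, B and number are just the ones from the description above):
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> A = {1, 0, 2, 0, 3};  // example input
    int number = 0;                        // the value to delete
    std::vector<int> B;
    for (std::size_t i = 0; i < A.size(); ++i) {
        if (A[i] != number)                // copy only elements that are not `number`
            B.push_back(A[i]);
    }
    A = B;                                 // assign B back to A
    std::cout << "remaining elements: " << A.size() << "\n";  // prints 3
}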
Check this:
#include <iostream>
using namespace std;

#define N 5
int main()
{
int ar[N] = {0,1,2,1,0};
int tar[N];
int keyEle = 0;
int newN = 0;
for(int i=0;i<N;i++){
if (ar[i] != keyEle) {
tar[newN] = ar[i];
newN++;
}
}
cout<<"Elements after deleteing key element 0: ";
for(int i=0;i<newN;i++){
ar[i] = tar[i];
cout << ar[i]<<"\t" ;
}
}
Unless there is a need to use ordinary int arrays, I'd suggest using either a std::vector or std::array, then using std::remove_if. See similar.
untested example (with c++11 lambda):
#include <algorithm>
#include <vector>
// ...
std::vector<int> arr;
// populate array somehow
arr.erase(
std::remove_if(arr.begin(), arr.end()
,[](int x){ return (x == 0); } )
, arr.end());
Solution to Cut the sticks problem:
#include <climits>
#include <iostream>
#include <vector>
using namespace std;
// Cuts the sticks by size of stick with minimum length.
void cut(vector<int> &arr) {
// Calculate length of smallest stick.
int min_length = INT_MAX;
for (size_t i = 0; i < arr.size(); i++)
{
if (min_length > arr[i])
min_length = arr[i];
}
// source_i: Index of stick in existing vector.
// target_i: Index of same stick in new vector.
size_t target_i = 0;
for (size_t source_i = 0; source_i < arr.size(); source_i++)
{
arr[source_i] -= min_length;
if (arr[source_i] > 0)
arr[target_i++] = arr[source_i];
}
// Remove superfluous elements from the vector.
arr.resize(target_i);
}
int main() {
// Read the input.
int n;
cin >> n;
vector<int> arr(n);
for (int arr_i = 0; arr_i < n; arr_i++) {
cin >> arr[arr_i];
}
// Loop until the vector becomes empty.
do {
cout << arr.size() << endl;
cut(arr);
} while (!arr.empty());
return 0;
}
With a single loop:
if(condition)
{
for(loop through array)
{
if(array[i] == 0)
{
array[i] = array[i+1]; // Check if array[i+1] is not 0
print (array[i]);
}
else
{
print (array[i]);
}
}
}
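A working single-pass variant of this idea (a sketch of my own, not exactly the shifting shown above) keeps a separate write position and copies forward only the elements to be kept; the example values and the condition != 0 are just for illustration:
#include <iostream>

int main() {
    int array[] = {1, 0, 2, 0, 3};  // example data
    int n = 5;
    int write = 0;                  // next free slot for a kept element
    for (int i = 0; i < n; ++i) {
        if (array[i] != 0) {        // keep everything that is not 0
            array[write++] = array[i];
        }
    }
    // The first `write` entries now hold the remaining elements.
    for (int i = 0; i < write; ++i)
        std::cout << array[i] << " ";
    std::cout << "\n" << write << " elements remain" << std::endl;
}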
I need to find the indices of the k largest elements of an unsorted, length n, array/vector in C++, with k < n. I have seen how to use nth_element() to find the k-th statistic, but I'm not sure if using it is the right choice for my problem, as it seems like I would need to make k calls to nth_element(), which I guess would have complexity O(kn). Is that as good as it can get, or is there a way to do this in just O(n)?
Implementing it without nth_element() seems like I will have to iterate over the whole array once, populating a list of indices of the largest elements at each step.
Is there anything in the standard C++ library that makes this a one-liner or any clever way to implement this myself in just a couple lines? In my particular case, k = 3, and n = 6, so efficiency isn't a huge concern, but it would be nice to find a clean and efficient way to do this for arbitrary k and n.
It looks like Mark the top N elements of an unsorted array is probably the closest posting I can find on SO, but the postings there are in Python and PHP.
This should be an improved version of @hazelnusse's answer, executed in O(n log k) instead of O(n log n).
#include <queue>
#include <iostream>
#include <vector>
// maxindices.cc
// compile with:
// g++ -std=c++11 maxindices.cc -o maxindices
int main()
{
std::vector<double> test = {2, 8, 7, 5, 9, 3, 6, 1, 10, 4};
std::priority_queue< std::pair<double, int>, std::vector< std::pair<double, int> >, std::greater <std::pair<double, int> > > q;
int k = 5; // number of indices we need
for (int i = 0; i < test.size(); ++i) {
if(q.size()<k)
q.push(std::pair<double, int>(test[i], i));
else if(q.top().first < test[i]){
q.pop();
q.push(std::pair<double, int>(test[i], i));
}
}
k = q.size();
std::vector<int> res(k);
for (int i = 0; i < k; ++i) {
res[k - i - 1] = q.top().second;
q.pop();
}
for (int i = 0; i < k; ++i) {
std::cout<< res[i] <<std::endl;
}
}
Output:
8
4
1
2
6
Here is my implementation that does what I want and I think is reasonably efficient:
#include <iostream>
#include <queue>
#include <vector>
// maxindices.cc
// compile with:
// g++ -std=c++11 maxindices.cc -o maxindices
int main()
{
std::vector<double> test = {0.2, 1.0, 0.01, 3.0, 0.002, -1.0, -20};
std::priority_queue<std::pair<double, int>> q;
for (int i = 0; i < test.size(); ++i) {
q.push(std::pair<double, int>(test[i], i));
}
int k = 3; // number of indices we need
for (int i = 0; i < k; ++i) {
int ki = q.top().second;
std::cout << "index[" << i << "] = " << ki << std::endl;
q.pop();
}
}
which gives output:
index[0] = 3
index[1] = 1
index[2] = 0
The question already contains part of the answer: std::nth_element rearranges the range so that the n-th element ends up in the position it would occupy if the range were sorted, with the property that none of the elements preceding it are greater than it, and none of the elements following it are less.
Therefore, just one call to std::nth_element is enough to get the k largest elements. Time complexity will be O(n), which is theoretically the smallest possible, since you have to visit each element at least once to find the smallest (or in this case the k smallest) element(s). If you need these k elements to be ordered, then you need to order them, which will be O(k log(k)). So, in total O(n + k log(k)).
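A small sketch of that single call (working on a copy so the original order is preserved; the copy and the final ordering step are my additions, not required by the approach):
#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> v = {0.2, 1.0, 0.01, 3.0, 0.002, -1.0, -20};
    int k = 3;
    std::vector<double> copy = v;  // keep the original vector untouched
    // After this call the k largest values occupy the last k positions (unordered).
    std::nth_element(copy.begin(), copy.end() - k, copy.end());
    // Optional O(k log k) step to order those k values, largest first.
    std::sort(copy.end() - k, copy.end(), std::greater<double>());
    for (auto it = copy.end() - k; it != copy.end(); ++it)
        std::cout << *it << " ";   // prints: 3 1 0.2
    std::cout << "\n";
}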
You can use the basis of the quicksort algorithm to do what you need, except instead of reordering the partitions, you can get rid of the entries falling out of your desired range.
It's been referred to as "quick select" and here is a C++ implementation:
#include <iostream>
using namespace std;

int partition(int* input, int p, int r)
{
int pivot = input[r];
while ( p < r )
{
while ( input[p] < pivot )
p++;
while ( input[r] > pivot )
r--;
if ( input[p] == input[r] )
p++;
else if ( p < r ) {
int tmp = input[p];
input[p] = input[r];
input[r] = tmp;
}
}
return r;
}
int quick_select(int* input, int p, int r, int k)
{
if ( p == r ) return input[p];
int j = partition(input, p, r);
int length = j - p + 1;
if ( length == k ) return input[j];
else if ( k < length ) return quick_select(input, p, j - 1, k);
else return quick_select(input, j + 1, r, k - length);
}
int main()
{
int A1[] = { 100, 400, 300, 500, 200 };
cout << "1st order element " << quick_select(A1, 0, 4, 1) << endl;
int A2[] = { 100, 400, 300, 500, 200 };
cout << "2nd order element " << quick_select(A2, 0, 4, 2) << endl;
int A3[] = { 100, 400, 300, 500, 200 };
cout << "3rd order element " << quick_select(A3, 0, 4, 3) << endl;
int A4[] = { 100, 400, 300, 500, 200 };
cout << "4th order element " << quick_select(A4, 0, 4, 4) << endl;
int A5[] = { 100, 400, 300, 500, 200 };
cout << "5th order element " << quick_select(A5, 0, 4, 5) << endl;
}
OUTPUT:
1st order element 100
2nd order element 200
3rd order element 300
4th order element 400
5th order element 500
EDIT
That particular implementation has an O(n) average run time; due to the way the pivot is selected, it shares quicksort's O(n^2) worst-case run time. By optimizing the pivot choice (e.g. with median-of-medians), the worst case also becomes O(n).
The standard library won't get you a list of indices (it has been designed to avoid passing around redundant data). However, if you're interested in n largest elements, use some kind of partitioning (both std::partition and std::nth_element are O(n)):
#include <iostream>
#include <algorithm>
#include <vector>
struct Pred {
Pred(int nth) : nth(nth) {};
bool operator()(int k) { return k >= nth; }
int nth;
};
int main() {
int n = 4;
std::vector<int> v = {5, 12, 27, 9, 4, 7, 2, 1, 8, 13, 1};
// Moves the nth element to the nth from the end position.
std::nth_element(v.begin(), v.end() - n, v.end());
// Reorders the range, so that the first n elements would be >= nth.
std::partition(v.begin(), v.end(), Pred(*(v.end() - n)));
for (auto it = v.begin(); it != v.end(); ++it)
std::cout << *it << " ";
std::cout << "\n";
return 0;
}
You can do this in O(n) time with a single order statistic calculation:
Let r be the k-th order statistic
Initialize two empty lists bigger and equal.
For each index i:
If array[i] > r, add i to bigger
If array[i] = r, add i to equal
Discard elements from equal until the sum of the lengths of the two lists is k
Return the concatenation of the two lists.
Naturally, you only need one list if all items are distinct. And if needed, you could do tricks to combine the two lists into one, although that would make the code more complicated.
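A sketch of those steps, using std::nth_element on a copy to obtain the k-th order statistic r (the function name, the use of a copy, and the assumption that 0 < k <= a.size() are mine):
#include <algorithm>
#include <vector>

// Two-list approach: collect indices of values strictly greater than r,
// then top up with just enough indices of values equal to r.
std::vector<int> topKIndices(const std::vector<double>& a, int k) {
    std::vector<double> copy = a;
    std::nth_element(copy.begin(), copy.end() - k, copy.end());
    double r = *(copy.end() - k);                 // the k-th largest value
    std::vector<int> bigger, equal;
    for (int i = 0; i < static_cast<int>(a.size()); ++i) {
        if (a[i] > r) bigger.push_back(i);        // definitely among the k largest
        else if (a[i] == r) equal.push_back(i);   // ties with r, may or may not be needed
    }
    equal.resize(k - static_cast<int>(bigger.size()));  // keep just enough ties to reach k
    bigger.insert(bigger.end(), equal.begin(), equal.end());
    return bigger;
}
For example, topKIndices({0.2, 1.0, 0.01, 3.0}, 2) would return the indices 3 and 1 (in unspecified order).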
Even though the following code might not fulfill the desired complexity constraints, it might be an interesting alternative to the previously mentioned priority queue.
#include <queue>
#include <vector>
#include <iostream>
#include <iterator>
#include <algorithm>
std::vector<int> largestIndices(const std::vector<double>& values, int k) {
std::vector<int> ret;
std::vector<std::pair<double, int>> q;
int index = -1;
std::transform(values.begin(), values.end(), std::back_inserter(q), [&](double val) {return std::make_pair(val, ++index); });
auto functor = [](const std::pair<double, int>& a, const std::pair<double, int>& b) { return b.first > a.first; };
std::make_heap(q.begin(), q.end(), functor);
for (auto i = 0; i < k && i<values.size(); i++) {
std::pop_heap(q.begin(), q.end(), functor);
ret.push_back(q.back().second);
q.pop_back();
}
return ret;
}
int main()
{
std::vector<double> values = { 7,6,3,4,5,2,1,0 };
auto ret=largestIndices(values, 4);
std::copy(ret.begin(), ret.end(), std::ostream_iterator<int>(std::cout, "\n"));
}