EDIT: Sorry, this turned out just to be my mistake in initializing in the code below.
const int kDigits = 7;
std::vector<int> number(kDigits);
for (int i = kDigits - 1; i >= 0; i--) {
    number[i] = i + 1;
}
The vector number is initialized to 7, 6, 5, 4, 3, 2, 1.
My goal is to generate the permutations in decreasing order:
7654321
7654312
7654231
7654213
7654132
This code works:
do {
...
}
while (std::prev_permutation(number.rbegin(), number.rend()));
However, I'm not understanding why. Since 7654321 is the largest lexicographical permutation, shouldn't while (std::prev_permutation(number.begin(), number.end())); (no reverse iterators) generate it correctly, since it would generate the previous permutation in order? However, this returns false on the first try, even though it should generate the "lower permutation."
Also, in the code shown above, since it uses reverse iterators, my mind interprets it as finding the previous permutation of 1234567 (7654321 backwards), which seems to me should have none.
Thanks so much for the help in advance! I look forward to figuring out what I misinterpreted / what I'm missing.
The vector number is initialized to 1,2,3,4,5,6,7, not 7,6,5,4,3,2,1. That's why your code works.
If you want it initialized to 7,6,5,4,3,2,1, you need to fix the initialization routine.
The first permutation, by definition, has no previous permutation in forward order; otherwise it wouldn't be first. That's why you need rbegin/rend: viewed through reverse iterators, your ascending sequence reads as the last permutation, so prev_permutation has somewhere to go.
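For example, a minimal sketch of one possible fix (assuming the goal from the question, i.e. traversal in decreasing order with plain iterators; main and the print loop are mine):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    const int kDigits = 7;
    std::vector<int> number(kDigits);
    for (int i = 0; i < kDigits; ++i) {
        number[i] = kDigits - i;  // now really 7, 6, 5, 4, 3, 2, 1
    }
    do {
        for (int d : number) std::cout << d;
        std::cout << '\n';
    } while (std::prev_permutation(number.begin(), number.end()));  // plain iterators now work
}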
I am solving a LeetCode problem Search in Rotated Sorted Array, in order to learn Binary Search better. The problem statement is:
There is an integer array nums sorted in ascending order (with distinct values). Prior to being passed to your function, nums is possibly rotated at an unknown pivot index. For example, [0,1,2,4,5,6,7] might be rotated at pivot index 3 and become [4,5,6,7,0,1,2]. Given the array nums after the possible rotation and an integer target, return the index of target if it is in nums, or -1 if it is not in nums.
With some online help, I came up with the solution below, which I mostly understand:
class Solution {
public:
    int search(vector<int>& nums, int target) {
        int l = 0, r = nums.size() - 1;
        while (l < r) { // 1st loop; how is BS applicable here, since array is NOT sorted?
            int m = l + (r - l) / 2;
            if (nums[m] > nums[r]) l = m + 1;
            else r = m;
        }
        // cout << "Lowest at: " << r << "\n";
        if (nums[r] == target) return r; // target == lowest number
        int start, end;
        if (target <= nums[nums.size() - 1]) {
            start = r;
            end = nums.size() - 1;
        } else {
            start = 0;
            end = r;
        }
        l = start, r = end;
        while (l < r) {
            int m = l + (r - l) / 2;
            if (nums[m] == target) return m;
            if (nums[m] > target) r = m;
            else l = m + 1;
        }
        return nums[l] == target ? l : -1;
    }
};
My question: Are we searching over a parabola in the first while loop, trying to find the lowest point of a parabola, unlike a linear array in traditional binary search? Are we finding the minimum of a convex function? I understand how the values of l, m and r change leading to the right answer - but I do not fully follow how we can be guaranteed that if(nums[m]>nums[r]), our lowest value would be on the right.
You actually skipped something important by “getting help”.
Once, when I was struggling to integrate something tricky for Calculus Ⅰ, I went for help and the advisor said, “Oh, I know how to do this” and solved it. I learned nothing from him. It took me another week of going over it (and other problems) myself to understand it well enough that I could do it myself.
The purpose of these assignments is to solve the problem yourself. Even if your solution is faulty, you have learned more than simply reading and understanding the basics of one example problem someone else has solved.
In this particular case...
Since you already have a solution, let’s take a look at it: Notice that it contains two binary search loops. Why?
As you observed at the beginning, the offset shift makes the array discontinuous (not convex). However, the subarrays either side of the discontinuity remain monotonic.
Take a moment to convince yourself that this is true.
Knowing this, what would be a good way to find and determine which of the two subarrays to search?
Hints:
A binary search is O(log n) (as n ⟶ ∞)
O(log n) ≡ O(2 log n)
I should also point out that the prompt's example is an arithmetic progression with a common difference of 1, but the prompt itself imposes no such restriction. All it says is that you start with a strictly increasing sequence (no duplicate values). You could have as input [19, 74, 512, 513, 3, 7, 12].
Does the supplied solution handle this possibility?
Why or why not?
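One concrete way to convince yourself is to run just the pivot-finding loop (the first while in the posted solution) on an input that is not a common-difference-of-1 progression and see where it lands. A minimal sketch (the sample input and main are mine, not from the prompt):

#include <iostream>
#include <vector>

int main() {
    // A strictly increasing sequence rotated at some pivot, values arbitrary.
    std::vector<int> nums = {19, 74, 512, 513, 3, 7, 12};

    int l = 0, r = static_cast<int>(nums.size()) - 1;
    while (l < r) {
        int m = l + (r - l) / 2;
        if (nums[m] > nums[r]) l = m + 1;  // the drop (minimum) lies strictly right of m
        else                   r = m;      // the minimum is at m or to its left
    }
    std::cout << "Lowest at index " << r << " (value " << nums[r] << ")\n";
}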
I am trying to find the one element in an array which has the minimum absolute value. For example, in the array [5.1, -2.2, 8.2, -1, 4, 3, -5, 6], I want to get the value -1. I use the following code (myarray is a 1D array and not sorted):
for (int i = 1; i < 8; ++i)
{
    if (fabsf(myarray[i]) < fabsf(myarray[0])) myarray[0] = myarray[i];
}
Then, the target value is in myarray[0].
Because I have to repeat this procedure many times, this piece of code becomes the bottleneck in my program. Does anyone know how to improve this code? Thanks in advance!
BTW, the size of the array is always eight. Could this be used to optimize this code?
Update: so far, following code works slightly better on my machine:
float absMin = fabsf(myarray[0]); int index = 0;
for (int i = 1; i < 8; ++i)
{
    if (fabsf(myarray[i]) < absMin) { absMin = fabsf(myarray[i]); index = i; }
}
float result = myarray[index];
I am wondering how to avoid fabsf, because I just want to compare the absolute values instead of computing them. Does anyone have any idea?
There are some urban myths, like manual inlining, hand-unrolled loops and similar tricks, which are supposed to make your code faster. The good news is you don't have to do any of it, at least if you compile with -O3.
The bad news is, if you already use -O3, there is nothing you can do to speed up this function: the compiler will optimize the hell out of your code! For example, it will surely cache fabsf(myarray[0]) as some have suggested. The only thing you can achieve with this kind of "refactoring" is to build bugs into your program and make it less readable.
My advice is to look somewhere else for improvements:
try to reduce the number of invocations of this code
if this code is the bottleneck, then my guess would be that you recalculate the minimal value over and over again (otherwise filling the values into the array would take approximately the same time), so cache the result of the search
shift the cost to the code that changes the elements of the array, for example by using a fancier data structure (a heap, priority_queue) or by tracking the minimum as elements change (see the sketch after this list). Let's say your array has only two elements, [1, 2], so the minimum is 1. Now if you change
2 to 3, you don't have to do anything
2 to 0, you can easily update your minimum to 0
1 to 3, you have to loop through all elements. But maybe this case does not happen that often.
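A minimal sketch of that last idea (the struct and its names are mine, not from the question): cache the index of the current minimum-|x| element and only rescan when an update could have invalidated it.

#include <cmath>

// Hypothetical wrapper around the 8 floats plus a cached index of the min-|x| element.
struct MinAbsTracker {
    float data[8] = {};
    int   minIndex = 0;

    void rescan() {
        minIndex = 0;
        for (int i = 1; i < 8; ++i)
            if (fabsf(data[i]) < fabsf(data[minIndex])) minIndex = i;
    }

    void set(int i, float value) {
        const bool  wasMin = (i == minIndex);
        const float oldMin = fabsf(data[minIndex]);
        data[i] = value;
        if (fabsf(value) <= oldMin) minIndex = i;  // new value is at least as small: O(1)
        else if (wasMin) rescan();                 // the old minimum grew: full scan needed
    }

    float min() const { return data[minIndex]; }
};

Only the case where the current minimum itself is replaced by something larger pays for a full scan; every other update is constant time.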
Can you store the values with fabsf already applied ("pre-fabbed")?
Also, as @Gerstrong mentions, storing the minimum outside the loop and only recalculating it when the array changes will give you a boost.
Calling partial_sort or nth_element will sort the array only as far as needed to put the correct value in the right location.
// Afterwards v.front() holds the element with the smallest absolute value.
std::nth_element(v.begin(), v.begin(), v.end(), [](const float& lhs, const float& rhs) {
    return fabsf(lhs) < fabsf(rhs);
});
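Since only the single smallest-|x| element is needed, std::min_element with the same comparator is a shorter alternative (a sketch of mine, not part of the original suggestion), and it leaves the array untouched:

#include <algorithm>
#include <cmath>
#include <vector>

// Returns the element with the smallest absolute value; assumes v is non-empty.
float minAbsElement(const std::vector<float>& v) {
    return *std::min_element(v.begin(), v.end(),
                             [](float lhs, float rhs) { return fabsf(lhs) < fabsf(rhs); });
}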
Let me give some ideas that could help:
float minVal = fabsf(myarray[0]);
float minElem = myarray[0];
for (int i = 1; i < 8; ++i)
{
    if (fabsf(myarray[i]) < minVal) { minVal = fabsf(myarray[i]); minElem = myarray[i]; }
}
myarray[0] = minElem; // store the original signed element, not its absolute value
But compilers nowadays are very smart and you might not get any more speed, as you already get optimized code. It depends on how your mentioned piece of code is called.
Another way to optimize this might be to use C++ and the STL, for example the typical binary search tree std::set:
// Comparator that orders floats by absolute value, for std::set
struct absless_compare
{
    bool operator()(float a, float b) const { return fabsf(a) < fabsf(b); }
};

std::set<float, absless_compare> mySet = {5.1, -2.2, 8.2, -1, 4, 3, -5, 6};
const float minVal = *mySet.begin(); // the smallest absolute value sits at the front
With this approach, the numbers are kept sorted by absolute value as you insert them. std::less is the usual comparator for a std::set, but you can swap in something different, as in this example. This might help on larger datasets, but you mentioned you only have eight values to compare, so it really will not help here.
Eight elements is a very small number, which can be kept on the stack, for example by declaring std::array<float, 8> myarray close to your search function before filling it with data. You should try such variants on your full code base and observe what helps. Of course, whether you declare std::array<float, 8> myarray or a plain float myarray[8] at runtime, you should get the same results.
What you could also check is whether fabsf really takes a float parameter and does not convert your variable to double, which would degrade performance. There is also std::abs(), which, as I understand it, picks the overload matching the data type, since C++ has overloads and templates for this.
If you don't want to use fabsf, an obvious alternative is a call like this:
float myAbs(const float val)
{
    return (val < 0) ? -val : val;
}
or you can mask off the sign bit that makes your number negative. Either way, I'm pretty sure fabsf already does something like that internally, and I don't think code like this will make it faster.
So I would check whether the argument is being converted to double. If your system has the C99 standard library, though, you should not have that issue.
One thought would be to do your comparisons "tournament" style, instead of linearly. In other words, you first compare 1 with 2, 3 with 4, etc. Then you take those 4 elements and do the same thing, and then again, until you only have one element left.
This does not change the number of comparisons. Since each comparison eliminates one element from the running, you will have exactly 7 comparisons no matter what. So why do I suggest this? Because it removes data dependencies from your code. Modern processors have multiple pipelines and can retire multiple instructions simultaneously. However, when you do the comparisons in a loop, each loop iteration depends on the previous one. When you do it tournament style, the first four comparisons are completely independent, so the processor may be able to do them all at once.
In addition to doing that, you can compute all the fabs at once in a trivial loop and put it in a new array. Since the fabs computations are independent, this can get sped up pretty easily. You would do this first, and then the tournament style comparisons to get the index. It should be exactly the same number of operations, it's just changing the order around so that the compiler can more easily see larger blocks that lack data dependencies.
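Here is a rough sketch of that layout for the fixed size of eight (the function name is mine): compute the eight absolute values in a straight-line pass, then reduce them pairwise, which is still exactly 7 comparisons.

#include <cmath>

// Tournament-style minimum of |a[0..7]|; returns the original (signed) element.
float minAbs8(const float a[8]) {
    float m[8];
    for (int i = 0; i < 8; ++i) m[i] = fabsf(a[i]);  // independent computations

    // Round 1: four independent comparisons.
    int i01 = (m[0] < m[1]) ? 0 : 1;
    int i23 = (m[2] < m[3]) ? 2 : 3;
    int i45 = (m[4] < m[5]) ? 4 : 5;
    int i67 = (m[6] < m[7]) ? 6 : 7;

    // Round 2: two independent comparisons.
    int i0123 = (m[i01] < m[i23]) ? i01 : i23;
    int i4567 = (m[i45] < m[i67]) ? i45 : i67;

    // Final round.
    return a[(m[i0123] < m[i4567]) ? i0123 : i4567];
}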
The element of an array with minimal absolute value
Let the array A be (an Eigen vector here, since the calls below use Eigen's API)
A = [5.1, -2.2, 8.2, -1, 4, 3, -5, 6]
The minimal absolute value of A is
double miniAbsValue = A.array().abs().minCoeff();
int i_minimum = 0; // to find the position of the minimum absolute value
for (int i = 0; i < 8; i++)
{
    double ftn = A(i);
    if (fabs(ftn) == miniAbsValue)
    {
        i_minimum = i;
    }
}
Now the element of A with minimal absolute value is
A(i_minimum)
I have a simple example routine below for erasing vector elements, the positions of which are stored in another vector. I've been using this method for some time now and only recently have experienced an error: Expression: vector iterator + offset out of range.
I seem to have found the problem: within the argument of the erase() call I wasn't enclosing the second part in parentheses, which occasionally resulted in the above error when erasing elements near the end of the vector.
Now I've identified and corrected the problem, I would be grateful if somebody could just confirm that my simple routine below is in fact valid and without error, and that to call erase() within a for-loop in this way is okay.
I realise this routine only works if erasing element positions in order of first to last. Please see my code below:
vector<int> mynumbers;
mynumbers.push_back(4);
mynumbers.push_back(5);
mynumbers.push_back(6);
mynumbers.push_back(7);
vector<int> delpositions;
delpositions.push_back(1);
delpositions.push_back(2);
delpositions.push_back(3);
for(unsigned int i = 0; i < delpositions.size(); ++i)
    mynumbers.erase(mynumbers.begin() + (delpositions[i] - i));
// Used to be: mynumbers.begin() + delpositions[i] - i (no parentheses), which caused the error
You do the right thing by adjusting delpositions[i] by the number of elements already erased. Just ensure delpositions is sorted ascending.
Erasing in reverse order (last to first) might be a bit more efficient.
I consider
vector<int> result;
result.reserve(mynumbers.size() - delpositions.size());
// copy valid positions to result
mynumbers.swap(result);
a better solution.
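A minimal way to fill in that sketch (the function name is mine; it assumes delpositions is sorted ascending, as in the question):

#include <vector>
using std::vector;

// Copies every element whose index is not listed in delpositions, then swaps the result in.
void eraseAtPositions(vector<int>& mynumbers, const vector<int>& delpositions) {
    vector<int> result;
    result.reserve(mynumbers.size() - delpositions.size());

    size_t next = 0;  // next entry of delpositions to skip
    for (size_t i = 0; i < mynumbers.size(); ++i) {
        if (next < delpositions.size() && static_cast<size_t>(delpositions[next]) == i)
            ++next;                          // this index is marked for deletion
        else
            result.push_back(mynumbers[i]);  // keep it
    }
    mynumbers.swap(result);
}

This copies each surviving element once instead of shifting the tail of the vector on every erase, so it is O(n) overall.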
int N = 6;
vector< vector<int> > a(N, vector<int>(3));
/* Do operation with a */
cout << (*max_element(a.begin(), a.end()))[2] << endl;
I am not sure what max_element is doing here. Can anybody help me understand it?
PS: I came across this while reviewing indy256's solution in the TopCoder practice room, while solving this problem.
Comparing lexicographically (because the elements are vectors), max_element finds the largest element in the vector a. It returns an iterator that is immediately dereferenced, giving a reference to that element. The code then calls operator[] on it, giving back the element at index 2, which is ultimately streamed to cout.
A less terse equivalent would be:
auto it = max_element(a.begin(), a.end());
int i = (*it)[2]; // better make sure the vector has at least 3 elements!
cout << i;
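For completeness, a tiny self-contained example (the sample data is mine) showing the lexicographic comparison in action:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector< vector<int> > a = { {1, 9, 4}, {3, 0, 7}, {3, 2, 5} };
    // Rows compare lexicographically: {3,2,5} > {3,0,7} > {1,9,4}.
    cout << (*max_element(a.begin(), a.end()))[2] << endl;  // prints 5
}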
I ran into these questions in a Google search... they look pretty common, but I couldn't find a decent answer. Any tips/links?
1. Remove duplicates in an array in O(n) without an extra array
2. Write a program whose printed output is an exact copy of the source. Needless to say, merely echoing the actual source file is not allowed.
(1) isn't possible unless the array is presorted. The basic answer is to keep two pointers into the array: one walking forward searching for unequal elements, and one trailing pointer. When the forward pointer encounters an unequal element, it copies that element to the trailing pointer's position and increments the trailing pointer; a quick sketch follows.
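A minimal in-place sketch of that two-pointer technique (assumes the array is sorted, as noted above; the function name is mine):

#include <cstddef>

// Compacts a sorted array so each value appears once; O(n) time, O(1) extra space.
// Returns the new logical length; elements beyond it are leftovers.
size_t dedupSorted(int a[], size_t n) {
    if (n == 0) return 0;
    size_t tail = 0;                 // index of the last unique element written
    for (size_t i = 1; i < n; ++i) {
        if (a[i] != a[tail]) {
            a[++tail] = a[i];        // copy the newly seen value just past the unique prefix
        }
    }
    return tail + 1;
}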
(2) I don't have one handy. This sounds like a pretty terrible interview question. In most interpreted languages, a 0-byte (empty) source file is valid input and prints out nothing; that should count.
For (1), you probably need more constraints than you've given. However, look up radix sort.
For (2), look up quine.
For your second question, google for quine; you will find lots of answers!
The closest you can get is to use a hashtable to store seen elements and assign each non-duplicated one to the appropriate position at the start of the array (this would leave several irrelevant ones at the end). This takes O(n) time but is not the sort of thing you want to have to write during a job interview. Alternatively, as long as the list is sorted, just check whether each element is equal to the previous one.
For 2, would just manually printing the contents of the file be allowed? (If so, the question is more than a little bit pointless.)
Edit:
Here is a fast Perl version of my solution to the first - in C++ you would have to create the hash manually:
# Return an unsorted version of an array without duplicates
sub unsortedDedup {
    my %seen, my @return;
    map {
        $seen{$_} = 1
            && push @return, $_
            unless (defined $seen{$_})
    } @_;
    @return;
}
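And a rough C++ counterpart of the same idea (using std::unordered_set as the manually created hash; the function name is mine), with the caveat that the extra table means it does not strictly satisfy the "no extra array" constraint:

#include <unordered_set>
#include <vector>

// Keeps the first occurrence of each value, preserving order; O(n) expected time.
std::vector<int> unsortedDedup(const std::vector<int>& input) {
    std::unordered_set<int> seen;
    std::vector<int> result;
    result.reserve(input.size());
    for (int x : input) {
        if (seen.insert(x).second)    // insert() reports whether the value was new
            result.push_back(x);
    }
    return result;
}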
The STL is often not an option in such interview questions, but here's one way to do #1 using the STL, although it does incur an additional sort (as explained by Terry's answer):
#include <iostream>
#include <algorithm>
#include <iterator>
int main()
{
    int a[] = { 2, 2, 3, 2, 1, 4, 1, 3, 4, 1 };
    int * end = a + sizeof(a) / sizeof(a[0]);
    std::sort(a, end);         // O(n log n)
    end = std::unique(a, end); // O(n)
    std::copy(a, end, std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;
}
Here's the result:
$ ./a.out
1 2 3 4
std::unique() is generally implemented using the same technique Terry described in his answer (see bits/stl_algo.h in g++'s STL implementation for an example of how to implement it).
For #2, there are a number of answers for different languages here: http://www.nyx.net/~gthompso/quine.htm
There is also an alternative quine in c++ here: http://npcomplete.weebly.com/1/post/2010/02/self-reproducing-c-program-quine.html