I have a std::vector<std::vector<T>> matrix. I insert elements of type T into this matrix and perform some row-wise operations on those elements. At each iteration I also need to delete the element with the minimum cost.
I created a std::priority_queue<Costelement, std::vector<Costelement>, Costelement::CompareCosts> Costvec; where:
struct Costelement
{
    int row;
    int column;
    std::vector<double> cost;

    struct CompareCosts
    {
        bool operator()(const Costelement &e1, const Costelement &e2) const
        {
            return e1.cost > e2.cost; // lexicographic comparison of the cost vectors
        }
    };
};
where row and column give the element's position in matrix and cost its associated cost. However, when I delete the element with the minimum key from matrix, the positions of the remaining elements in that row change. I tried calling std::min_element on matrix at each iteration, but that was very costly. How can this case be modeled efficiently?
A std::priority_queue is by default just a std::vector kept in heap order. It can still be expensive to insert and remove elements from the queue, and as you noticed, you would potentially need to update all of the Costelements in the queue whenever you insert or remove an element from matrix in order to reflect the new positions. However, you can make that a bit more efficient by making the priority queue two-dimensional as well, something that looks like:
std::priority_queue<std::priority_queue<Costelement, ...>, ...> cost_matrix;
Basically, the inner priority queues sort the costs of the columns within a single row, and the outer priority queue sorts whole rows by their cheapest element. Let's create ColumnCost and RowCost structs:
struct ColumnCost {
    int column;
    double cost;
    // Inverted comparison so the priority queue becomes a min-heap.
    friend bool operator<(const ColumnCost &a, const ColumnCost &b) {
        return a.cost > b.cost;
    }
};

struct RowCost {
    int row;
    std::priority_queue<ColumnCost> columns;
    // Compare rows by their cheapest column (assumes columns is non-empty).
    // Note: only operator< is available on ColumnCost, so use it directly.
    friend bool operator<(const RowCost &a, const RowCost &b) {
        return a.columns.top() < b.columns.top();
    }
};

std::priority_queue<RowCost> cost_matrix;
Now you can easily get the lowest-cost element from cost_matrix: top() returns the RowCost containing the lowest-cost element, and from that you get the ColumnCost with the lowest cost:
const auto &lowest_row = cost_matrix.top();
const auto &lowest_column = lowest_row.columns.top();
int row = lowest_row.row;
int column = lowest_column.column;
When you insert into or delete from matrix, you insert into or delete from cost_matrix in the same way. You still need to update row or column coordinates, but now it is much less work. The one thing to be aware of: if you add or remove an element in the priority queue of a RowCost, you need to remove and re-insert that whole row in cost_matrix to ensure the outer priority queue stays correctly ordered.
Another possible optimization is to use a std::priority_queue to keep the rows sorted, but use std::min_element() to keep track of the minimum of each individual row. This greatly reduces the amount of memory necessary to store the cost_matrix, and you would only need to call std::min_element() to recalculate the minimum cost element of a row when you change that row.
You may want to replace a row vector with a rope (see the rope data structure in Wikipedia).
It's a binary-tree-based structure that allows removing elements and finding the n-th element ('indexing') quite efficiently, so you don't need to update the positions stored in all of the other elements when you remove one.
I am solving a problem where, given a list of dates, we have to print the third latest date.
Input: [24-01-2001, 9-2-2068, 4-04-2019, 31-10-1943, 2-10-2013, 17-12-1990]
Output: 2-10-2013
I have written the following code for it:
#include <set>
#include <vector>
using namespace std;
struct Date
{
    int Day;
    int Year;
    int Month;
};
// comparator function used during insertion into the set
bool operator<(const Date &date1, const Date &date2)
{
    if (date1.Year < date2.Year)
        return true;
    if (date1.Year == date2.Year && date1.Month < date2.Month)
        return true;
    if (date1.Year == date2.Year && date1.Month == date2.Month && date1.Day < date2.Day)
        return true;
    return false;
}
Date ThirdLatest(std::vector<Date> &dates) {
    // Use a set to eliminate duplicate dates; operator< keeps it sorted.
    set<Date> UniqueDates;
    for (auto i : dates)
        UniqueDates.insert(i);
    // Clear the original vector and refill it from the sorted set.
    dates.clear();
    for (auto i : UniqueDates)
        dates.push_back(i);
    int DatesSize = dates.size();
    return dates[DatesSize - 3];  // assumes at least 3 unique dates
}
I was wondering about the complexity of this code, as it just uses an ordered set, with elements inserted in sorted order via the overloaded operator< instead of calling sort(). Insertion into an ordered set is O(log n) per element, so is the complexity of this code also O(log n), or am I calculating it wrong?
Also, I had one more question regarding the operator overload. I learned that when the overloaded symbol appears, the corresponding function is invoked in its place. But in this code, the symbol < is never written anywhere for the set insertion. The code works, so how is < being used here?
You have overloaded operator< because you are inserting elements into a sorted set. std::set is typically implemented as a red-black tree, a kind of self-balancing binary search tree. Since it is essentially a binary search tree, insertion of each element is of order O(log n), so inserting n elements is O(n log n). The overloaded operator< is used when searching the tree: if the new element compares less than the current node, the search descends into the left subtree, otherwise into the right one, and continues until the insertion point is found. A detailed explanation can be found here: https://www.geeksforgeeks.org/red-black-tree-set-1-introduction-2/.
Also, instead of inserting the elements into a set and then copying them back into the vector, you could have sorted the existing vector itself with std::sort(dates.begin(), dates.end()). You don't need a third argument, since you have already overloaded operator<. Refer to https://www.cplusplus.com/reference/algorithm/sort/ for more details.
Also, it is fortunate that the fields inside Date all support operator< and operator==. In general that is not guaranteed, so it is better to write any operator<() purely in terms of operator<() on its members, like:
// comparator function used during insertion in set or by sort method
bool operator<(const Date& date1, const Date & date2)
{
if(date1.Year < date2.Year)
{
return true;
}
if(date2.Year < date1.Year)
{
return false;
}
// Equality case for year
if(date1.Month < date2.Month)
{
return true;
}
if(date2.Month < date1.Month)
{
return false;
}
// Equality case for year and month
if(date1.Day < date2.Day)
{
return true;
}
if(date2.Day < date1.Day)
{
return false;
}
return false;
}
Clearing the vector and pushing all the elements back also has a cost: although clear() keeps the vector's capacity, copying every element back in one by one is an extra pass over the data, and if the vector later grows past its capacity it gets reallocated and recopied. If you do such operations, call vector::reserve() before inserting, or simply reassign the values (e.g. with assign()) instead of clearing and pushing back.
And if your code processes dates frequently and requires the entries to be ordered by date, I would recommend using a map/set instead of a vector for storing them.
Since the problem is to find the 3rd largest date, rather than sorting the whole vector or inserting everything into a set, you only need to push the elements through a priority_queue of size 3. In general, to find the Kth largest element, you maintain a priority_queue of size K. A priority queue is implemented as a heap, a completely balanced binary tree. It is not ordered like the AVL or red-black trees used in ordered sets and maps, but it is optimized for insertion, and even faster than an ordered set in this case.
By default a priority queue keeps the greatest element on top, so you need to define a comparator that places the smallest element on top, which is what this case requires.
template <class T>
class TestAscending
{
public:
    bool operator()(const T &l, const T &r) const
    {
        return r < l;  // reversed comparison: smallest element on top
    }
};

// Somewhere in the code you would define the priority queue as:
priority_queue<Date, vector<Date>, TestAscending<Date>> p;
You insert the elements of the vector into the priority_queue one by one. While the size of the queue is less than k, you can push elements without any conditional check. Once p.size() reaches k, you add a new element only when it is greater than the top element p.top() (the smallest of the k kept so far): remove the current smallest with p.pop(), then add the new one with p.push().
At the end of the program, the topmost element p.top() is the kth largest.
Since the priority queue has size K, the complexity of the program is reduced from O(n log n) to O(n log k) in the worst case. And since heap insertion is faster in practice than insertion into an ordered set, this should also run faster than using a set of size k.
I have an in-memory bidirectional graph implemented as adjacency list like so:
class MemGraph
{
    struct Node { int id; std::vector<int> from, to; };    // stores ids of adjacent edges
    struct Edge { int id, from, fromIndex, to, toIndex; }; // from, to are node ids; fromIndex, toIndex are positions within those nodes
    // Index is an open-addressing hash table using Node::id / Edge::id as key
    struct Index { int count; std::vector<int> data; };

    std::vector<Node> nodes;
    Index nodeIndex;
    std::vector<Edge> edges;
    Index edgeIndex;
};
This allows O(1) amortized time for all operations (insert, remove, search, and random access, by both id and index position). Inserts are merely appends to the respective lists. Removals are done using the "swap with last" technique (which requires some minor, but always constant, updating of the swapped element).
Is there a way to use this approach (or modified version) with files (instead of vectors) as backend while maintaining O(1) complexity?
The problem I ran into is the vectors of adjacent edges. Whenever I modify one, the whole node needs to be re-appended at the end of the file. The same applies to edges, but those always have constant size, whereas nodes can be huge (which violates the O(1) time complexity requirement).
I have the following problem:
I have a single vector that represents a 2-dimensional matrix; I have the number of rows, the number of columns, and a few other details that are irrelevant here.
// A synonym for the type of the grayvalues
typedef unsigned int grayvalue_t;
static_assert(std::numeric_limits<grayvalue_t>::max()<=
std::numeric_limits<size_t>::max(),
"grayvalue_t maximum should be smaller than size_t maximum");
// Number of rows
size_t _R;
// Number of columns
size_t _C;
// Maximum grayvalue
grayvalue_t _MAX_G;
// Pixels' grayvalues
std::vector<grayvalue_t> _pixels;
I'm asked to swap two given rows (given by indices) in O(1); the rows are contiguous blocks of memory, so that part seems manageable. But I'm also asked to swap two given columns (again by indices) in O(1) time, and in that case the columns of the matrix aren't contiguous blocks of memory in the vector.
/// swaps between rows r1 and r2
/// Time complexity: O(1)
void swap_rows(const size_t& r1, const size_t& r2) {
}
/// swaps between columns c1 and c2
/// Time complexity: O(1)
void swap_cols(const size_t& c1, const size_t& c2) {
}
Am I missing anything?
Would like to get some help.
Thanks!
Like a lot of other CS problems, the answer is: one more layer of indirection.
One option is to maintain a map from each column index of the matrix to its actual index in your vector. That is, your matrix columns will no longer always be stored in order, whereas the elements of a row remain contiguous. The map starts out mapping 0 to 0, 1 to 1, and so on. To swap two columns, you simply swap their entries in the map. If you also need to traverse the whole array row-wise, you consult the map for the order of the columns.
ALL,
This question is a continuation of this one.
I think the STL misses this functionality, but that is just my humble opinion.
Now, to the question.
Consider the following code:
class Foo
{
public:
    Foo();
    int paramA, paramB;
    std::string name;
};

struct Sorter
{
    bool operator()(const Foo &foo1, const Foo &foo2) const
    {
        switch (paramSorter)
        {
        case 1:
            return foo1.paramA < foo2.paramA;
        case 2:
            return foo1.paramB < foo2.paramB;
        default:
            return foo1.name < foo2.name;
        }
    }

    int paramSorter;
};

int main()
{
    std::vector<Foo> foo;
    Sorter sorter;
    sorter.paramSorter = 0;
    // fill the vector
    std::sort(foo.begin(), foo.end(), sorter);
}
At any given moment of time the vector can be re-sorted.
The class also has getter methods, which are used in the sorter structure.
What would be the most efficient way to insert a new element in the vector?
Situation I have is:
I have a grid (spreadsheet), that uses the sorted vector of a class. At any given time the vector can be re-sorted and the grid will display the sorted data accordingly.
Now I will need to insert a new element in the vector/grid.
I can insert, then re-sort and then re-display the whole grid, but this is very inefficient, especially for a big grid.
Any help would be appreciated.
The simple answer to the question:
template <typename T>
typename std::vector<T>::iterator
insert_sorted(std::vector<T> &vec, T const &item)
{
    return vec.insert(
        std::upper_bound(vec.begin(), vec.end(), item),
        item);
}
Version with a predicate.
template <typename T, typename Pred>
typename std::vector<T>::iterator
insert_sorted(std::vector<T> &vec, T const &item, Pred pred)
{
    return vec.insert(
        std::upper_bound(vec.begin(), vec.end(), item, pred),
        item);
}
Where Pred is a predicate that defines a strict weak ordering on type T.
For this to work the input vector must already be sorted on this predicate.
The complexity of doing this is O(log N) for the upper_bound search (finding where to insert) but up to O(N) for the insert itself.
For a better complexity you could use std::set<T> if there are not going to be any duplicates or std::multiset<T> if there may be duplicates. These will retain a sorted order for you automatically and you can specify your own predicate on these too.
There are various other things you could do which are more complex, e.g. manage a vector and a set / multiset / sorted vector of newly added items then merge these in when there are enough of them. Any kind of iterating through your collection will need to run through both collections.
Using a second vector has the advantage of keeping your data compact. Here the "newly added" items vector stays relatively small, so insertion takes O(M), where M is the size of this vector, which may be more feasible than the O(N) of inserting into the big vector every time. The merge is O(N+M), which is better than the O(NM) of inserting one element at a time; in total it is O(N+M) + O(M²) to insert M elements and then merge.
You would probably keep the insertion vector at its full capacity too, so that growing it causes no reallocations, just moves of elements.
If you need to keep the vector sorted all the time, first you might consider whether using std::set or std::multiset won't simplify your code.
If you really need a sorted vector and want to quickly insert an element into it, but do not want to enforce a sorting criterion to be satisfied all the time, then you can first use std::lower_bound() to find the position in a sorted range where the element should be inserted in logarithmic time, then use the insert() member function of vector to insert the element at that position.
If performance is an issue, consider benchmarking std::list vs std::vector. For small items, std::vector is known to be faster because of a higher cache hit rate, but the insert() operation itself is computationally faster on lists (no need to move elements around).
Just a note: you can use upper_bound as well, depending on your needs. upper_bound ensures that new entries equivalent to existing ones appear at the end of their run; lower_bound puts them at the beginning of it. This can be useful for certain implementations (for instance, classes that share a "position" but not all of their details).
Both guarantee that the vector remains sorted according to the elements' < comparison, although inserting at the lower_bound position means moving more elements.
Example:
insert 7 # lower_bound of { 5, 7, 7, 9 } => { 5, *7*, 7, 7, 9 }
insert 7 # upper_bound of { 5, 7, 7, 9 } => { 5, 7, 7, *7*, 9 }
Instead of inserting and re-sorting, you should find the position and then insert.
Keep the vector sorted (sort once). When you have to insert:
find the first element that compares greater than the one you are going to insert,
then insert just before that position.
This way the vector stays sorted.
Here is an example of how it goes.
start: {} (empty vector)
insert 1 -> first greater: end(), index 0 -> insert at 0 -> {1}
insert 5 -> first greater: end(), index 1 -> insert at 1 -> {1,5}
insert 3 -> first greater: 5 at index 1 -> insert at 1 -> {1,3,5}
insert 4 -> first greater: 5 at index 2 -> insert at 2 -> {1,3,4,5}
When you want to switch between sort orders, you can use multiple index data structures, each of which you keep in sorted order (probably some kind of balanced tree, such as a std::map that maps sort keys to vector indices, or a std::set storing pointers to your objects, each with a different comparison function).
Here's a library which does this: http://www.boost.org/doc/libs/1_53_0/libs/multi_index/doc/index.html
For every change (insertion of new elements or update of keys) you must update all index data structures, or flag them as invalid.
This works if there are not "too many" sort orders and not "too many" updates to your data structure. Otherwise: bad luck, you have to re-sort every time you want to change the order.
In other words: The more indices you need (to speed up lookup operations), the more time you need for update operations. And every index needs memory, of course.
To keep the count of indices small, you could use some query engine which combines the indices of several fields to support more complex sort orders over several fields. Like an SQL query optimizer. But that may be overkill...
Example: If you have two fields, a and b, you can support 4 sort orders:
a
b
first a then b
first b then a
with only 2 indices (numbers 3 and 4), since an index sorted on (a, b) also yields the order of a alone, and one on (b, a) yields b alone.
With more fields, the number of possible sort-order combinations grows quickly. But you can still use an index that sorts "almost as you want it" and, during the query, sort the remaining fields that the index couldn't cover. For sorted output of the whole data this doesn't help much, but if you only want to look up some elements, the initial "narrowing down" can help a lot.
Here is one I wrote for simplicity. It is horribly slow for large sets but fine for small ones. It sorts as the values are added:
#include <cstdlib>
#include <vector>
using namespace std;

void InsertionSortByValue(vector<int> &vec, int val)
{
    // Walk until the first element greater than val, then insert before it.
    for (vector<int>::iterator it = vec.begin(); it != vec.end(); ++it)
    {
        if (val < *it)
        {
            vec.insert(it, val);
            return;
        }
    }
    vec.push_back(val);  // val is the largest so far
}

int main()
{
    vector<int> vec;
    for (int i = 0; i < 20; i++)
        InsertionSortByValue(vec, rand() % 20);
}
Here is another one I found on some website. It insertion-sorts an existing vector in place:
#include <cstdlib>
#include <vector>
using namespace std;

void InsertionSortFromArray(vector<int> &vec)
{
    for (unsigned int i = 1; i < vec.size(); i++)
    {
        int elem = vec[i];
        if (elem < vec[i - 1])
        {
            // Find the insertion point, shift the tail right, drop elem in.
            for (unsigned int j = 0; j <= i; j++)
            {
                if (elem < vec[j])
                {
                    for (unsigned int k = i; k > j; k--)
                        vec[k] = vec[k - 1];
                    vec[j] = elem;
                    break;
                }
            }
        }
    }
}
int main()
{
vector<int> vec;
for (int i = 0; i < 20; i++)
vec.push_back(rand()%20);
InsertionSortFromArray(vec);
}
Assuming you really want to use a vector, and the sort criterion or keys don't change (so the order of already inserted elements always stays the same):
Insert the element at the end, then move it toward the front one step at a time, until the preceding element is no longer bigger.
It can't be done faster in terms of asymptotic complexity ("big O notation"), because you must move all of the bigger elements. That is also why the STL doesn't provide this: it is inefficient on vectors, and you shouldn't use a vector if you need this operation.
Edit: Another assumption: comparing the elements is not much more expensive than moving them. See comments.
Edit 2: As my first assumption doesn't hold (you want to change the sort criterion), scrap this answer and see my other one: https://stackoverflow.com/a/15843955/1413374