I need to segment an image based on a simple rule (whether a value lies between two values). I'm using only STL containers (not OpenCV or other libraries) because I want to keep this free of dependencies while teaching myself C++.
I've stored my images as vector<vector<double>>. My brute-force approach is to iterate through an image with two iterators, check each value, and push the indices of the values that satisfy my condition to another vector<int>. I would have to do this until all segments are found, and every time I want to pick a segment I would iterate through the stored indices.
What is the correct way to do this?
Can this be achieved in one pass?
What is a suitable STL container for this process? I'm trying to figure it out using this flowchart. The best I could come up with was an unordered_multimap.
If you're moving elements to the end of the vector, use std::stable_partition.
#include <algorithm>
#include <numeric>
#include <vector>

std::vector<int> vec(20);
// 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
std::iota(begin(vec), end(vec), 0);
// 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
std::stable_partition(begin(vec), end(vec),
                      [](const auto& e){ return 4 <= e && e < 12; });
// 4 5 6 7 8 9 10 11 0 1 2 3 12 13 14 15 16 17 18 19
This will also work if you store the data in a single vector - use iterators for the beginning and end of the column/row instead of the entire range; a per-row sketch follows below. I've got no idea how you read 'across' the data sensibly with either a 1D or 2D vector, though!
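For instance, a minimal sketch (assuming the question's vector<vector<double>> storage; the function name and bounds are mine) that partitions each row in place:

#include <algorithm>
#include <vector>

// Move pixels with lo <= value < hi to the front of each row,
// preserving their relative order; the others follow, also in order.
void partition_rows(std::vector<std::vector<double>>& img,
                    double lo, double hi) {
    for (auto& row : img)
        std::stable_partition(row.begin(), row.end(),
                              [=](double v) { return lo <= v && v < hi; });
}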
(I wrote a very nice post about Sean Parent's gather algorithm before realising it doesn't answer the question. View the edit history if you want to move selected items to places other than the ends.)
If the image is divided into two segments, you can store the segmentation as a vector<vector<bool>> with the same size as the image.
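A minimal sketch of that idea (the function name is mine): a single pass over the image produces the mask, and picking out a segment later is just a scan of the mask.

#include <vector>

// Mark every pixel whose value lies in [lo, hi).
std::vector<std::vector<bool>> make_mask(
        const std::vector<std::vector<double>>& img, double lo, double hi) {
    std::vector<std::vector<bool>> mask(img.size());
    for (std::size_t r = 0; r < img.size(); ++r) {
        mask[r].resize(img[r].size());
        for (std::size_t c = 0; c < img[r].size(); ++c)
            mask[r][c] = (lo <= img[r][c] && img[r][c] < hi);
    }
    return mask;
}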
Here is my code; please tell me why the unordered_map is not printing from the start, while the map is printing in the correct order.
#include <bits/stdc++.h>
using namespace std;

int main() {
    unordered_map<int, int> arr;
    for (int i = 1; i <= 10; i++) {
        arr[i] = i * i;
    }
    for (auto it = arr.begin(); it != arr.end(); it++) {
        cout << it->first << " " << it->second << "\n";
    }

    cout << "normal map \n";
    map<int, int> arry;
    for (int i = 1; i <= 10; i++) {
        arry[i] = i * i;
    }
    for (auto it = arry.begin(); it != arry.end(); it++) {
        cout << it->first << " " << it->second << "\n";
    }
}
and my output is
10 100
9 81
8 64
7 49
6 36
5 25
1 1
2 4
3 9
4 16
normal map
1 1
2 4
3 9
4 16
5 25
6 36
7 49
8 64
9 81
10 100
Why does the unordered_map print the values in this fashion? Why does it not print them the way the map does?
std::unordered_map doesn't keep its keys in any particular order. This is why it is called unordered.
Internally, the elements are not sorted in any particular order, but organized into buckets. Which bucket an element is placed into depends entirely on the hash of its key. This allows fast access to individual elements, since once the hash is computed, it refers to the exact bucket the element is placed into.
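You can observe the bucket layout directly; here is a small sketch (the exact bucket numbers vary between standard library implementations):

#include <iostream>
#include <unordered_map>

int main() {
    std::unordered_map<int, int> arr;
    for (int i = 1; i <= 10; i++) arr[i] = i * i;

    // Iteration order follows the bucket layout, not the key order.
    std::cout << "bucket count: " << arr.bucket_count() << "\n";
    for (int i = 1; i <= 10; i++)
        std::cout << "key " << i << " -> bucket " << arr.bucket(i) << "\n";
}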
Sorry if this is a duplicate, but I did not find any answers which match mine.
Consider that I have a vector which contains 3 values. I want to construct another vector of a specified length from this vector. For example, let's say that the length n=3 and the vector contains the following values 0 1 2. The output that I expect is as follows:
0 0 0
0 0 1
0 0 2
0 1 0
0 1 1
0 1 2
0 2 0
0 2 1
0 2 2
1 0 0
1 0 1
1 0 2
1 1 0
1 1 1
1 1 2
1 2 0
1 2 1
1 2 2
2 0 0
2 0 1
2 0 2
2 1 0
2 1 1
2 1 2
2 2 0
2 2 1
2 2 2
My current implementation simply constructs for loops based on n and generates the expected output. I want to be able to construct output vectors of different lengths and with different values in the input vector.
I have looked at possible implementations using next_permutation, but unfortunately passing a length value does not seem to work.
Are there time- and space-efficient algorithms that one can use for this case? Again, I might have to compute this for up to n=17 and an input vector of size around 6.
Below is my implementation for n=3. Here, enc is the vector which contains the input.
vector<vector<int> > combo_3(vector<double> enc, int bw) {
    vector<vector<int> > possibles;
    for (unsigned int inner = 0; inner < enc.size(); inner++) {
        for (unsigned int inner1 = 0; inner1 < enc.size(); inner1++) {
            for (unsigned int inner2 = 0; inner2 < enc.size(); inner2++) {
                cout << inner << " " << inner1 << " " << inner2 << endl;
                unsigned int arr[] = {inner, inner1, inner2};
                vector<int> current(arr, arr + sizeof(arr) / sizeof(arr[0]));
                possibles.push_back(current);
            }
        }
    }
    return possibles;
}
What you are doing is simple counting. Think of your output as a list of lists of digits (a vector of vectors). Each digit may have one of m different values, where m is the size of your input vector.
This is not permutation generation. Generating every permutation means generating every possible ordering of an input vector, which is not what you're looking for at all.
If you think of this as a counting problem the answer may become clearer to you. For example, how would you generate all base 10 numbers with 5 digits? In that case, your input vector has size 10, and each vector in your output list has length 5.
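A minimal sketch of that counting approach (the function name is mine; it emits index tuples, which can be mapped through enc just like the question's code):

#include <vector>

// An n-digit odometer in base m: increment from the rightmost digit
// with carry, so the tuples appear in the same order as nested loops.
std::vector<std::vector<int>> all_tuples(int m, int n) {
    std::vector<std::vector<int>> out;
    std::vector<int> digits(n, 0);
    while (true) {
        out.push_back(digits);
        int pos = n - 1;
        while (pos >= 0 && ++digits[pos] == m)
            digits[pos--] = 0;              // carry into the next digit
        if (pos < 0) break;                 // overflow: all m^n tuples done
    }
    return out;
}

all_tuples(3, 3) reproduces the 27 rows above. Note that the output size is m^n, so for the worst case mentioned (n=17, input vector of size 6) that is 6^17, roughly 1.7e13 tuples - no algorithm can enumerate those quickly.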
I have an array A[]={3,2,5,11,17} and B[]={2,3,6}; the size of B is always less than the size of A. Now I have to map every element of B to a distinct element of A such that the total difference sum(abs(Bi - Aj)) becomes minimal (where Bi has been mapped to Aj). What type of algorithm is this?
For the example input, I could select 2->2=0, 3->3=0 and then 6->5=1, so the total cost is 0+0+1 = 1. I have been thinking of sorting both arrays and then taking the first size-of-B elements from A. Will this work?
It can be thought of as an unbalanced Assignment Problem.
The cost matrix shall be the absolute difference between the values of B[i] and A[j]. You can add dummy elements to B so that the problem becomes balanced, and set the costs associated with them very high.
Then the Hungarian Algorithm can be applied to solve it.
For the example case A[]={3,2,5,11,17} and B[]={2,3,6}, the cost matrix shall be:
 .    3   2   5  11  17
 2    1   0   3   9  15
 3    0   1   2   8  14
 6    3   4   1   5  11
d1   16  16  16  16  16
d2   16  16  16  16  16
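A full Hungarian implementation is too long to sketch here, but for inputs this small you can sanity-check the optimum by brute force; this sketch (my own code, not part of the answer) tries every ordering of A and pairs B with its first |B| entries:

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <limits>
#include <vector>

int main() {
    std::vector<int> A = {3, 2, 5, 11, 17};
    std::vector<int> B = {2, 3, 6};

    std::sort(A.begin(), A.end());           // start from the first permutation
    int best = std::numeric_limits<int>::max();
    do {
        int cost = 0;
        for (std::size_t i = 0; i < B.size(); ++i)
            cost += std::abs(B[i] - A[i]);   // B[i] mapped to A[i]
        best = std::min(best, cost);
    } while (std::next_permutation(A.begin(), A.end()));

    std::cout << "minimum total cost: " << best << "\n";  // prints 1
}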
I'm trying to implement an ACO for the 0/1 multidimensional knapsack problem (01MKP). My input values are from the OR-Library file mknap1.txt. According to my algorithm, first I choose an item randomly; then I calculate the probabilities for all other items on the construction graph. The probability equation depends on the pheromone level and the heuristic information:
p[i] = (tau[i] * n[i]) / Σ_j (tau[j] * n[j])
My pheromone matrix's cells all start with the same constant value (0.2). Because of this, when I try to find the next item to visit, the pheromone matrix is ineffective (every tau value is 0.2), so my probability function determines the next item from the heuristic information alone. As you know, the heuristic information equation is
n[i] = profit[i] / R_avg
(R_avg is the average of the item's resource requirements). For this reason my probability function always chooses the item with the biggest profit value. Let's say at the first iteration my algorithm randomly selected the item with profit 600; then at the second iteration it chooses the item with profit 2400. But in the OR-Library instance, the item with profit 2400 causes a resource violation. Whatever I do, the second chosen item is always the one with profit 2400.
Is there anything wrong with my algorithm? I hope people who know something about ACO can help me. Thanks in advance.
Input values:
6 10 3800                        // no. of items (n), no. of resources (m), the optimal value
100 600 1200 2400 500 2000       // profits of the items (n)
8 12 13 64 22 41                 // resource constraints matrix (m*n), one row per resource
8 12 13 75 22 41
3 6 4 18 6 4
5 10 8 32 6 12
5 13 8 42 6 20
5 13 8 48 6 20
0 0 0 0 8 0
3 0 4 0 8 0
3 2 4 0 8 4
3 2 4 8 8 4
80 96 20 36 44 48 10 18 22 24   // resource capacities
My algorithm:
for i = 0 to max_ant {
    for j = 0 to item_number {
        if j == 0
        {
            item = rand() % n
            ant[i].value += profit[item]
            ant[i].visited[j] = item
        }
        else
        {
            calculate probabilities for all the other items in P[0..n]
            find the biggest P value
            item = biggest P's item
            check if it is already in the visited list
            check if it causes a resource constraint violation
            if everything is ok:
                ant[i].value += profit[item]
                ant[i].visited[j] = item
        } // end of else
    } // next j
    update pheromone matrix => tau[a][b] = rou * tau[a][b] + deltaTau
} // next i
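One thing stands out: the pseudocode always takes the item with the biggest p value, which makes the construction deterministic once the pheromone levels are uniform. Standard ACO instead samples the next item in proportion to p[i] (roulette-wheel selection). A minimal sketch of that step (the names and signature are mine, not from the question):

#include <random>
#include <vector>

// Sample the next item with probability proportional to tau[i] * eta[i]
// over the still-feasible items. Returns -1 if nothing is feasible.
int pick_next_item(const std::vector<double>& tau,
                   const std::vector<double>& eta,
                   const std::vector<bool>& feasible,
                   std::mt19937& rng) {
    std::vector<double> weight(tau.size(), 0.0);
    double total = 0.0;
    for (std::size_t i = 0; i < tau.size(); ++i)
        if (feasible[i]) total += (weight[i] = tau[i] * eta[i]);
    if (total <= 0.0) return -1;
    std::discrete_distribution<int> dist(weight.begin(), weight.end());
    return dist(rng);
}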
When reading about forward_list in the FCD of C++11 and N2543 I stumbled over one specific overload of splice_after (slightly simplified; let cit be const_iterator):
void splice_after(cit pos, forward_list<T>& x, cit first, cit last);
The behavior is that after pos everything between (first,last) is moved to this. Thus:
this: 1 2 3 4 5 6        x: 11 12 13 14 15 16
        ^pos                   ^first   ^last
will become:
this: 1 2 13 14 3 4 5 6        x: 11 12 15 16
        ^pos                         ^first
                                        ^last
The description includes the complexity:
Complexity: O(distance(first, last))
I can see that this is because one needs to adjust PREDECESSOR(last).next = pos.next, and the forward_list does not allow this to happen in O(1).
Ok, but isn't joining two singly linked lists in O(1) one of the strengths of this simple data structure? Therefore I wonder -- is there no operation on forward_list that splices/merges/joins an arbitrary number of elements in O(1)?
The algorithm would be quite simple, of course. One would just need a name for the operation (pseudocode; updated by integrating Kerrek's answer):
temp_this = pos.next;
temp_that = last.next;
pos.next = first.next;
last.next = temp_this;
first.next = temp_that;
The result is a bit different, because it is not (first,last) that is moved, but (first,last].
this: 1 2 3 4 5 6 7        x: 11 12 13 14 15 16 17
        ^pos                     ^first      ^last
will become:
this: 1 2 13 14 15 16 3 4 5 6 7        x: 11 12 17
        ^pos       ^last                     ^first
I would think this is as reasonable an operation as the former one, and one that people might want to perform -- especially since it has the benefit of being O(1).
Am I overlooking an operation that is O(1) on many elements?
Or is my assumption wrong that (first,last] might be useful as the moved range?
Or is there an error in the O(1) algorithm?
Let me first give a corrected version of your O(1) splicing algorithm, with an example:
temp_this = pos.next;
temp_that = last.next;
pos.next = first.next;
last.next = temp_this;
first.next = temp_that;
(A sanity check is to observe that every variable appears precisely twice, once set and once got.)
Example:
    pos.next                        last.next
    v                               v
1 2 3 4 5 6 7     11 12 13 14 15 16 17 #
  ^                  ^           ^     ^
  pos                first       last  end
becomes:
This: 1 2 13 14 15 16 3 4 5 6 7
That: 11 12 17
Now we see that in order to splice up to the end of that list, we need to provide an iterator to one before the end(). However, no such iterator exists in constant time. So basically the linear cost comes from discovering the final iterator, one way or another: Either you precompute it in O(n) time and use your algorithm, or you just splice one-by-one, also in linear time.
(Presumably you could implement your own singly-linked list that would store an additional iterator for before_end, which you'd have to keep updated during the relevant operations.)
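Translated to a hypothetical raw-node list (a minimal struct of my own, not std::forward_list's API), the five assignments from the algorithm above become:

#include <cassert>

struct Node {
    int value;
    Node* next;
};

// Move the range (first, last] out of its list and insert it after pos.
// Five pointer updates, no traversal: O(1).
void splice_after(Node* pos, Node* first, Node* last) {
    assert(pos && first && last && first != last);
    Node* temp_this = pos->next;    // temp_this = pos.next
    Node* temp_that = last->next;   // temp_that = last.next
    pos->next = first->next;        // pos.next   = first.next
    last->next = temp_this;         // last.next  = temp_this
    first->next = temp_that;        // first.next = temp_that
}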
There was considerable debate within the LWG over this issue; see LWG 897 for some of its documentation.
Your algorithm fails when you pass in end() as last because it will try to use the one-past-end node and relink it into the other list. It would be a strange exception to allow end() to be used in every algorithm except this one.
Also I think first.next = &last; needs to be first.next = last.next; because otherwise last will be in both lists.