Suppose I have a row of points:
Mat_<Point> src(1,4);
src << Point(border,border), Point(border,h-border), Point(w-border,h-border), Point(w-border,border);
Now I want to pass this row to the polylines() function, which accepts an InputArrayOfArrays. In my case it should be an array of ONE array.
How do I convert a row of points to this type?
You can use another form of this function with C arrays. The outer array should have size 1, and the inner array should have size 4 and contain your points.
You can also try the same approach with a vector<vector<Point> > instead of C arrays.
In OpenCV, ArrayOfArrays usually means a vector of something (Mat or another vector).
Update: InputArrayOfArrays is also just a typedef for InputArray, so you can try passing a plain vector<Point>. That will not work for every function that requires an InputArrayOfArrays, but it should work for polylines(). I have not tested it, so please report your results.
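For illustration, a minimal untested sketch of the vector<vector<Point> > approach (the helper name drawBorder and its parameters are made up; w, h, and border are assumed to be defined as in the question):
#include <opencv2/opencv.hpp>
#include <vector>

void drawBorder(cv::Mat& image, int w, int h, int border)
{
    // polylines() wants an array of arrays, so wrap the single
    // polygon in an outer vector of size 1
    std::vector<std::vector<cv::Point> > polys(1);
    polys[0].push_back(cv::Point(border, border));
    polys[0].push_back(cv::Point(border, h - border));
    polys[0].push_back(cv::Point(w - border, h - border));
    polys[0].push_back(cv::Point(w - border, border));
    cv::polylines(image, polys, true, cv::Scalar(0, 255, 0), 2);
}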
Maybe someone will find this useful as a starting point. I spent almost a whole day looking for a solution; in the end I can draw with polylines() and fillPoly(). I have to admit it was more than a little annoying.
t2 contains 3 points, so this adds 3 points to pt:
vector<Point> pt;
for (size_t ao = 0; ao < t2.size(); ao++) {
    pt.push_back(t2.at(ao));
}
// isClosed = false, thickness = 2, lineType = 8, shift = 0
polylines(image, pt, false, Scalar(255,255,255), 2, 8, 0);
This shows how to use fillPoly, just to fill a triangle:
Point pt[1][3];
// set one point of the triangle
pt[0][0].x = yourvalue_x;
pt[0][0].y = yourvalue_y;
// set another point
pt[0][1].x = yourvalue_x;
pt[0][1].y = yourvalue_y;
// set another point
pt[0][2].x = yourvalue_x;
pt[0][2].y = yourvalue_y;
// I added the same triangle twice as a test, but ppt can contain
// many more polygons:
// const Point* ppt[2] = {pt[0], pt[1], ... };
const Point* ppt[2] = {pt[0], pt[0]};
int npt[] = {3, 3};
// ncontours = 1, and the color must be a Scalar
fillPoly(atom_image, ppt, npt, 1, Scalar(255, 0, 255));
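As a side note, here is a hedged sketch of the same fill using the C++ overload of fillPoly, which takes a vector of polygons instead of raw pointer arrays (untested, with made-up coordinates):
// one triangle, expressed as a vector of polygons
std::vector<std::vector<cv::Point> > polys(1);
polys[0].push_back(cv::Point(10, 10));
polys[0].push_back(cv::Point(100, 10));
polys[0].push_back(cv::Point(55, 90));
cv::fillPoly(atom_image, polys, cv::Scalar(255, 0, 255));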
I'm trying to teach myself OpenCV and C++, and this example program for face and eye detection includes the line:
for(size_t i = 0; i < faces.size(); i++)
I don't understand what faces.size() means, nor, following from that, at what point i can become greater than faces.size().
How does it acquire a numerical value?
I see plenty of instances of faces throughout the rest of the program, but the only time I see size is as a parameter for face_cascade.detectMultiScale. It is capitalized there, though, which makes me think it has nothing to do with faces.size().
faces.size()
Returns the size of 'faces', i.e. how many faces there are in 'faces'.
In general a basic for loop is structured like so:
for ( init; condition; increment )
{
//your code...
}
It will run as long as the condition is true, i.e. as long as 'i' is less than faces.size() (which might be '10' or some other integer value).
'i' will get bigger, as on each loop iteration 1 is added to it. This is done by the i++ instruction.
If you're struggling with loop syntax, I'd suggest that OpenCV might not be the best place to start learning C++, as a lot of the examples expect a level of competence above 'beginner' (both intentionally and unintentionally, via plain bad coding, lack of comments, etc.).
faces is being populated here:
//-- Detect faces
face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CASCADE_SCALE_IMAGE, Size(30, 30) );
According to the OpenCV documentation:
void cv::CascadeClassifier::detectMultiScale ( InputArray image,
std::vector< Rect > & objects,
double scaleFactor = 1.1,
int minNeighbors = 3,
int flags = 0,
Size minSize = Size(),
Size maxSize = Size()
)
where std::vector< Rect > & objects (faces in your case) is a
Vector of rectangles where each rectangle contains the detected
object, the rectangles may be partially outside the original image.
As you can see, objects is passed by reference to allow its modification inside the function.
Also, std::vector<Type>::size() gives you the number of elements in the vector, so i < faces.size() is necessary to keep the index i inside the bounds of the vector.
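For illustration, a typical (untested) use of faces.size() as the loop bound, assuming using namespace cv and that frame is the image being processed; each faces[i] is a cv::Rect:
for (size_t i = 0; i < faces.size(); i++)
{
    // draw a rectangle around the i-th detected face
    rectangle(frame, faces[i], Scalar(255, 0, 0), 2);
}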
I'm very new to C++ and only know the very basics: arrays, if, while, for, dynamic memory, and pointers.
I am working on some code, and here is what I want to do.
For example, on a 2D square plane (10x10) I have 1000 randomly scattered points (an array of size 1000).
The 2D square plane is divided into 10 smaller rectangles. Now I want to sort the 1000 points into these smaller rectangles. Basically, I want to make 10 dynamic arrays (one for each "small rectangle"), and each of these arrays will contain the scattered points that lie inside the corresponding region.
The most basic approach I thought of was just a chain of ifs.
But with this, I have to repeat the test 1000 times for each region, and I think that is very inefficient.
Write a function to classify a single point, i.e. determine into which region it belongs. As a simple example that you can expand:
std::size_t classify(double px, double py, double split) {
if (px < split) {
return 0; // left plane
} else {
return 1; // right plane
}
}
Then, iterate over the points and put them into respective containers:
std::vector<std::vector<point_t>> region(2); // one bucket per region
for (auto const & point : points) {
    region[classify(point.x, point.y, split)].push_back(point);
}
This way you iterate over all points once, doing one classification per point (which in your case should be possible in constant time), which is the minimum work required.
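For the 10-rectangle case from the question, a hedged sketch of the expanded classifier (assuming the plane spans [0, 10) in x and is cut into 10 vertical strips of width 1; point_t and the names below are made up):
#include <cstddef>
#include <vector>

struct point_t { double x, y; };

// map a point to one of 10 vertical strips of width 1
std::size_t classify10(double px) {
    std::size_t idx = static_cast<std::size_t>(px);
    return idx < 10 ? idx : 9;  // clamp points on the right edge
}

std::vector<std::vector<point_t>> partition(const std::vector<point_t>& points) {
    std::vector<std::vector<point_t>> regions(10);
    for (const auto& point : points)
        regions[classify10(point.x)].push_back(point);
    return regions;
}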
I'm trying to find the rigid transformation matrix between two 3D point clouds.
The two point clouds are these:
keypoints from the kinect (kinect_keypoints).
keypoints from a 3D object (box) (object_keypoints).
I have tried two options:
[1]. Implementation of the algorithm to find rigid transformation.
**1. Calculate the centroid of each point cloud.**
**2. Center the points according to the centroid.**
**3. Calculate the covariance matrix.**
**4. Calculate the rotation matrix using the SVD decomposition of the covariance matrix:**
cvSVD( &_H, _W, _U, _V, CV_SVD_U_T );
cvMatMul( _V, _U, &_R );
float _Tsrc[16] = { 1.f,0.f,0.f,0.f,
0.f,1.f,0.f,0.f,
0.f,0.f,1.f,0.f,
-_gc_src.x,-_gc_src.y,-_gc_src.z,1.f }; // 1: src points to the origin
float _S[16] = { _scale,0.f,0.f,0.f,
0.f,_scale,0.f,0.f,
0.f,0.f,_scale,0.f,
0.f,0.f,0.f,1.f }; // 2: scale the src points
float _R_src_to_dst[16] = { _Rdata[0],_Rdata[3],_Rdata[6],0.f,
_Rdata[1],_Rdata[4],_Rdata[7],0.f,
_Rdata[2],_Rdata[5],_Rdata[8],0.f,
0.f,0.f,0.f,1.f }; // 3: rotate the src points
float _Tdst[16] = { 1.f,0.f,0.f,0.f,
0.f,1.f,0.f,0.f,
0.f,0.f,1.f,0.f,
_gc_dst.x,_gc_dst.y,_gc_dst.z,1.f }; // 4: translate from src to dst
// _Tdst * _R_src_to_dst * _S * _Tsrc
mul_transform_mat( _S, _Tsrc, Rt );
mul_transform_mat( _R_src_to_dst, Rt, Rt );
mul_transform_mat( _Tdst, Rt, Rt );
[2]. Use estimateAffine3D from OpenCV.
float _poseTrans[12];
std::vector<cv::Point3f> first, second; // first --> kinect_keypoints, second --> object_keypoints
cv::Mat aff(3,4,CV_64F, _poseTrans);
cv::estimateAffine3D( first, second, aff, inliers );
float _poseTrans2[16];
for (int i=0; i<12; ++i)
{
_poseTrans2[i] = _poseTrans[i];
}
_poseTrans2[12] = 0.f;
_poseTrans2[13] = 0.f;
_poseTrans2[14] = 0.f;
_poseTrans2[15] = 1.f;
The problem with the first one is that the transformation is not correct, and with the second one, if I multiply the kinect point cloud by the resulting matrix, some values are infinite.
Is there any solution with either of these options? Or an alternative one, apart from PCL?
Thank you in advance.
EDIT: This is an old post, but an answer might be useful to someone ...
Your first approach can work in very specific cases (ellipsoidal point clouds or very elongated shapes), but is not appropriate for point clouds acquired by the Kinect. As for your second approach, I am not familiar with the OpenCV function estimateAffine3D, but I suspect it assumes the two input point clouds correspond to the same physical points, which is not the case if you used a Kinect point cloud (which contains noisy measurements) and points from an ideal 3D model (which are perfect).
You mentioned that you are aware of the Point Cloud Library (PCL) and do not want to use it. If possible, I think you might want to reconsider this, because PCL is much more appropriate than OpenCV for what you want to do (check the tutorial list, one of them covers exactly what you want to do: Aligning object templates to a point cloud).
However, here are some alternative solutions to your problem:
If your two point clouds correspond exactly to the same physical points, your second approach should work, but you can also check out Absolute Orientation (e.g. Matlab implementation)
If your two point clouds do not correspond to the same physical points, you actually want to register (or align) them and you can use either:
one of the many variants of the Iterative Closest Point (ICP) algorithm, if you know approximately the position of your object (see the Wikipedia entry).
3D feature points such as 3D SIFT, 3D SURF or NARF feature points, if you have no clue about your object's position.
Again, all these approaches are already implemented in PCL.
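For reference, here is a hedged, untested sketch of the centroid + SVD approach from option [1], written with OpenCV's C++ API (the function name rigidTransformSVD is made up, and it assumes the two clouds are already in 1:1 correspondence, i.e. src[i] matches dst[i]):
#include <opencv2/opencv.hpp>
#include <vector>

void rigidTransformSVD(const std::vector<cv::Point3f>& src,
                       const std::vector<cv::Point3f>& dst,
                       cv::Mat& R, cv::Mat& t)
{
    // 1. centroid of each cloud
    cv::Point3f cs(0,0,0), cd(0,0,0);
    for (size_t i = 0; i < src.size(); i++) { cs += src[i]; cd += dst[i]; }
    cs *= 1.0f / src.size();
    cd *= 1.0f / dst.size();

    // 2-3. covariance of the centered points
    cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
    for (size_t i = 0; i < src.size(); i++) {
        cv::Mat p = (cv::Mat_<double>(3,1) << src[i].x - cs.x, src[i].y - cs.y, src[i].z - cs.z);
        cv::Mat q = (cv::Mat_<double>(3,1) << dst[i].x - cd.x, dst[i].y - cd.y, dst[i].z - cd.z);
        H += p * q.t();
    }

    // 4. rotation from the SVD of the covariance matrix
    cv::SVD svd(H);
    R = svd.vt.t() * svd.u.t();
    if (cv::determinant(R) < 0) {        // undo a reflection, if any
        cv::Mat V = svd.vt.t();
        cv::Mat lastCol = V.col(2);
        lastCol *= -1;                   // flips the data shared with V
        R = V * svd.u.t();
    }

    // translation: dst centroid minus rotated src centroid
    cv::Mat csm = (cv::Mat_<double>(3,1) << cs.x, cs.y, cs.z);
    cv::Mat cdm = (cv::Mat_<double>(3,1) << cd.x, cd.y, cd.z);
    t = cdm - R * csm;
}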
I have a fairly simple question: how do I take one row of a cv::Mat and get all the data in a std::vector? The cv::Mat contains doubles (for the purposes of the question it can be any simple data type).
Going through the OpenCV documentation is just very confusing; unless I bookmark a page, I cannot find it twice by Googling, there's just too much of it and it's not easy to navigate.
I have found cv::Mat::at(..) for accessing a matrix element, but I remember from the C interface of OpenCV that there were at least three different ways to access elements, all of them used for different purposes... I can't remember which was used for what :/
So, while copying the matrix element by element will surely work, I am looking for a way that is more efficient and, if possible, a bit more elegant than a for loop over each row.
It should be as simple as:
m.row(row_idx).copyTo(v);
where m is a cv::Mat with CV_64F depth and v is a std::vector<double>.
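A quick untested check of the idiom (the matrix contents here are placeholders):
cv::Mat m = cv::Mat::eye(3, 3, CV_64F);
std::vector<double> v;
m.row(1).copyTo(v);  // v should now hold the second row: {0, 1, 0}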
Data in OpenCV matrices is laid out in row-major order, so that each row is guaranteed to be contiguous. That means that you can interpret the data in a row as a plain C array. The following example comes directly from the documentation:
// compute sum of positive matrix elements
// (assuming that M is double-precision matrix)
double sum=0;
for(int i = 0; i < M.rows; i++)
{
const double* Mi = M.ptr<double>(i);
for(int j = 0; j < M.cols; j++)
sum += std::max(Mi[j], 0.);
}
Therefore, the most efficient way is to construct the std::vector directly from the row pointers:
// Pointer to the i-th row
const double* p = mat.ptr<double>(i);
// Copy data to a vector. Note that (p + mat.cols) points to the
// end of the row.
std::vector<double> vec(p, p + mat.cols);
This is certainly faster than using the iterators returned by begin() and end(), since those involve extra computation to support gaps between rows.
From the documentation here, you can get a specific row through cv::Mat::row, which will return a new cv::Mat, over which you can iterate with cv::Mat::begin and cv::Mat::end. As such, the following should work:
cv::Mat m/*= initialize */;
// ... do whatever...
cv::Mat first_row(m.row(0));
std::vector<double> v(first_row.begin<double>(), first_row.end<double>());
Note that I don't know any OpenCV, but googling "OpenCV mat" led directly to the basic types documentation, and according to that, this should work fine.
The matrix iterators are random-access iterators, so they can be passed to any STL algorithm, including std::sort() .
This is also from the documentation, so you could actually do this without the intermediate cv::Mat:
cv::Mat m/*= initialize */;
// ... do whatever...
// iterators spanning the first row: begin to begin + width
std::vector<double> v(m.begin<double>(), m.begin<double>() + m.size().width);
To access more than the first row, I'd recommend the first snippet, since it will be a lot cleaner that way and there doesn't seem to be any heavy copying since the data types seem to be reference-counted.
You can also use cv::Rect:
m(cv::Rect(0, 0, m.cols, 1))
will give you the first row (note that Rect takes width and height, in that order).
matrix(cv::Rect(x0, y0, len_x, len_y))
means that you will get the sub-matrix of matrix whose upper-left corner is (x0, y0) and whose size is (len_x, len_y); note the (x, y) order rather than (row, col).
I think this works; an example:
Mat Input(480, 720, CV_64F, Scalar(100));
cropping the 1st row of the matrix:
Rect roi(Point(0, 0), Size(720, 1));
then:
std::vector<std::vector<double> > vector_of_rows;
// relies on cv::Mat's implicit conversion to std::vector<double>
vector_of_rows.push_back(Input(roi));
I am trying to write a bag-of-features image recognition system. One step in the algorithm is to take a large number of small image patches (say 7x7 or 11x11 pixels) and try to cluster them into groups that look similar. I get my patches from an image, turn them into gray-scale floating-point image patches, and then try to get cvKMeans2 to cluster them for me. I think I am having problems formatting the input data such that cvKMeans2 returns coherent results. I have used K-means for 2D and 3D clustering before, but 49-dimensional clustering seems to be a different beast.
I keep getting garbage values for the returned clusters vector, so obviously this is a garbage-in/garbage-out type problem. Additionally, the algorithm runs way faster than I think it should for such a huge data set.
In the code below the straight memcpy is only my latest attempt at getting the input data into the correct format; I spent a while using the built-in OpenCV functions, but this is difficult when your base type is CV_32FC(49).
Can OpenCV 1.1's K-means algorithm support this sort of high-dimensional analysis?
Does someone know the correct method of copying from images into the K-means input matrix?
Can someone point me to a free, non-GPL K-means algorithm I can use instead?
This isn't the best code, as I am just trying to get things working right now:
std::vector<int> DoKMeans(std::vector<IplImage *>& chunks){
// the size of one image patch, CELL_SIZE = 7
int chunk_size = CELL_SIZE*CELL_SIZE*sizeof(float);
// create the input data, CV_32FC(49) is 7x7 float object (I think)
CvMat* data = cvCreateMat(chunks.size(),1,CV_32FC(49) );
// Create a temporary vector to hold our data
// we'll copy into the matrix for KMeans
int rdsize = chunks.size()*CELL_SIZE*CELL_SIZE;
float * rawdata = new float[rdsize];
// Go through each image chunk and copy the
// pixel values into the raw data array.
vector<IplImage*>::iterator iter;
int k = 0;
for( iter = chunks.begin(); iter != chunks.end(); ++iter )
{
for( int i =0; i < CELL_SIZE; i++)
{
for( int j=0; j < CELL_SIZE; j++)
{
CvScalar val;
val = cvGet2D(*iter,i,j);
rawdata[k] = (float)val.val[0];
k++;
}
}
}
// Copy the data into the CvMat for KMeans
// I have tried various methods, but this is just the latest.
memcpy( data->data.ptr,rawdata,rdsize*sizeof(float));
// Create the output array
CvMat* results = cvCreateMat(chunks.size(),1,CV_32SC1);
// Do KMeans
int r = cvKMeans2(data, 128,results, cvTermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 1000, 0.1));
// Copy the grouping information to our output vector
vector<int> retVal;
for( int y = 0; y < (int)chunks.size(); y++ )
{
    CvScalar cvs = cvGet1D(results, y);
    int g = (int)cvs.val[0];
    retVal.push_back(g);
}
return retVal;
}
Thanks in advance!
Though I'm not familiar with "bag of features", have you considered using feature points like corner detectors and SIFT?
You might like to check out http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/ for another open source clustering package.
Using memcpy like this seems suspect, because when you do:
int rdsize = chunks.size()*CELL_SIZE*CELL_SIZE;
if CELL_SIZE and chunks.size() are very large, you are creating a very large value in rdsize. If this is bigger than the largest storable int, the multiplication overflows and you may have a problem.
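On the layout question itself: here is a hedged, untested sketch using the newer C++ API, where cv::kmeans expects an N x dims single-channel float matrix with one sample per row (the clone() call and the KMEANS_PP_CENTERS flag are my choices, not from your code):
// each 7x7 float patch becomes one 49-element row of "data"
cv::Mat data((int)chunks.size(), CELL_SIZE * CELL_SIZE, CV_32FC1);
for (int r = 0; r < (int)chunks.size(); r++) {
    cv::Mat patch = cv::cvarrToMat(chunks[r]).clone(); // wrap and compact the IplImage
    patch.reshape(1, 1).convertTo(data.row(r), CV_32F);
}
cv::Mat labels, centers;
cv::kmeans(data, 128, labels,
           cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 1000, 0.1),
           3, cv::KMEANS_PP_CENTERS, centers);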
Are you wanting to change "chunks" in this function?
I'm guessing that you don't, as this is a K-means problem.
So try passing by reference-to-const here (and generally speaking, this is what you will want to be doing). So instead of:
std::vector<int> DoKMeans(std::vector<IplImage *>& chunks)
it would be:
std::vector<int> DoKMeans(const std::vector<IplImage *>& chunks)
Also, in this case it is better to use static_cast than the old C-style casts (for example, static_cast<float>(variable) as opposed to (float)variable).
Also you may want to delete "rawdata":
float * rawdata = new float[rdsize];
can be deleted with:
delete[] rawdata;
otherwise you may be leaking memory here.
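As a sketch of an alternative (assuming the rest of the function stays the same), std::vector releases its memory automatically, so the delete[] cannot be forgotten:
// std::vector manages the buffer, so no delete[] is needed,
// even if an exception is thrown before the end of the function
std::vector<float> rawdata(rdsize);
// ... fill rawdata[k] as before ...
memcpy(data->data.ptr, &rawdata[0], rawdata.size() * sizeof(float));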