I'm using EM on an array and I wish to store the results of EM (means, sigmas, etc.) in an array. Therefore I need a Mat of Mats, since the means, sigmas, etc. are themselves in Mat form. Can somebody tell me how to initialize such an array? The size of the Mat I need is approximately 200000*3, with each column storing the means, weights and sigma values of the mixtures.
Basically, I need a 200000*3 Mat in which every element is an N*1 Mat. Any help with this, please?
parallel_for_(Range(0, npoints), LKTrackerInvoker(prevPyr[level*lvlstep1], deriveI, nextPyr[level*lvlstep2], prevPts, nextPts, status, err, winsize, criteria, level, maxlevel, flags, (float)minEigThreshold));
Above is a function in OpenCV which is used to calculate optical flow between previous pyramid images (const Mat& prevPyr) and next pyramid images (Mat& nextPyr).
Now I want to process nextPts (Point2f* nextPts), so I need the size of nextPts.
I looked through the OpenCV documentation but could not find any member function to get the size, and Point2f cannot be converted to Mat either. So, how can I get the size of a Point-type variable, and when should I use Point2f instead of Mat or InputArray? I'm not clear on what the advantages of the Point class are.
Any relevant reply would be appreciated a lot.
I'm new to OpenCV and image processing. I am trying to find moving objects using optical flow (the Lucas-Kanade method) by comparing two frames from a camera that are saved as images on disk.
The part of code I am asking about is:
Mat img = imread("t_mono1.JPG", CV_LOAD_IMAGE_UNCHANGED);//read the image
Mat img2 = imread("t_mono2.JPG", CV_LOAD_IMAGE_UNCHANGED);//read the 2nd image
Mat pts;
Mat pts2;
Mat stat;
Mat err;
goodFeaturesToTrack(img,pts,100,0.01,0.01);//extract features to track
calcOpticalFlowPyrLK(img,img2,pts,pts2,stat,err);//optical flow
cout<<" "<<pts.cols<<"\n"<<pts.rows<<"\n";
I am getting that the size of pts is 100 by 1. I suppose it should have been 100 by 2, since the pixels have x and y coordinates. Further, when I used for loops to display the contents of the arrays, the pts array was all zeros and all arrays were one-dimensional.
I have seen this question:
OpenCV's calcOpticalFlowPyrLK throws exception
I tried using vectors as he did, but I get an error when building: it says it cannot convert between different data types. I am using VS2008 with OpenCV 2.4.11.
I would like to get the (x,y) coordinates of features in the first and second images, as well as the error, but all the arrays passed to calcOpticalFlowPyrLK were one-dimensional and I don't understand how this can be.
I am implementing an algorithm in which I have to use multi-resolution analysis. The paper says I have to perform some processing at a lower scale, find some pixel locations, and then remap the pixels to the original scale. I really don't understand the remapping function in OpenCV. If anyone could help, that would be great.
Thanks.
If you want to resize a picture in OpenCV, you can do this:
Mat img = imread(picturePath, CV_LOAD_IMAGE_GRAYSCALE);
Mat resized(NEW_HEIGHT, NEW_WIDTH, CV_32FC1);
img.convertTo(img, CV_32FC1);
resize(img, resized, resized.size());
(Note that imread's second parameter is a load flag, not a pixel type, so use the named flag rather than CV_8UC1.)
If you want to access a specific pixel, use:
img.at<var>(row, col)
For a CV_32FC1 image replace "var" with "float"; for CV_8UC1 replace it with "uchar" (not "int" — the element is a single byte).
Hope this helps.
I have an array of pixel data in RGBA format, although I have already converted this data to grayscale on the GPU (so all 4 channels are identical).
I now want to use this grayscale data in OpenCV, and I don't want to store 4 copies of the same data. Is it possible to create a cv::Mat from this pixel array by specifying a stride (i.e. only reading every 4th byte)?
I am currently using
GLubyte* Img = stuff from GPU;
cv::Mat tmp(height, width, CV_8UC4, Img);
But does this copy all the data, or does it wrap the existing pointer in a cv::Mat without copying it? If it wraps without copying, then I will be happy to use standard C++ routines to copy only the data I want from Img into a new section of memory and then wrap that as a cv::Mat.
Otherwise how would you suggest doing this to reduce the amount of data being copied.
Thanks
The code that you are using
cv::Mat tmp(rows, cols, CV_8UC4, dataPointer);
does not perform any copy; it only assigns the data field of the Mat instance.
If it's ok for you to work with a matrix of 4 channels, then just go on.
Otherwise, if you prefer working with a 1-channel matrix, just use cv::cvtColor() to create a new image with a single channel (but then you will hold one additional image in memory and pay the CPU cycles for the conversion):
cv::Mat grey;
cv::cvtColor(tmp, grey, CV_RGBA2GRAY);
Finally, one last thing: if you can deinterleave the color planes beforehand (for example on the GPU) and get an image laid out as [blue plane, green plane, red plane], then you can pass CV_8UC1 as the image type when constructing tmp and get a single-channel grey image without any data copy.
I am new to image processing and OpenCV. I need to threshold my grayscale image. The image contains values between 0 and 1350, and I want to keep all values that are more than 100. I found this function in OpenCV:
cv::threshold( Src1, Last, 100, max_BINARY_value,3);
I do not know what I should write in the max_BINARY_value part, and I also do not know whether the last argument is selected correctly.
Thanks in advance.
To use cv::threshold, the signature is:
C++: double threshold(InputArray src, OutputArray dst, double thresh, double maxval, int type)
You selected your Src1, Last and your threshold of 100 correctly.
maxval is only used if you use THRESH_BINARY or THRESH_BINARY_INV as the type.
What you want is cv::THRESH_TOZERO as the type. This keeps all values above your threshold and sets all other values to zero.
Please keep in mind that it is always better to use the names of these parameters instead of their integer representations. If you read through your code in a few weeks, cv::THRESH_TOZERO tells you everything you need to know, whereas 3 is only a number.