Using dsift of the vlfeat library with cv::Mat - C++

I am currently trying to use the dsift algorithm of the vlfeat library. But no matter which values I create the filter with (sample step, bin size), it returns the same number of keypoints for every frame during an execution (consecutive frames are different, coming from a camera). The documentation on C or C++ usage is very thin, and I could not find any good examples for these languages.
Here is the relevant code:
// create filter
vlf = vl_dsift_new_basic(320, 240, 1, 3);
// transform image in cv::Mat to float vector
std::vector<float> imgvec;
for (int i = 0; i < img.rows; ++i){
    for (int j = 0; j < img.cols; ++j){
        imgvec.push_back(img.at<unsigned char>(i,j) / 255.0f);
    }
}
// call processing function of vl
vl_dsift_process(vlf, &imgvec[0]);
// echo number of keypoints found
std::cout << vl_dsift_get_keypoint_num(vlf) << std::endl;

it returns the same number of keypoints for every frame during an execution

This is normal for a dense SIFT implementation, since the number of extracted keypoints depends only on the input geometrical parameters[1], i.e. the step and the image size.
See the documentation:
The feature frames (keypoints) are indirectly specified by the sampling steps (vl_dsift_set_steps) and the sampling bounds (vl_dsift_set_bounds).
[1]: vl_dsift_get_keypoint_num returns self->numFrames, which is only updated by _vl_dsift_update_buffers, and that function uses geometrical information only (bounds, steps and bin sizes).
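For illustration, here is a minimal sketch (mine, not from the question) showing that only the geometry drives the count: with the same 320x240 frame, changing the step changes the number of keypoints, while changing the image content with a fixed geometry does not.

#include <vl/dsift.h>
#include <cstdio>

// frame: 320x240 grayscale image, row-major floats in [0,1]
void print_keypoint_counts(const float *frame)
{
    for (int step = 1; step <= 4; ++step) {
        VlDsiftFilter *f = vl_dsift_new_basic(320, 240, step, 3);
        vl_dsift_process(f, frame);
        // same count for any frame content, different count for each step
        std::printf("step=%d -> %d keypoints\n", step, vl_dsift_get_keypoint_num(f));
        vl_dsift_delete(f);
    }
}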

Related

I am getting memory errors even though I don't manage memory manually in C++

I am trying to find the brightest point in an image and make a kind of "map of brightness" with the function below. It works fine with one image, but when I try to run the function for all images in a directory it makes 19 iterations and then crashes with different memory errors.
I use Qt Creator 4.5 based on Qt 5.10 with the OpenCV library.
free(): corrupted unsorted chunks
free(): invalid next size (normal)
malloc_consolidate(): invalid chunk size
I understand that I have memory problems but can't find where they come from. I am new to programming and didn't use any manual memory management in this program.
Here is the function where the problem appears. It usually crashes at the line with convertTo().
cv::Mat Plot::statistic(const std::vector<cv::KeyPoint> &kp, const size_t dist, cv::Mat output_size) {
    cv::Mat result = cv::Mat::zeros(output_size.size(), CV_32F);
    for(int i = 0, imax = kp.size(); i < imax; ++i) {
        cv::KeyPoint curr_kp = kp[i];
        for(size_t m = curr_kp.pt.x - dist/2; m < curr_kp.pt.x + dist/2; ++m)
            for(size_t n = curr_kp.pt.y - dist/2; n < curr_kp.pt.y + dist/2; ++n)
                if(abs(cv::norm(cv::Point2f(m,n) - curr_kp.pt)) < dist/2)
                    result.at<float>(n,m) += 255.f/kp.size();
    }
    double maxVal = 1;
    cv::Point point_max;
    cv::minMaxLoc(result, NULL, &maxVal, NULL, &point_max);
    result.convertTo(result, CV_8U, 255.f/maxVal);
    return result;
}
How I'm calling it:
for(int i = 2, imax = plot_analyse.size(); i < imax; i++) {
    QString curr_img = input_image.absoluteFilePath(plot_analyse[i]);
    QString output_path = output_dir + "/pltnst_" + curr_img.section("/", -1); // Setting output folder
    mt_make.Make_analysis(curr_img, 5000);                        // Finding up to 5000 keypoints on image
    std::vector<cv::KeyPoint> original_kp = mt_make.Keypoints();  // Make a variable for keypoints
    if (original_kp.empty()) { continue; }
    cv::Mat output_mat = Do.Open_image(curr_img.toStdString());   // Opening current image
    output_mat = make_maps.statistic(original_kp, 100, output_mat); // Calling function
    cv::imwrite(output_path.toStdString(), output_mat);           // Saving ready brightness map
}
The input folder has 500+ gray images at 1000x1000 resolution. I expect all these images to get brightness maps in the output folder, along with the point of maximum brightness for each.
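A likely culprit (my guess, not part of the original post): the loops in statistic() use size_t indices, so curr_kp.pt.x - dist/2 wraps around to a huge value whenever a keypoint lies within dist/2 of the left or top border, and result.at<float>(n,m) can also run past the right/bottom border, writing outside the allocation and corrupting the heap. A minimal bounds-checked sketch of that inner loop (helper name and signature are mine):

#include <opencv2/opencv.hpp>
#include <algorithm>

// Accumulate one keypoint's disc into `result`, clamped to the image borders,
// using signed indices so that pt.x - dist/2 cannot wrap around.
void accumulate_keypoint(cv::Mat &result, const cv::KeyPoint &curr_kp, int dist, float weight)
{
    int x0 = std::max(0, (int)curr_kp.pt.x - dist/2);
    int x1 = std::min(result.cols, (int)curr_kp.pt.x + dist/2);
    int y0 = std::max(0, (int)curr_kp.pt.y - dist/2);
    int y1 = std::min(result.rows, (int)curr_kp.pt.y + dist/2);
    for (int m = x0; m < x1; ++m)
        for (int n = y0; n < y1; ++n)
            if (cv::norm(cv::Point2f((float)m, (float)n) - curr_kp.pt) < dist/2.0)
                result.at<float>(n, m) += weight;
}

Inside statistic() this would be called as accumulate_keypoint(result, kp[i], dist, 255.f/kp.size());.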

Replace a chain of image blurs with one blur

In this question I asked how to implement a chain of blurs in one single step.
Then I found out from the gaussian blur page of Wikipedia that:
Applying multiple, successive Gaussian blurs to an image has the same effect as applying a single, larger Gaussian blur, whose radius is the square root of the sum of the squares of the blur radii that were actually applied. For example, applying successive Gaussian blurs with radii of 6 and 8 gives the same result as applying a single Gaussian blur of radius 10, since sqrt(6^2 + 8^2) = 10.
So I thought that blur and singleBlur were the same in the following code:
cv::Mat firstLevel;
float sigma1, sigma2;
//initialize firstLevel, sigma1 and sigma2
cv::Mat blur = gaussianBlur(firstLevel, sigma1);
blur = gaussianBlur(blur, sigma2);
float singleSigma = std::sqrt(std::pow(sigma1,2)+std::pow(sigma2,2));
cv::Mat singleBlur = gaussianBlur(firstLevel, singleSigma);
cv::Mat diff = blur != singleBlur;
// Equal if no elements disagree
assert( cv::countNonZero(diff) == 0);
But this assert fails (and actually, for example, the first row of blur is different from the first row of singleBlur).
Why?
UPDATE:
Following several comments asking for more information, I'll update the question.
What I'm trying to do is to parallelize this code. In particular, I'm focusing now on computing all the blurs at all levels in advance. The serial code (which works correctly) is the following:
vector<Mat> blurs ((par.numberOfScales+3)*levels, Mat());
cv::Mat octaveLayer = firstLevel;
int scaleCycles = par.numberOfScales+2;
//compute blurs at all layers (not parallelizable)
for(int i=0; i<levels; i++){
    blurs[i*scaleCycles+1] = octaveLayer.clone();
    for (int j = 1; j < scaleCycles; j++){
        float sigma = par.sigmas[j]* sqrt(sigmaStep * sigmaStep - 1.0f);
        blurs[j+1+i*scaleCycles] = gaussianBlur(blurs[j+i*scaleCycles], sigma);
        if(j == par.numberOfScales)
            octaveLayer = halfImage(blurs[j+1+i*scaleCycles]);
    }
}
Where:
Mat halfImage(const Mat &input)
{
    Mat n(input.rows/2, input.cols/2, input.type());
    float *out = n.ptr<float>(0);
    for (int r = 0, ri = 0; r < n.rows; r++, ri += 2)
        for (int c = 0, ci = 0; c < n.cols; c++, ci += 2)
            *out++ = input.at<float>(ri,ci);
    return n;
}

Mat gaussianBlur(const Mat input, const float sigma)
{
    Mat ret(input.rows, input.cols, input.type());
    int size = (int)(2.0 * 3.0 * sigma + 1.0); if (size % 2 == 0) size++;
    GaussianBlur(input, ret, Size(size, size), sigma, sigma, BORDER_REPLICATE);
    return ret;
}
I'm sorry for the horrible indices above, but I tried to respect the original code's scheme (which is horrible, e.g. it starts counting from 1 instead of 0). The code above has scaleCycles=5 and levels=6, so 30 blurs are generated in total.
This is the "single blur" version, where I first compute the sigma for each blur that has to be applied (following Wikipedia's formula) and then apply the blur (notice that this is still serial and not parallelizable):
vector<Mat> singleBlurs ((par.numberOfScales+3)*levels, Mat());
vector<float> singleSigmas(scaleCycles);
float acc = 0;
for (int j = 1; j < scaleCycles; j++){
    float sigma = par.sigmas[j]* sqrt(sigmaStep * sigmaStep - 1.0f);
    acc += pow(sigma, 2);
    singleSigmas[j] = sqrt(acc);
}

octaveLayer = firstLevel;
for(int i=0; i<levels; i++){
    singleBlurs[i*scaleCycles+1] = octaveLayer.clone();
    for (int j = 1; j < scaleCycles; j++){
        float sigma = singleSigmas[j];
        std::cout<<"j="<<j<<" sigma="<<sigma<<std::endl;
        singleBlurs[j+1+i*scaleCycles] = gaussianBlur(singleBlurs[j+i*scaleCycles], sigma);
        if(j == par.numberOfScales)
            octaveLayer = halfImage(singleBlurs[j+1+i*scaleCycles]);
    }
}
Of course the code above also generates 30 blurs, with the same parameters as the previous version.
And then this is the code to check the difference between each singleBlurs entry and the corresponding blurs entry:
assert(blurs.size() == singleBlurs.size());
vector<Mat> blurDiffs(blurs.size());
for(int i=1; i<levels*scaleCycles; i++){
    cv::Mat diff;
    absdiff(blurs[i], singleBlurs[i], diff);
    std::cout<<"i="<<i<<" diff rows="<<diff.rows<<" cols="<<diff.cols<<std::endl;
    blurDiffs[i] = diff;
    std::cout<<"blurs rows="<<blurs[i].rows<<" cols="<<blurs[i].cols<<std::endl;
    std::cout<<"singleBlurs rows="<<singleBlurs[i].rows<<" cols="<<singleBlurs[i].cols<<std::endl;
    std::cout<<"blurDiffs rows="<<blurDiffs[i].rows<<" cols="<<blurDiffs[i].cols<<std::endl;
    namedWindow( "blurDiffs["+std::to_string(i)+"]", WINDOW_AUTOSIZE ); // Create a window for display.
    //imshow( "blurDiffs["+std::to_string(i)+"]", blurDiffs[i] );       // Show our image inside it.
    //waitKey(0);                                                       // Wait for a keystroke in the window
    Mat imageF_8UC3;
    std::cout<<"converting..."<<std::endl;
    blurDiffs[i].convertTo(imageF_8UC3, CV_8U, 255);
    std::cout<<"converted"<<std::endl;
    imwrite( "blurDiffs_"+std::to_string(i)+".jpg", imageF_8UC3);
}
Now, what I've seen is that blurDiffs_1.jpg and blurDiffs_2.jpg are black, but then, from blurDiffs_3.jpg up to blurDiffs_29.jpg, the images become whiter and whiter. For some reason, blurDiffs_30.jpg is almost completely black.
The first (correct) version generates 1761 descriptors. The second (incorrect) version generates more than 2.3k descriptors.
I can't post the blurDiffs matrices because they (especially the first ones) are very big and the post's space is limited. I'll post some samples, but not blurDiffs_1.jpg and blurDiffs_2.jpg because they're totally black. Notice that because of halfImage the images become smaller and smaller (as expected).
blurDiffs_3.jpg, blurDiffs_6.jpg, blurDiffs_15.jpg, blurDiffs_29.jpg: (sample difference images omitted here)
How the image is read:
Mat tmp = imread(argv[1]);
Mat image(tmp.rows, tmp.cols, CV_32FC1, Scalar(0));
float *out = image.ptr<float>(0);
unsigned char *in = tmp.ptr<unsigned char>(0);
for (size_t i = tmp.rows*tmp.cols; i > 0; i--)
{
    *out = (float(in[0]) + in[1] + in[2])/3.0f;
    out++;
    in += 3;
}
Someone here suggested dividing diff by 255 to see the real difference, but I don't understand why, or whether I understood him correctly.
If you need any more details, please let me know.
A warning up front: I have no experience with OpenCV, but the following is relevant to computing Gaussian blur in general, and it appears applicable here judging from a glance at the OpenCV documentation regarding border treatment and the use of finite kernels (FIR filtering).

1. Your initial test was sensitive to round-off, but you have cleared up that issue and shown the errors to be much larger.

2. Beware image border effects. For pixels near the edge, the image is virtually extended using one of the offered methods (BORDER_DEFAULT, BORDER_REPLICATE, etc.). If your image is |abcd| and you use BORDER_REPLICATE, you are effectively filtering an extended image aaaa|abcd|dddd. The result is klmn|opqr|stuv. The new pixel values (k,l,m,n,s,t,u,v) are immediately discarded to yield the output image |opqr|. If you now apply another Gaussian blur, this blur will operate on a newly extended image oooo|opqr|rrrr - different from the "true" intermediate result - and thus give you a result different from that obtained by a single Gaussian blur with a larger sigma. The following extension methods are safe, though: REFLECT, REFLECT_101, WRAP.

3. When using a finite kernel size, the G(s1)*G(s2) = G(sqrt(s1^2+s2^2)) rule does not hold in general, because the tails of the kernels are cut off. You can reduce the error thus introduced by increasing the kernel size relative to the sigma, e.g.:

int size = (int)(2.0 * 10.0 * sigma + 1.0); if (size % 2 == 0) size++;

Point 3 seems to be the issue that is "biting" you. But do you really care whether the G(s1)*G(s2) property is preserved? Both results are wrong in a way. Does it affect the methodology that works on the result in a major way? Note that the example of using 10x sigma given above may resolve the difference, but will be very much slower.
Update: I forgot to add what might be the most practical resolution: compute the Gaussian blur using a Fourier transform. The scheme would be:
Compute Fourier transform (FFT) of your input image
Multiply with the Fourier transform of the Gaussian kernel and compute inverse Fourier transform. Ignore the imaginary part of the complex output.
You can find the equation for the Gaussian in the frequency domain on Wikipedia.
You can perform the second step separately (i.e. in parallel) for each scale (sigma). The border condition implied by computing the blur this way is BORDER_WRAP. If you prefer, you can achieve the same with BORDER_REFLECT by using a discrete cosine transform (DCT) instead. I do not know whether OpenCV provides one; you would be after the DCT-II.
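A minimal sketch of that scheme with OpenCV's dft/mulSpectrums (my illustration, not part of the original answer; it assumes a single-channel CV_32F image and uses the frequency-domain Gaussian exp(-2*pi^2*sigma^2*(fx^2+fy^2))):

#include <opencv2/opencv.hpp>
#include <cmath>

cv::Mat fftGaussianBlur(const cv::Mat &img, float sigma)
{
    CV_Assert(img.type() == CV_32FC1);

    // forward DFT, full complex spectrum
    cv::Mat spectrum;
    cv::dft(img, spectrum, cv::DFT_COMPLEX_OUTPUT);

    // Fourier transform of the Gaussian, sampled at the normalized frequencies
    cv::Mat kernel(img.size(), CV_32FC2);
    for (int y = 0; y < img.rows; ++y) {
        float fy = (y <= img.rows/2 ? y : y - img.rows) / (float)img.rows;
        for (int x = 0; x < img.cols; ++x) {
            float fx = (x <= img.cols/2 ? x : x - img.cols) / (float)img.cols;
            float g = std::exp(-2.0f * (float)(CV_PI*CV_PI) * sigma*sigma * (fx*fx + fy*fy));
            kernel.at<cv::Vec2f>(y, x) = cv::Vec2f(g, 0.0f);
        }
    }

    // per-element complex multiplication, then inverse DFT (real output)
    cv::Mat filtered, result;
    cv::mulSpectrums(spectrum, kernel, filtered, 0);
    cv::dft(filtered, result, cv::DFT_INVERSE | cv::DFT_SCALE | cv::DFT_REAL_OUTPUT);
    return result;
}

Since the multiplication happens in the frequency domain, the kernel is never truncated, and the implied border handling is BORDER_WRAP as noted above.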
It's basically what G.M. says. Remember that you are not only rounding with floating-point arithmetic, you are also discretizing by looking only at integer points (both on the image and on the Gaussian kernels).
Here's what I got from a small (41x41) image (the comparison image is not reproduced here): blur and single are rounded by convertTo(..., CV_8U), and diff marks where they differ. So, in DSP terms, it may not be great agreement. But in image processing, it's not that bad.
Also, I suspect that the difference will be less significant when you apply the Gaussian to bigger images.

How do I update this Neural Net to use image pixel data

I'm learning Neural Networks from this bytefish machine learning guide and code. I understand it well but I would like to update the code at the previous link to use image pixel data instead of random values as the input data. In this section of the aforementioned code:
cv::randu(trainingData,0,1);
cv::randu(testData,0,1);
the training and test matrices are filled with random data. Then label data is added to the classes matrices here:
cv::Mat trainingClasses = labelData(trainingData, eq);
cv::Mat testClasses = labelData(testData, eq);
using this function:
// label data with equation
cv::Mat labelData(cv::Mat points, int equation) {
    cv::Mat labels(points.rows, 1, CV_32FC1);
    for(int i = 0; i < points.rows; i++) {
        float x = points.at<float>(i,0);
        float y = points.at<float>(i,1);
        labels.at<float>(i, 0) = f(x, y, equation);
        // the f() function used above is only a case statement
        // with 5 switches in it, e.g. one of the cases is:
        // case 0:
        //     return y > sin(x*10) ? -1 : 1;
        //     break;
    }
    return labels;
}
Then points are plotted in a window here:
plot_binary(trainingData, trainingClasses, "Training Data");
plot_binary(testData, testClasses, "Test Data");
with this function:
// Plot Data and Class function
void plot_binary(cv::Mat& data, cv::Mat& classes, string name) {
    cv::Mat plot(size, size, CV_8UC3);
    plot.setTo(cv::Scalar(255.0,255.0,255.0));
    for(int i = 0; i < data.rows; i++) {
        float x = data.at<float>(i,0) * size;
        float y = data.at<float>(i,1) * size;
        if(classes.at<float>(i, 0) > 0) {
            cv::circle(plot, Point(x,y), 2, CV_RGB(255,0,0),1);
        } else {
            cv::circle(plot, Point(x,y), 2, CV_RGB(0,255,0),1);
        }
    }
    imshow(name, plot);
}
The plotted points, as I understand it, represent the input data evaluated by the equations in the f() function, and they are used by the predict functions to predict which point to plot in the mlp, knn, svm etc. functions. How do I update what is going on here to do something with image pixel data? Any advice to get me further would be appreciated.
"How do I update what is going on here to do something with Image pixel data" is a broad and generic question. May I ask in exchange: what do you want to do with "Image pixel data"?
Do you want an answer to: what can be done with "Image pixel data" on machine learning algorithms like ANN, SVM etc. ?
The answer is a loooong list of things encompassing thousands of research papers and hundreds of PhD theses. Some examples include: supervised and/or un-supervised classification of images into labels/tags/categories based on features like image content, objects in image, patterns in image etc. The possibilities are endless. You may perhaps want to take a look at this: http://stuff.mit.edu/afs/athena/course/urop/profit/PDFS/EdwardTolson.pdf
Now, coming back to you original objective: "I would like to update the code at the previous link to use image pixel data instead of random values as the input data"...
The implementation technique depends largely on what you want to do. I can cite one or two easy techniques for extracting feature vectors from an image, which can be fed into any machine learning algorithm of your choice...
Example 1:
You may start with using pixel intensity data as a feature vector. Here's how you may go ahead with it:
Load image using
Mat image = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
Resize the image to a smaller size using resize. You may want to begin with small image sizes, like 8x8 or 10x10 pixels.
Loop through the image matrix, somewhat like this:
for(int row = 0; row < img.rows; ++row)
{
    uchar* p = img.ptr(row);
    for(int col = 0; col < img.cols; ++col)
    {
        *p++; // points to each pixel value in turn assuming a CV_8UC1 greyscale image
    }
}
A collection of all the pixel values will give you a feature vector for that image.
Now suppose you have two classes of images. For each set of feature vectors you generate, you'll have to prepare (for supervised classification) a corresponding label Mat (somewhat like the example you've mentioned). It needs to contain the class label (say, 0 or 1) for each of the feature vectors in your feature Mat.
Now feed the feature vectors and label Mat to your machine learning code and see what happens.
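As a rough sketch of those steps (the file list, labels and the buildTrainingSet name are placeholders of mine; flags follow the OpenCV 2.x style used above): one resized, flattened grayscale image goes into each row of the feature Mat, with a matching row in the label Mat.

#include <opencv2/opencv.hpp>
#include <vector>
#include <string>

// files:  paths of the training images (placeholder input)
// labels: one class label per image, e.g. 0 or 1
void buildTrainingSet(const std::vector<std::string> &files,
                      const std::vector<int> &labels,
                      cv::Mat &trainingData, cv::Mat &trainingClasses)
{
    const int side = 8;   // resize to 8x8 -> 64 features per image
    trainingData.create((int)files.size(), side*side, CV_32FC1);
    trainingClasses.create((int)files.size(), 1, CV_32FC1);

    for (size_t i = 0; i < files.size(); ++i) {
        cv::Mat img = cv::imread(files[i], CV_LOAD_IMAGE_GRAYSCALE);
        cv::Mat small, row;
        cv::resize(img, small, cv::Size(side, side));
        small.reshape(1, 1).convertTo(row, CV_32FC1, 1.0/255.0); // one flattened row in [0,1]
        row.copyTo(trainingData.row((int)i));
        trainingClasses.at<float>((int)i, 0) = (float)labels[i];
    }
}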
However, the ability to classify images based on pixel data alone is quite limited. There are thousands of techniques for extracting image features, most of which depend on the application area.
Example 2:
I'll finish off with one more example for extracting feature vectors, which, in some cases, will prove to be more effective than simple image pixel values.
You may use the Histogram of Oriented Gradients (HOG) descriptor for slightly better results; use this:
cv::HOGDescriptor hog;
vector<float> descriptors;
hog.compute(mat, descriptors);
The vector descriptors is your feature vector.
HOGDescriptors, when used with SVM, provides a decent classification mechanism.
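As a hedged sketch (mine, not part of the answer above), each computed HOG descriptor can become one row of a training Mat; note that the default HOGDescriptor window is 64x128, so the image is resized to hog.winSize first.

#include <opencv2/opencv.hpp>
#include <vector>

// Returns a 1xN CV_32FC1 row that can be appended to a training Mat.
cv::Mat hogFeatureRow(const cv::Mat &gray)
{
    cv::HOGDescriptor hog;                  // default 64x128 detection window
    cv::Mat resized;
    cv::resize(gray, resized, hog.winSize); // compute one descriptor for the whole window
    std::vector<float> descriptors;
    hog.compute(resized, descriptors);
    return cv::Mat(descriptors, true).reshape(1, 1); // deep copy, reshaped to one row
}

Rows can then be collected with trainingData.push_back(hogFeatureRow(img)); for each image.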
You can put the pixel data of an image into a Mat called trainingData using something similar to this:
cv::Mat labelData(cv::Mat points, int equation)
{
    cv::Mat labels(points.rows, 1, CV_32FC1);
    for(int i = 0; i < points.rows; i++)
    {
        float x = points.at<float>(i,0);
        float y = points.at<float>(i,1);
        labels.at<float>(i, 0) = f(x, y, equation);
    }
    return labels;
}
Now, instead of labelData, we're going to return a Mat of pixel data. One obvious way is to use the image itself as a feature vector. However, some machine learning algorithms in OpenCV, including ANN, SVM etc., require special formatting of the input data.
You may try something like this:
cv::Mat trainingData(cv::Mat image)
{
    cv::Mat trainingVector(image.rows*image.cols, 1, CV_32FC1);
    for(int i = 0; i < image.rows; i++)
    {
        for(int j = 0; j < image.cols; j++)
        {
            float valueOfPixel = image.at<float>(i,j);
            trainingVector.at<float>((i*image.cols)+j, 0) = valueOfPixel;
        }
    }
    return trainingVector;
}
(Please recheck the syntax of the code before using, I just typed it out here)
So, what the above block effectively does is change the 2D matrix of the image into a 1D array. Now, how and where you use it depends on your requirements.
Please make necessary modifications before invoking the machine learning modules.
Hope this answers your question.
Thanks.

Finding all objects in an image based on color

I am looking for a way to take an image and get masks of all objects in it by color. My goal is to be able to separate similarly colored objects into layers so I can further examine each layer. The plan is to use each mask against the original image to create a histogram of the colors in each object and determine the similarity with other objects in the image. If something is similar enough it will be combined with other objects to form a layer.
The problem is that I can not find a function in opencv to find all objects in an image based on color contiguity. I am sure such an algorithm exists, but it seems to be evading me. Does anyone know of an algorithm or function like this?
The best method that I have found is k-means clustering. This separates the image into different layers based on color by clustering the pixel values into k groups. With this I am able to effectively split the image into several layers that are of similar color.
#define numClusters 7

cv::Mat src = cv::imread("img0.png");
cv::Mat kMeansSrc(src.rows * src.cols, 3, CV_32F);

//resize the image to src.rows*src.cols x 3
//cv::kmeans expects an image that is in rows with 3 channel columns
//this rearranges the image into (rows * columns, numChannels)
for( int y = 0; y < src.rows; y++ )
{
    for( int x = 0; x < src.cols; x++ )
    {
        for( int z = 0; z < 3; z++)
            kMeansSrc.at<float>(y + x*src.rows, z) = src.at<Vec3b>(y,x)[z];
    }
}

cv::Mat labels;
cv::Mat centers;
int attempts = 2;

//perform kmeans on kMeansSrc where numClusters is defined previously as 7
//end either when desired accuracy is met or the maximum number of iterations is reached
cv::kmeans(kMeansSrc, numClusters, labels, cv::TermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 8, 1), attempts, KMEANS_PP_CENTERS, centers );

//create an array of numClusters colors
int colors[numClusters];
for(int i = 0; i < numClusters; i++) {
    colors[i] = 255/(i+1);
}

std::vector<cv::Mat> layers;
for(int i = 0; i < numClusters; i++)
{
    layers.push_back(cv::Mat::zeros(src.rows,src.cols,CV_32F));
}

//use the labels to draw the layers
//using the array of colors, draw the pixels onto each label image
for( int y = 0; y < src.rows; y++ )
{
    for( int x = 0; x < src.cols; x++ )
    {
        int cluster_idx = labels.at<int>(y + x*src.rows,0);
        layers[cluster_idx].at<float>(y, x) = (float)(colors[cluster_idx]);
    }
}

std::vector<cv::Mat> srcLayers;

//each layer masks a portion of the original image
//this leaves us with sections of similar color from the original image
for(int i = 0; i < numClusters; i++)
{
    layers[i].convertTo(layers[i], CV_8UC1);
    srcLayers.push_back(cv::Mat());
    src.copyTo(srcLayers[i], layers[i]);
}
I suggest you convert the image to the HSV space (Hue-Saturation-Value). Then make a histogram of the hue values to find thresholds online, or define them beforehand (depending on whether this is a general problem or a specific one).
Create one-channel images for each layer you want to form (set them to black).
Then use the HSV image and mark each layer based on the threshold values. You might want to add constant thresholds for value and saturation too (to avoid dark and light areas).
Does this make sense to you?
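A rough sketch of that idea (the hue band width and the saturation/value floors are illustrative values of mine):

#include <opencv2/opencv.hpp>
#include <vector>

// Split a BGR image into per-hue masks; H is in [0,180) for 8-bit images.
std::vector<cv::Mat> hueLayers(const cv::Mat &bgr, int bands)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, CV_BGR2HSV);

    std::vector<cv::Mat> masks;
    int bandWidth = 180 / bands;
    for (int i = 0; i < bands; ++i) {
        cv::Mat mask;
        // keep pixels in this hue band, with minimum saturation and value
        // to avoid nearly-gray and nearly-dark pixels
        cv::inRange(hsv,
                    cv::Scalar(i * bandWidth,       50,  50),
                    cv::Scalar((i + 1) * bandWidth, 255, 255),
                    mask);
        masks.push_back(mask);
    }
    return masks;
}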
I think you should proceed with the following process (a rough sketch follows the list):

1. Smooth your image if it has too many details.
2. Find edges.
3. Find all contours.
4. Determine the color of each contour. Let's say you want to keep all contours which are red, so keep only those contours which are red.
5. Once you have found the contours you want to keep, create a mask image based on them.
6. Using the mask image, extract the required objects from the original image.
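A rough sketch of this pipeline (the function name and all thresholds are illustrative, not from the answer; "red" is decided here from the mean color inside each contour):

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat extractRedObjects(const cv::Mat &src)
{
    // 1. smooth
    cv::Mat blurred;
    cv::GaussianBlur(src, blurred, cv::Size(5, 5), 1.5);

    // 2. edges
    cv::Mat gray, edges;
    cv::cvtColor(blurred, gray, CV_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);

    // 3. contours
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(edges, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // 4./5. keep contours whose mean color is "red enough" and draw them into a mask
    cv::Mat mask = cv::Mat::zeros(src.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); ++i) {
        cv::Mat contourMask = cv::Mat::zeros(src.size(), CV_8UC1);
        cv::drawContours(contourMask, contours, (int)i, cv::Scalar(255), CV_FILLED);
        cv::Scalar meanColor = cv::mean(src, contourMask); // BGR order
        if (meanColor[2] > 100 && meanColor[2] > meanColor[0] && meanColor[2] > meanColor[1])
            mask |= contourMask;
    }

    // 6. extract the kept objects from the original image
    cv::Mat result;
    src.copyTo(result, mask);
    return result;
}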

How to use clustering with OpenCV C++ to classify connected components based on area and height

Hi, using OpenCV C++, I want to use clustering to classify connected components based on area and height.
I understand the concept of clustering but I have a hard time implementing it in OpenCV C++.
In OpenCV there is a clustering method, kmeans:
http://docs.opencv.org/modules/core/doc/clustering.html
Most of the websites I searched just explain the concept and parameters of the kmeans function in OpenCV C++, and most of them were copied from the OpenCV documentation.
double kmeans(InputArray data, int K, InputOutputArray bestLabels, TermCriteria criteria, int attempts, int flags, OutputArray centers=noArray() )
There is also a good example here, but it is implemented in Python:
http://docs.opencv.org/trunk/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.html?highlight=kmeans
As I mentioned above, I have all the connected components and I can calculate the area and height of each one.
I want to use clustering to distinguish between the connected components.
For instance, with the k-means method I would use k=2.
Thanks.
I am posting a snippet; hope this will help you.
The height and area of each component can be used as features for kmeans. For each feature, kmeans will give you a center value, i.e. one center value for the area and one for the height of the component.
Mat labels;
int attempts = 11;
Mat centers;
int no_of_features = 2; // (i.e. height, area)
Mat samples(no_of_connected_components, no_of_features, CV_32F);
int no_of_sub_classes = 1; // vary for more sub classes

for (int j = 0; j < no_of_connected_components; j++)
{
    for (int x = 0; x < no_of_features; x++)
    {
        // fill in the values (height, area) from the connected component labelling
        samples.at<float>(j, x) = connected_component_values[j][x];
    }
}

cv::kmeans(samples, no_of_sub_classes, labels, TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10000, 0.001), attempts, KMEANS_PP_CENTERS, centers);

for (size_t si_i = 0; si_i < no_of_sub_classes; si_i++)
{
    for (size_t si_j = 0; si_j < no_of_features; si_j++)
    {
        KmeanTable[si_i*no_of_sub_classes + si_i][si_j] = centers.at<float>(si_i, si_j);
    }
}
Here I am storing the centers in the KmeanTable 2D array; you can use your own. Now, for each connected component, you can calculate the Euclidean distance from the centers.
The center with the lower distance determines the classification.
Check this out, except that instead of iterating over x, y, and z you'll iterate over components and properties (area and height).
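A hedged sketch of that idea (it assumes OpenCV 3's cv::connectedComponentsWithStats on a binary image with at least K components; names and parameters are illustrative):

#include <opencv2/opencv.hpp>

// Cluster connected components by (height, area) with k-means.
void clusterComponents(const cv::Mat &binary, int K)
{
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(binary, labels, stats, centroids);

    // one row per component (label 0 is the background), two features
    cv::Mat samples(n - 1, 2, CV_32F);
    for (int i = 1; i < n; ++i) {
        samples.at<float>(i - 1, 0) = (float)stats.at<int>(i, cv::CC_STAT_HEIGHT);
        samples.at<float>(i - 1, 1) = (float)stats.at<int>(i, cv::CC_STAT_AREA);
    }

    cv::Mat clusterIdx, centers;
    cv::kmeans(samples, K, clusterIdx,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 100, 0.01),
               5, cv::KMEANS_PP_CENTERS, centers);

    // clusterIdx.at<int>(j) is now the cluster of component j+1,
    // and centers holds one (height, area) center per cluster
}

With K=2, as mentioned in the question, this splits the components into two groups by size.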