OpenCV 2.4.11 SVM Prediction from Test Images - C++

I am currently working on training and testing grayscale images. So far I've trained the images using the svm.train() method.
However, I fail at testing the images. This is my code for testing the images so far:
for (int i = 0; i < test_files.size(); i++){
    temporary_image = imread(test_files[i], 0);
    Mat image1d(1, temporary_image.cols, CV_32FC1);
    //Mat row_image = temporary_distance.reshape(1, 1);
    float result = svm.predict(image1d);
    printf("\n%f\n", result);
}
Could you please tell me how I can fix the problem?
svm.predict(image1d) is the call that gives the error.
Whether I write float result = svm.predict(image1d) or just svm.predict(image1d), the same problem occurs.
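For reference, a minimal sketch of how the loop body is typically prepared for svm.predict in OpenCV 2.4, assuming the SVM was trained on grayscale images flattened into single CV_32FC1 rows (note that image1d in the code above is allocated but never filled with pixel data, and its length must match the training feature length):
for (int i = 0; i < test_files.size(); i++){
    Mat temporary_image = imread(test_files[i], 0);  // 8-bit grayscale
    Mat row_image = temporary_image.reshape(1, 1);   // flatten to 1 x (rows*cols)
    Mat image1d;
    row_image.convertTo(image1d, CV_32FC1);          // CvSVM expects float features
    float result = svm.predict(image1d);
    printf("\n%f\n", result);
}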
Before asking this question I read:
Error on SVM using images
using OpenCV and SVM with images

Related

OpenCV DNN face detection in UWP/C++: bad results

I'm using OpenCV and Caffe to perform face detection on some images I receive from a stream. First, I tried it with Python:
prototxt_file = 'deploy.prototxt'
weights_file = 'res10_300x300_ssd_iter_140000.caffemodel'
dnn = cv2.dnn.readNetFromCaffe(prototxt_file, weights_file)
for image in images:
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0, (300, 300),
                                 (104.0, 177.0, 123.0))
    dnn.setInput(blob)
    detections = dnn.forward()
    for i in range(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        box = detections[0, 0, i, 3:7]
        if confidence > 0.5:
            # Do something
            pass
This works quite well. Now, I want to do the same within a C++ Windows UWP App, so I compiled OpenCV from source for UWP (tried with versions 3.4.1 and 4.3.0). After going through this example I tried the following:
std::string caffeConfigFilePath = "deploy.prototxt";
std::string caffeWeightFilePath = "res10_300x300_ssd_iter_140000.caffemodel";
net = cv::dnn::readNetFromCaffe(caffeConfigFilePath, caffeWeightFilePath);
for (const auto& image : images)
{
    cv::Mat imageResized, imageBlob;
    std::vector<cv::Mat> outs;
    cv::resize(image, imageResized, cv::Size(300, 300));
    cv::dnn::blobFromImage(imageResized, imageBlob, 1, cv::Size(300, 300),
                           (104.0, 177.0, 123.0));
    net.setInput(imageBlob, "data");
    net.forward(outs, "detection_out");
    CV_Assert(outs.size() > 0);
    for (size_t k = 0; k < outs.size(); k++)
    {
        float* data = (float*)outs[k].data;
        for (size_t i = 0; i < outs[k].total(); i += 7)
        {
            float confidence = data[i + 2];
            if (confidence > 0.5)
            {
                //Do something
            }
        }
    }
}
This gives me very bad results. I get a lot of detections with a confidence of 1.0, covering the entire image. The face itself, however, is not detected. So I thought I might be reading the output wrong. I also tried the code posted with this question, but the results are the same. I checked everything I could think of (input images in the right format, model correctly loaded, etc.) but could not identify the error.
Since the DNN module is usually not included in an OpenCV UWP build (I had to comment out some lines in the CMake files, but then it compiled without errors), can it be that using it is just not possible from a UWP app? What else could be the reason the code works in Python, but almost identical code does not work in C++?
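One detail worth double-checking (an observation, not a verified fix for this case): in the C++ call above, (104.0, 177.0, 123.0) is evaluated with the comma operator, so only 123.0 is passed as the mean, whereas the Python version passes the full per-channel mean. The corrected call would look like this:
// The mean must be an explicit cv::Scalar; a bare parenthesized list is the
// comma operator in C++ and collapses to its last value.
cv::dnn::blobFromImage(imageResized, imageBlob, 1.0, cv::Size(300, 300),
                       cv::Scalar(104.0, 177.0, 123.0));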

OpenCV SimpleBlobDetector does not find all blobs. C++, VS2015

I have a simple task for OpenCV's SimpleBlobDetector:
cv::SimpleBlobDetector::Params params;
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
std::vector<cv::KeyPoint> keypoints;
detector->detect(crop, keypoints);
drawKeypoints(crop, keypoints, crop, cv::Scalar(0, 0, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
cv::imshow("crop", crop);
cv::waitKey(0);
It is not detecting half of the blobs in my image.
Please see the picture below.
I tried adding parameters and varying them; at no point has it ever detected every single blob.
Blob detection is a simple and straightforward algorithm that should be completely refined in every image processing API. Is this not the case with OpenCV?
//params.minThreshold = 0;
//params.maxThreshold = 255;
//params.filterByArea = true;
//params.minArea = 1000;
//params.maxArea = 5000;
//params.filterByCircularity = true;
//params.minCircularity = 0.4;
//params.filterByConvexity = true;
//params.minConvexity = 0.87;
//params.filterByInertia = true;
//params.minInertiaRatio = 0.71;
I'm using either OpenCV 3.3 or 3.2; I can't seem to find the version number in the sources.
I'm not sure if this properly answers my question, but I had to write my own blob detection; it appears that OpenCV's SimpleBlobDetector is not so simple.
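For anyone hitting the same issue, here is a hedged sketch of the kind of parameter setup that often recovers missed blobs; the values and flags are illustrative and would need tuning to the actual image (by default the detector looks for dark blobs and filters by area):
cv::SimpleBlobDetector::Params params;
// Widen the internal thresholding range (illustrative values).
params.minThreshold = 10;
params.maxThreshold = 250;
// Also accept bright blobs; the default blobColor of 0 matches dark blobs only.
params.filterByColor = true;
params.blobColor = 255;
// Relax or disable the shape filters that silently reject blobs.
params.filterByArea = true;
params.minArea = 100;
params.maxArea = 100000;
params.filterByCircularity = false;
params.filterByConvexity = false;
params.filterByInertia = false;
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);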

Image is multiplied three times in OpenCV, what causes this?

I have one grayscale image, which is just the R channel of a photo. Now I'm trying to write that R channel into a new image, which is an RGB image. Ideally, the new image would look just like the old image, but red.
What happens, though, is that in the new image the old image appears three times, squished next to each other.
Here you can see the gray scale image and the output image.
Here is my code; I think it's pretty straightforward:
Mat img_in = imread("in.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat img_out = Mat::zeros(img_in.size(), CV_8UC3);
for (int i = 0; i < img_in.rows; i++)
{
    for (int j = 0; j < img_in.cols; j++)
    {
        img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
    }
}
imwrite("test_img_in.png", img_in);
imwrite("test_img_out.png", img_out);
At first I thought it was some kind of indices mixup, but I've tried a lot of combinations, and it always multiplies the output image three times horizontally, never vertically.
Now my thought is that it comes from some OpenCV specification, like the CV_8UC3 type (I've tried others too), which I've chosen because I think it supports RGB images. Unfortunately, I don't know too much about OpenCV itself, which is why I'm seeking help here.
PS: This is part of a whole bigger program which wants to generate a color image from three gray scale channel images, but I'm currently stuck on combining the aligned gray scale images, since this happens. The code I posted is isolated from the rest of the program and works like this on its own.
My OpenCV version is 2.4.11.
The problem is here:
img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
As you said, the input image is grayscale, so it is a single-channel 8-bit Mat. Reading it with at<Vec3b> strides three bytes per column instead of one, which is why the content gets squeezed and repeated three times across the output. So, just use:
img_out.at<Vec3b>(i,j)[2] = img_in.at<unsigned char>(i,j);
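As a side note, OpenCV stores color Mats in BGR order, so index [2] is indeed the red channel. If it helps, here is a loop-free sketch of the same idea using cv::merge (assuming img_in is the 8-bit single-channel image from the question):
// Build a BGR image whose red channel is the grayscale input.
std::vector<cv::Mat> channels = {
    cv::Mat::zeros(img_in.size(), CV_8UC1), // blue
    cv::Mat::zeros(img_in.size(), CV_8UC1), // green
    img_in                                  // red
};
cv::Mat img_out;
cv::merge(channels, img_out);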
You will get the same result by loading your image as a 3-channel image and subtracting Scalar(255,255,0), which saturates the blue and green channels to zero:
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char **argv)
{
    Mat src = imread(argv[1]);
    imshow("src", src);
    src -= Scalar(255,255,0);
    imshow("Red channel", src);
    waitKey();
    return 0;
}

Unable to stitch images via OpenCV in C++

I need to stitch a few images using OpenCV in C++, so I wrote the following code:
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <cstdio>
#include <vector>
int main()
{
    std::vector<cv::Mat> vImg;
    cv::Mat rImg;
    vImg.push_back(cv::imread("./stitching_img/S1.png"));
    vImg.push_back(cv::imread("./stitching_img/S2.png"));
    vImg.push_back(cv::imread("./stitching_img/S3.png"));
    cv::Stitcher stitcher = cv::Stitcher::createDefault();
    unsigned long AAtime = 0, BBtime = 0;
    AAtime = cv::getTickCount();
    cv::Stitcher::Status status = stitcher.stitch(vImg, rImg);
    BBtime = cv::getTickCount();
    printf("%.2lf sec \n", (BBtime - AAtime) / cv::getTickFrequency());
    if (cv::Stitcher::OK == status)
        cv::imshow("Stitching Result", rImg);
    else
        std::printf("Stitching fail.");
    cv::waitKey(0);
    return 0;
}
Unfortunately, it always says "Stitching fail" on the following files -- http://imgur.com/a/32ZNS while it works on these files -- http://imgur.com/a/ve5sY
What am I doing wrong? How can I fix it?
Thanks in advance.
cv::Stitcher works by finding common features in the separate images and using those to figure out how the images fit together. In your samples where the stitching works you can find a lot of overlap: the blue roof, the features of the buildings across the road, etc.
In the set where it fails for you, there is no overlap, so the algorithm can't figure out how to fit them together. It seems like you can 'stitch' these images just by putting them next to each other. For this you can use hconcat, as described in this answer: https://stackoverflow.com/a/20079134/1737727
There is a very simple way of displaying two images side by side, using the hconcat function provided by OpenCV:
Mat image1, image2;
hconcat(image1, image2, image1); // Syntax: hconcat(source1, source2, destination);
This function can also be used to copy a set of columns from an image to another image.
Mat image;
Mat columns=image.colRange(20,30);
hconcat(image,columns,image);
vconcat is a similar function for stitching images vertically.
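Applied to the original question, here is a hedged sketch of that approach (it assumes all three loaded images share the same height and type, which hconcat requires):
// Concatenate the already-loaded images side by side instead of stitching them.
cv::Mat combined;
cv::hconcat(vImg, combined); // overload that takes a vector of Mats
cv::imshow("Side by side", combined);
cv::waitKey(0);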

Updating OpenCV CvNormalBayesClassifier

I'm trying to use CvNormalBayesClassifier to train my program to learn skin pixel colors. I have a set of training images and response images. The response images are black and white; skin regions are marked white. The following is my code:
CvNormalBayesClassifier classifier;
for (int i = 0; i < numFiles; i++) {
    string trainFile = "images/" + int2str(i) + ".jpg";
    string responseFile = "images/" + int2str(i) + "_mask.jpg";
    Mat trainData = imread(trainFile, 1);
    Mat responseData = imread(responseFile, CV_LOAD_IMAGE_GRAYSCALE);
    trainData = trainData.reshape(1, trainData.rows * trainData.cols);
    responseData = responseData.reshape(0, responseData.rows * responseData.cols);
    trainData.convertTo(trainData, CV_32FC1);
    responseData.convertTo(responseData, CV_32FC1);
    classifier.train(trainData, responseData, Mat(), Mat(), i != 0);
}
However, it gives the following error:
The function/feature is not implemented (In the current implementation the new training data must have absolutely the same set of class labels as used in the original training data) in CvNormalBayesClassifier::train
Many thanks.
As the error message states, you cannot 'update' the classifier in light of new class labels. The Normal Bayes Classifier learns a Mixture of Gaussians to represent the training data. If you suddenly start adding new labels this mixture model will cease to be correct and a new model must be learned from scratch.
OK, I found that the problem was that the black-and-white mask images had been saved with JPEG compression and therefore contain intermediate values between 0 and 255, not just 0 and 255. Since each distinct response value is treated as a class label, a later image can introduce a class label that was not present in the original training data.
To solve this, threshold the response images so that every value becomes either 0 or 255.
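A hedged sketch of that fix, applied right after loading the mask inside the loop above (the threshold value 127 is just an illustrative midpoint):
// Force the mask to be strictly binary so only two class labels (0 and 255) exist.
Mat responseData = imread(responseFile, CV_LOAD_IMAGE_GRAYSCALE);
threshold(responseData, responseData, 127, 255, THRESH_BINARY);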